Think Big, Work Small

How does your team work?

Some companies only work big. Large, prescriptive projects with no incremental value delivery, no experimentation, and infrequent integration. Efforts advance at a glacial pace. Scope expands to fill the available time, and then some. And the work “on the roadmap”? That work gets bigger and bigger as well through a cycle of disappointment, fear, and planning overcompensation. 

Some companies only work small. They sprint in circles. The work lacks coherence and feels scattershot. There’s a perception of progress, but looking back the team sees a lot of disjointed, reactive work. The resulting experience is incomplete and imbalanced. But management applauds the team(s) for being responsive, and the cycle continues. Plus… more features to sell! 

Some companies define big, and work small. Large, prescriptive projects get broken into many small pieces. Think of this as a combination of #1 and #2. The team works small and integrates frequently, which reduces risk and accentuates progress. But there is little room to respond to learning and feedback. Design work is more set in stone, and less strategic. Like a big Lego set, the team places tiny pieces according to the plan. What if the finished Lego set is the wrong Lego set? What if 20% of the work represented 80% of the value? Then again, the team is applauded for finishing “big things”.

Finally, we have thinking big, and working small. The team rallies behind a compelling mission linked to a coherent strategy. The mission is outcome/impact oriented. The team contemplates a vision for the holistic experience but works with broad strokes. They sequence work with the riskiest assumptions first — experimenting, testing, and learning. This is not a ship-and-forget or a ship-and-maybe-in-a-year-we-come-back operation.

For the first three ways of working, note how incentives can hold them in place. Big prescriptive projects look bold and compelling. High velocity is intoxicating. Rapid progress on big prescriptive projects… exciting!

Thinking big, and working small is more nuanced. There’s more acceptance of uncertainty. There’s less set in stone. There’s an art to framing a mission to leave space for creativity, while also capturing the opportunity. 

Start by figuring out where your company tends to work right now. If you only work big, then start working small. If you only work small, maybe start by defining the bigger thing. If you’re working small and defining big things prescriptively, then start easing off that level of prescriptiveness and focus on missions and strategies.

From The Beautiful Mess – Think Big, Work Small

Better Experiments/Pilots

Scoping a pilot or experiment is often skipped, and before you know it you have a production solution masquerading as a pilot. This article from John Cutler helps.

  1. Set aside ~90 minutes.
  2. Pick a problem or observation. 
  3. Read and discuss the dimensions described below. For each dimension, brainstorm example experiments representing the “extremes”. These don’t need to be real. Have fun.
  4. Optionally (as demonstrated with L+ and R+), chat about how the extremes could be considered positive.
  5. Return to the problem or observation. Ask individuals to brainstorm 1-3 candidate experiments to address that problem or observation. 
  6. Ask team members to individually describe each candidate experiment using the ranges below.
  7. As a group, discuss each experiment, and where each team member placed each experiment.
  8. Finally, ask team members to dot vote on the best-fit experiment (for the given context). Discuss ranking. Ideally, pick an experiment to try.

Local | Global

How containable (or localized) is the experiment?

L+: Localized impact, fewer dependencies, less visibility/oversight/meddling.

R+: Broader impact, more support, more visibility.

Flexible | Rigid

Will it be possible to pivot the experiment on the fly?

L+: May be easier to sustain. More adaptable to changing environments and new information.

R+: May be easier to understand, teach, support, and promote.

Short Duration | Long Duration

How long must the experiment last to provide meaningful information?

L+: Less disruptive. Easier to pitch. Faster feedback.

R+: More time to “simmer” and pick up steam. Commitment.

Invitation | Imposition

Will the participants be invited to take part in the experiment, or will the experiment be imposed?

L+: More intrinsic motivation. More vested in outcome. “Advocates for life!”

R+: Speed. Less need to “sell” change. 

Small Shift | Large Shift

Will the experiment represent a small change from how things currently work, or will it feel foreign and new? Perhaps different participants will experience different degrees of change.

L+: Easier. Less disruptive. More potential to “pick up momentum”.

R+: “Get it over with”. Less chance of getting stuck in local maximum.

Self-powering | Requires “fuel” & external support

Can the experiment sustain itself without outside support and resources, or will it require external support?

L+: Independent. Easier. Can be sustained indefinitely.

R+: Involves and “vests” broader group in the effort. 

Value in 2nd/3rd order effects | Risk in 2nd/3rd order effects

Second and third order effects are common when running an experiment. Is the experiment expected to “throw off” potentially valuable 2nd/3rd order effects? 

L+: Discover valuable things!

R+: Risk may be necessary to explore new areas of uncertainty.

Fewer dependencies, lower blast radius | More dependencies, higher blast radius

How independent/dependent is the experiment on other things (people, projects, systems, processes, etc.) in the org?

L+: Independent. More degrees of freedom. Less constrained.

R+: Potentially more impactful. Potentially more involvement and support.

Shorter feedback loops | Longer feedback loops

How easily and quickly can we get feedback?

L+: Can respond more quickly. Can pivot experiment more quickly.

R+: May be less noisy. May provide “deeper” or more cohesive information.

Low threat to formal structures/incentives | Challenges formal structures/incentives

Does the experiment represent a threat to formal power/incentive structures?

L+: Can fly under the radar. Considered “safe” and non-threatening.

R+: May be more likely to test (and change) formal power/incentive structures.
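
The dimensions above can double as a simple scoring rubric for steps 6 and 7 of the exercise. Below is a minimal sketch in Python; the dimension names come from this list, while the team members, scores, and the "flag wide disagreement" heuristic are invented for illustration.

# Minimal sketch: record where each team member places a candidate
# experiment on the ranges above (0 = far left, 10 = far right), then
# surface the dimensions where placements disagree most. All names and
# scores are illustrative, not from the original article.
from statistics import mean, stdev

DIMENSIONS = [
    "Local | Global",
    "Flexible | Rigid",
    "Short Duration | Long Duration",
    "Invitation | Imposition",
    "Small Shift | Large Shift",
    "Self-powering | Requires fuel & external support",
    "Value in 2nd/3rd order effects | Risk in 2nd/3rd order effects",
    "Fewer dependencies | More dependencies",
    "Shorter feedback loops | Longer feedback loops",
    "Low threat to formal structures | Challenges formal structures",
]

# placements[member] = one 0-10 score per dimension, in the order above
placements = {
    "Sam":   [2, 3, 1, 0, 2, 1, 3, 2, 1, 2],
    "Priya": [4, 2, 2, 1, 5, 3, 4, 3, 2, 6],
    "Lee":   [3, 6, 1, 0, 3, 2, 2, 2, 1, 3],
}

for i, dim in enumerate(DIMENSIONS):
    scores = [member_scores[i] for member_scores in placements.values()]
    spread = stdev(scores)
    note = "  <- worth discussing: wide disagreement" if spread > 2 else ""
    print(f"{dim:<62} mean={mean(scores):.1f} spread={spread:.1f}{note}")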

From The Beautiful Mess – Better Experiments

Six Thinking Hats

Six Thinking Hats was written by Dr. Edward de Bono. “Six Thinking Hats” and the associated idea of parallel thinking provide a means for groups to plan thinking processes in a detailed and cohesive way, and in doing so to think together more effectively.

Although this can be used in groups to aid thinking, assigning each person a role has felt restrictive for me. A better approach: the group assumes all roles at the same time, so you move through the different hats together. I also find it useful when thinking on my own, as it avoids falling into favouring your own idea and focussing on the yellow and blue hats while missing the others.

Wardley Maps

A Wardley map is a map of the structure of a business or service, mapping the components needed to serve the customer or user. Wardley maps are named after Simon Wardley, who claims to have created them in 2005. This form of mapping was first used at Fotango (a British company) in 2005, then within Canonical UK between 2008 and 2010, and components of mapping can be found in the Better For Less paper published in 2010.

Each component in a Wardley map is classified by the value it has to the customer or user and by the maturity of that component, ranging from custom-made to commodity. Components are drawn as nodes on a graph with value on the y-axis and commodity on the x-axis. A custom-made component with no direct value to the user would sit at the bottom-left of such a graph while a commodity component with high direct value to the user would sit at the top-right of such a graph. Components are connected on the graph with edges showing that they are linked.
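
To make that structure concrete, here is a small illustrative sketch in Python (not part of any Wardley tooling; the component names, coordinates, and dependencies are invented) representing components as nodes with a value position and an evolution position, linked by edges.

# Illustrative sketch of a Wardley map as a graph. Each component holds a
# position on the value axis (y: how visible/valuable it is to the user)
# and the evolution axis (x: from custom-made toward commodity).
# All component names, coordinates, and edges are invented.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    value: float      # y-axis: 0 = invisible to the user, 1 = direct user need
    evolution: float  # x-axis: 0 = custom-made/genesis, 1 = commodity

components = {
    "online store": Component("online store", value=1.0, evolution=0.6),
    "payment":      Component("payment", value=0.7, evolution=0.9),
    "recommender":  Component("recommender", value=0.8, evolution=0.1),
    "compute":      Component("compute", value=0.2, evolution=0.95),
}

# Edges link each component to the components it depends on.
edges = [
    ("online store", "payment"),
    ("online store", "recommender"),
    ("recommender", "compute"),
]

for upstream, downstream in edges:
    a, b = components[upstream], components[downstream]
    print(f"{a.name} (value={a.value}, evolution={a.evolution}) -> "
          f"{b.name} (value={b.value}, evolution={b.evolution})")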

Much of the theory of Wardley mapping is set out in a series of 19 blog posts and a dedicated wiki called Wardleypedia. As use of the technique has broadened to new institutions and been applied to map new things, its application in practice has drifted from the original vision.

Wardley Map Example


Architecture Decision Record


What is an architecture decision record?

An architecture decision record (ADR) is a document that captures an important architectural decision made along with its context and consequences.

An architecture decision (AD) is a software design choice that addresses a significant requirement.

An architecture decision log (ADL) is the collection of all ADRs created and maintained for a particular project (or organization).

An architecturally-significant requirement (ASR) is a requirement that has a measurable effect on a software system’s architecture.

All these are within the topic of architecture knowledge management (AKM).

The goal of this document is to provide a fast overview of ADRs, how to create them, and where to look for more information.

Abbreviations:

  • AD: architecture decision
  • ADL: architecture decision log
  • ADR: architecture decision record
  • AKM: architecture knowledge management
  • ASR: architecturally-significant requirement

How to start using ADRs

To start using ADRs, talk with your teammates about these areas.

Decision identification:

  • How urgent and how important is the AD?
  • Does it have to be made now, or can it wait until more is known?
  • Both personal and collective experience, as well as recognized design methods and practices, can assist with decision identification.
  • Ideally maintain a decision todo list that complements the product todo list.

Decision making:

  • A number of decision-making techniques exist, both general ones and software-architecture-specific ones, for instance dialogue mapping.
  • Group decision making is an active research topic.

Decision enactment and enforcement:

  • ADs are used in software design; hence they have to be communicated to, and accepted by, the stakeholders of the system that fund, develop, and operate it.
  • Architecturally evident coding styles and code reviews that focus on architectural concerns and decisions are two related practices.
  • ADs also have to be (re-)considered when modernizing a software system in software evolution.

Decision sharing (optional):

  • Many ADs recur across projects.
  • Hence, experiences with past decisions, both good and bad, can be valuable reusable assets when employing an explicit knowledge management strategy.
  • Group decision making is an active research topic.

Decision documentation:

  • Many templates and tools for decision capturing exist.
  • See agile communities, e.g. M. Nygard’s ADRs.
  • See traditional software engineering and architecture design processes, e.g. table layouts suggested by IBM UMF and by Tyree and Akerman from CapitalOne.

Decision guidance:

  • The steps above are adopted from the Wikipedia entry on Architectural Decision
  • A number of decision-making techniques exist, both general ones and software-architecture-specific ones, for instance dialogue mapping.

How to start using ADRs with tools

You can start using ADRs with tools any way you want.

For example:

  • If you like using Google Drive and online editing, then you can create a Google Doc, or Google Sheet.
  • If you like using source code version control, such as git, then you can create a file for each ADR.
  • If you like using project planning tools, such as Atlassian Jira, then you can use the tool’s planning tracker.
  • If you like using wikis, such as MediaWiki, then you can create an ADR wiki.

How to start using ADRs with git

If you like using git version control, then here is how we like to start using ADRs with git for a typical software project with source code.

Create a directory for ADR files:

$ mkdir adr

For each ADR, create a text file, such as database.txt:

$ vi database.txt

Write anything you want in the ADR. See the templates in this repository for ideas.

Commit the ADR to your git repo.

ADR file name conventions

If you choose to create your ADRs using typical text files, then you may want to come up with your own ADR file name convention.

We prefer to use a file name convention that has a specific format.

Examples:

  • choose_database.md
  • format_timestamps.md
  • manage_passwords.md
  • handle_exceptions.md

Our file name convention:

  • The name has a present tense imperative verb phrase. This helps readability and matches our commit message format.
  • The name uses lowercase and underscores (same as this repo). This is a balance of readability and system usability.
  • The extension is markdown. This can be useful for easy formatting.
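
As an illustration of the convention above, here is a small Python sketch. The helper and its wording are our own (not part of any ADR tooling), and the section layout loosely follows M. Nygard’s widely used template; it scaffolds a new ADR file with a convention-conforming name.

# Sketch of a helper that creates an ADR file following the naming
# convention above (imperative verb phrase, lowercase, underscores, .md)
# and a Nygard-style section layout. The helper name and wording are our
# own; swap in whichever template your team prefers.
from pathlib import Path

TEMPLATE = """# {title}

## Status

Proposed

## Context

What forces are at play (technical, organizational, project-local)?

## Decision

State the decision in full sentences, active voice ("We will ...").

## Consequences

What becomes easier or harder as a result of this decision?
"""

def new_adr(title: str, adr_dir: str = "adr") -> Path:
    """Create an ADR file such as adr/choose_database.md."""
    slug = title.strip().lower().replace(" ", "_")
    Path(adr_dir).mkdir(parents=True, exist_ok=True)
    path = Path(adr_dir) / f"{slug}.md"
    path.write_text(TEMPLATE.format(title=title))
    return path

if __name__ == "__main__":
    print(new_adr("Choose database"))  # -> adr/choose_database.md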

Suggestions for writing good ADRs

Characteristics of a good ADR:

  • Point in Time – Identify when the AD was made
  • Rationality – Explain the reason for making the particular AD
  • Immutable record – The decisions made in a previously published ADR should not be altered
  • Specificity – Each ADR should be about a single AD

Characteristics of a good context in an ADR:

  • Explain your organization’s situation and business priorities
  • Include rationale and considerations based on social and skills makeups of your teams

Characteristics of good Consequences in an ADR:

  • Right approach – “We need to start doing X instead of Y”
  • Wrong approach – Do not explain the AD in terms of “Pros” and “Cons” of having made the particular AD

A new ADR may take the place of a previous ADR:

  • When an AD is made that replaces or invalidates a previous ADR, a new ADR should be created


From https://github.com/joelparkerhenderson/architecture_decision_record

Technical Debt

Software systems are prone to the build up of cruft – deficiencies in internal quality that make it harder than it would ideally be to modify and extend the system further. Technical Debt is a metaphor, coined by Ward Cunningham, that frames how to think about dealing with this cruft, thinking of it like a financial debt. The extra effort that it takes to add new features is the interest paid on the debt.

Martin Fowler’s description of Technical Debt

The metaphor of debt is sometimes used to justify neglecting internal quality. The argument is that it takes time and effort to stop cruft from building up. If there are new features that are needed urgently, then perhaps it’s best to take on the debt, accepting that this debt will have to be managed in the future.

The danger here is that most of the time this analysis isn’t done well. Cruft has a quick impact, slowing down the very new features that are needed quickly. Teams who do this end up maxing out all their credit cards, but still delivering later than they would have done had they put the effort into higher internal quality. Here the metaphor often leads people astray, as the dynamics don’t really match those for financial loans. Taking on debt to speed delivery only works if you stay below the design payoff line of the Design Stamina Hypothesis, and teams hit that line in weeks rather than months.

There are regular debates about whether different kinds of cruft should be considered debt or not. I found it useful to think about whether the debt is acquired deliberately and whether it is prudent or reckless – leading me to the Technical Debt Quadrant.

Technical Debt Quadrant

From https://martinfowler.com/bliki/TechnicalDebt.html.

See also The Human Cost of Technical Debt, which covers the human side of the problem. And make no mistake: in business, all human problems are also business problems, viewed with a wide enough lens. Unhappy humans are unhappy workers, and unhappy workers are less productive. Yet this angle of technical debt is seldom discussed, in my experience.

  • Unpleasant Work
  • Team Infighting
  • Atrophied Skills
  • The Hidden Business Cost: Turnover and Attrition

Charlie Kindel 5 P’s

Charlie Kindel – how to achieve focus in any endeavor you embark on by simply writing down the Purpose, Principles, Priorities, People, and Plan (the 5Ps). An endeavor could be a software development project, a job search, or a phase in your life. I have personally found the 5Ps a useful tool for small projects (e.g. prepping for a VC demo/presentation) as well as large-scale projects that include 1,000s of people.

  • Purpose: Why do we exist? Why are we in business? Where do we want to be in the future? What will we deliver?
  • Principles: What are the non-negotiable rules and key strategies? How will we act?
  • Priorities: What’s the framework for tradeoffs? In what order do you do things? How much mass or energy do you apply to each element of the plan? What is not important?
  • Plan: How are we going to stage and tackle solving the problems? What are the known dates & forcing functions on the calendar?
  • People: Who’s accountable for every key part of the plan?

From https://ceklog.kindel.com/2011/06/14/the-5-ps-achieving-focus-in-any-endeavor/