(F)acts, (A)ssumptions, (B)eliefs

Here’s an activity ANY team can do to stay more aligned.

It is fabulous. It is simple. But consider how often you and your team are misaligned on the foundational things underpinning your work.

FAB – Facts, Assumptions, Beliefs

1. Start a document

2. List incontrovertible FACTS relevant to your work.

For example:

  • Net Revenue Retention for [Profile] is currently ###%
  • [Competitor] recently launched [Product]
  • There’s a relationship between the housing market and interest rates

Provide links to the data/insights that support these facts.

3. List ASSUMPTIONS relevant to your work, along with something to indicate the current level of certainty. Indicate the relationship between the assumption and your current work.

For example:

We are operating on the assumption that [Profile] will not adopt the new product extensions in 2023 but will gradually adopt them in 2024.

  • Implications: decreased investment in 2023 with a ramp into 2024.
  • Confidence: medium-high (though falling)
  • Type: Operating (revisit on #/##/####)

We assume that a lack of expertise in [Segment] will hamper the shift to [New Technology]. 

  • Implications: New product initiatives centered around the expertise gap in [Segment]
  • Confidence: low (but increasing, with active research)
  • Type: Testing (report research on #/##/####)

Provide links to the data/insights that support these assumptions.

Note the difference between operating assumptions—things we are currently operating based on and will revisit—and assumptions we are testing.
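As a sketch of how these entries could be kept consistent (the field names and types here are my own, not part of the original activity), each assumption can be captured as structured data so confidence, type, and review date are never omitted:

```python
from dataclasses import dataclass, field
from enum import Enum


class AssumptionType(Enum):
    OPERATING = "operating"  # we act on it today and revisit on a schedule
    TESTING = "testing"      # we are actively researching it


@dataclass
class Assumption:
    statement: str
    implications: str
    confidence: str                # e.g. "low", "medium-high"
    kind: AssumptionType
    review_date: str               # when to revisit (operating) or report (testing)
    evidence: list[str] = field(default_factory=list)  # links to supporting data


# First example assumption from the text, as a record.
adoption = Assumption(
    statement="[Profile] will not adopt the new product extensions in 2023 "
              "but will gradually adopt them in 2024.",
    implications="Decreased investment in 2023 with a ramp into 2024.",
    confidence="medium-high",
    kind=AssumptionType.OPERATING,
    review_date="TBD",
)
```

An empty `evidence` list is itself a useful signal: the assumption still needs supporting links.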

4. List your BELIEFS. 

“Wait, aren’t those assumptions?” Yes, and no. Beliefs are foundational assumptions. They may span years or decades. By calling them beliefs, we are also acknowledging contrarian and untestable assumptions.

e.g.

  • We believe in a sea-change shift to [some practice]
  • We believe [technology] will eventually become a commodity

5. Once the document is established, create a ritual of regularly revisiting FAB. 

  1. Keep a history of the doc
  2. Review and update regularly
  3. Encourage your team to make notes in the document and batch up the feedback for the next meeting.

Bonus 1: FAB is fractal. There are company-wide FABs and team-level FABs. In an ideal world, you make all of these public and accessible.

Bonus 2: Note how some things remain stable (hopefully), while other things change all the time (not necessarily a bad thing). If EVERYTHING changes all the time, you should explore that. Why?

From https://cutlefish.substack.com/p/tbm-4652-facts-assumptions-beliefs

Linear Roadmaps are Misleading

Linear roadmaps are misleading without a crystal ball for seeing the future. A roadmap that recognizes the existence of risk as time goes on is more honest. But an effective PM needs to anticipate possible branches, too – and create clear criteria for following each path.

The second roadmap is a very good start, because it recognizes the existence of uncertainty. But that’s just the first step – the team should be actively working to reduce uncertainty. Make a plan: this is what we need to learn by this milestone to decide what to do next.

By being explicit about our plans for the future, we are not increasing risk or reducing agility. We open our ideas to critique and experimentation before they are set in stone, so that they can evolve with our understanding of opportunities.

It’s crucial to set success criteria at the beginning of the project: we know we were successful if this metric increased by that amount. But you should also have a plan for when it hasn’t, and you should find out well before the last minute whether or not it’s working.

The temporal component in diagram 3 is virtual. The steps are not timeboxed: we know that hypothesis A will be followed by different experiments depending on whether it was confirmed, or we know we may achieve our OKR and will need to do something after, but we don’t know when.

The straight lines might be misleading; the dots are not meant to represent every milestone, merely those decision points that are known to be likely ahead of time. You may wander for some time before you reach them!
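The branching structure described above can be sketched as data (the names and questions here are illustrative, not from the thread). Each decision point records what we need to learn and where each answer leads; note there are no timestamps, matching the point that the steps are not timeboxed:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DecisionPoint:
    """A known-likely branch point: what we need to learn, and where each answer leads."""
    question: str
    if_confirmed: Optional["DecisionPoint"] = None
    if_refuted: Optional["DecisionPoint"] = None

    def next(self, confirmed: bool) -> Optional["DecisionPoint"]:
        return self.if_confirmed if confirmed else self.if_refuted


# A tiny two-step possibility space; leaves have no known branches yet.
ship = DecisionPoint("Did the pilot lift the target metric by the agreed amount?")
pivot = DecisionPoint("Does a lighter-weight variant remove the friction we saw?")
root = DecisionPoint(
    "Is hypothesis A confirmed by the first experiment?",
    if_confirmed=ship,
    if_refuted=pivot,
)
```

Working backwards from each `question` is what ties the roadmap to learning rather than to a delivery schedule.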

The one reason you may want to put features in the branches is if you’ve been pressured by senior management to acknowledge their pet idea. Clearly show that while it’s within the possibility space, there are conditions for reaching it. If they’re not met, it won’t happen.

The great thing about strategic roadmaps is that working backwards from “what do we need to learn” helps easily tie your work to tangible goals. The definition of “good” becomes “does it help us answer the question?” rather than “did management like it?”

The roadmap tweet lit up on LinkedIn again so it’s time to publish a small clarifying update.

I’m seeing readers focus on one aspect of the branching paths: “how do we make a decision?” But IMO the other aspect is far more important: “how do we get where we want to go?”

From https://twitter.com/pavelasamsonov/status/1296818042928861184?s=61&t=rHrwF0m-o8nX1nk6oIOp6A

Think Big, Work Small

How does your team work?

Some companies only work big. Large, prescriptive projects with no incremental value delivery, no experimentation, and infrequent integration. Efforts advance at a glacial pace. Scope expands to fill the available time, and then some. And the work “on the roadmap”? That work gets bigger and bigger as well through a cycle of disappointment, fear, and planning overcompensation. 

Some companies only work small. They sprint in circles. The work lacks coherence and feels scattershot. There’s a perception of progress, but looking back the team sees a lot of disjointed, reactive work. The resulting experience is incomplete and imbalanced. But management applauds the team(s) for being responsive, and the cycle continues. Plus… more features to sell! 

Some companies define big, and work small. Large, prescriptive projects get broken into many small pieces. Think of this as a combination of #1 and #2. The team works small and integrates frequently, which reduces risk and accentuates progress. But there is little room to respond to learning and feedback. Design work is more set-in-stone, and less strategic. Like building a big Lego set, the team places tiny pieces according to the plan. What if the finished Lego set is the wrong Lego set? What if 20% of the work represented 80% of the value? Then again, the team is applauded for finishing “big things”.

Finally, we have thinking big, and working small. The team rallies behind a compelling mission linked to a coherent strategy. The mission is outcome/impact oriented. The team contemplates a vision for the holistic experience but works with broad strokes. They sequence work with the riskiest assumptions first — experimenting, testing, and learning. This is not a ship-and-forget or a ship-and-maybe-in-a-year-we-come-back operation.

For items 1-3, note how incentives can hold these ways of working in place. Big prescriptive projects look bold and compelling. High velocity is intoxicating. Rapid progress on big prescriptive projects …exciting!

Thinking big, and working small is more nuanced. There’s more acceptance of uncertainty. There’s less set in stone. There’s an art to framing a mission to leave space for creativity, while also capturing the opportunity. 

Start by figuring out where your company tends to work right now. If you only work big, then start working small. If you are only working small, maybe start by defining the bigger thing. If you’re working small and defining big things prescriptively, then start easing off that level of prescriptiveness and focus on missions and strategies.

From The Beautiful Mess – Think Big, Work Small

Better Experiments/Pilots

Scoping a pilot or experiment is often ignored, and before you know it you have a production solution masquerading as a pilot. This article from John Cutler helps.

  1. Set aside ~90 minutes.
  2. Pick a problem or observation. 
  3. Read and discuss the dimensions described below. For each dimension, brainstorm example experiments representing the “extremes”. These don’t need to be real. Have fun.
  4. Optionally (as demonstrated with L+ and R+), chat about how the extremes could be considered positive.
  5. Return to the problem or observation. Ask individuals to brainstorm 1-3 candidate experiments to address that problem or observation. 
  6. Ask team members to individually describe each candidate experiment using the ranges below.
  7. As a group, discuss each experiment, and where each team member placed each experiment.
  8. Finally, ask team members to dot vote on the best-fit experiment (for the given context). Discuss ranking. Ideally, pick an experiment to try.

Local | Global

How containable (or localized) is the experiment?

L+: Localized impact, fewer dependencies, less visibility/oversight/meddling.

R+: Broader impact, more support, more visibility.

Flexible | Rigid

Will it be possible to pivot the experiment on the fly?

L+: May be easier to sustain. More adaptable to changing environments and new information.

R+: May be easier to understand, teach, support, and promote.

Short Duration | Long Duration

How long must the experiment last to provide meaningful information?

L+: Less disruptive. Easier to pitch. Faster feedback.

R+: More time to “simmer” and pick up steam. Commitment.

Invitation | Imposition

Will the participants be invited to take part in the experiment, or will the experiment be imposed?

L+: More intrinsic motivation. More vested in outcome. “Advocates for life!”

R+: Speed. Less need to “sell” change. 

Small Shift | Large Shift

Will the experiment represent a small change from how things currently work, or will it feel foreign and new? Perhaps different participants will experience different degrees of change.

L+: Easier. Less disruptive. More potential to “pick up momentum”.

R+: “Get it over with”. Less chance of getting stuck in local maximum.

Self-powering | Requires “fuel” & external support

Can the experiment sustain itself without outside support and resources, or will it require external support?

L+: Independent. Easier. Can be sustained indefinitely.

R+: Involves and “vests” broader group in the effort. 

Value in 2nd/3rd order effects | Risk in 2nd/3rd order effects

Second and third order effects are common when running an experiment. Is the experiment expected to “throw off” potentially valuable 2nd/3rd order effects? 

L+: Discover valuable things!

R+: Risk may be necessary to explore new areas of uncertainty.

Fewer dependencies, lower blast radius | More dependencies, higher blast radius

How independent/dependent is the experiment on other things (people, projects, systems, processes, etc.) in the org?

L+: Independent. More degrees of freedom. Less constrained.

R+: Potentially more impactful. Potentially more involvement and support.

Shorter feedback loops | Longer feedback loops

How easily and quickly can we get feedback?

L+: Can respond more quickly. Can pivot experiment more quickly.

R+: May be less noisy. May provide “deeper” or more cohesive information.

Low threat to formal structures/incentives | Challenges formal structures/incentives

Does the experiment represent a threat to formal power/incentive structures?

L+: Can fly under the radar. Considered “safe” and non-threatening.

R+: More likely to test (and change) formal power/incentive structures.
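To support steps 6–8 of the workshop above, a small helper (hypothetical; the 0-to-1 scoring scale and function names are mine, not Cutler's) can record where each team member placed a candidate experiment on each dimension and surface where the group disagrees:

```python
from statistics import mean, pstdev

# The ten left/right dimension pairs from the article.
DIMENSIONS = [
    ("Local", "Global"),
    ("Flexible", "Rigid"),
    ("Short duration", "Long duration"),
    ("Invitation", "Imposition"),
    ("Small shift", "Large shift"),
    ("Self-powering", "Requires fuel & external support"),
    ("Value in 2nd/3rd order effects", "Risk in 2nd/3rd order effects"),
    ("Fewer dependencies, lower blast radius", "More dependencies, higher blast radius"),
    ("Shorter feedback loops", "Longer feedback loops"),
    ("Low threat to formal structures", "Challenges formal structures"),
]


def summarize(placements):
    """placements maps team member -> list of ten scores,
    where 0.0 is the left extreme and 1.0 is the right extreme."""
    report = []
    for i, (left, right) in enumerate(DIMENSIONS):
        scores = [member_scores[i] for member_scores in placements.values()]
        report.append({
            "dimension": f"{left} | {right}",
            "mean": mean(scores),
            "spread": pstdev(scores),  # high spread = worth discussing in step 7
        })
    return report
```

A high spread on a dimension is a prompt for the step-7 discussion, not a verdict; the dot vote in step 8 still decides.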

From The Beautiful Mess – Better Experiments

Six Thinking Hats

Six Thinking Hats was written by Dr. Edward de Bono. “Six Thinking Hats” and the associated idea of parallel thinking provide a means for groups to plan thinking processes in a detailed and cohesive way, and in doing so to think together more effectively.

Although this can be used in groups to aid thinking, assigning each person a role has felt restrictive for me. A better approach: the group assumes all the roles at the same time, moving through the different hats together. I also find it useful when thinking on my own; it avoids favouring your own idea by focussing on the yellow and blue hats while missing the others.