Boring Magic
@boringmagi.cc.web.brid.gy
Product and design consultancy. Helping you build products and services that meet people’s expectations. No hype, no fluff. Good, straightforward stuff that just works.
Recap of the AI Leaders panel, January 2026
Here’s what we said at Ministry of Justice’s [AI Leaders panel](https://events.teams.microsoft.com/event/de1b76ef-757f-4fcd-9ef6-c98d4e480467@c6874728-71e6-41fe-a9e1-2e8c36776ad8), the opening event of their [AI Digital Professions conference](https://www.eventbrite.com/e/ai-digital-professions-conference-tickets-1970192031402?viewDetails=true) in January 2026.

Thank you to Nikola Goger for the invitation to speak, and thanks too to the other panellists: Tim Paul (Head of User Centred Design at i.AI), Paul Haigh (Chief Technology Officer at Ministry of Justice) and Jan Murdoch (Head of Horizon Scanning at Department for Environment, Food & Rural Affairs).

This is a tidied-up version of what was said – there were many more instances of ‘um’, ‘like’ and far too many instances of ‘I think’ in the original transcript!

### Question: How are you seeing AI change each of your professions or areas? And how do you think this will impact the future of your profession or area?

As the product person, I’ve taken a broader team view rather than focusing solely on product-specific matters. But the key point Mark highlighted earlier is that writing code isn’t necessarily a bottleneck anymore. This raises questions about how it impacts the rest of your process and overall workflow.

Essentially, it shifts that bottleneck earlier in the value stream. Consequently, you must prioritise establishing solid foundations for product architecture and design. For example, if agents can complete tickets quickly, you must ensure these foundations are in place to deliver successful products and services.

There’s a slight advantage to it: delivery could become faster. However, there’s also a risk of delivering the wrong thing. So – if we consider service stages – the alpha stage really should be seen as a way to test various solutions and concepts, to really try out different ideas.

You have the ability to explore more divergent thinking, which is a good thing. So you want to optimise your approach to cycle around the ‘problem understanding, solution exploration’ loop more quickly and more often. This means using design methods that encourage this divergence and generate a variety of ideas, and then methods of testing that give you clear signals on where to converge.

The impact is this: if we’ve been discussing agile practices for the past decade, we’re transitioning towards a phase where Lean practices and Lean Startup methods become significantly more integral to our work. Mastering writing hypotheses and measuring outcomes will be crucial. Furthermore, embracing experimentation and remaining open to being proven wrong will be incredibly beneficial.

Now that computers can do some of the computer tasks, we should shift our focus to human-centric activities. We should look more at using co-design, which offers efficiencies in prototyping and testing. Right now you test a product with five people on one day, make some iterations, then return a few days later to test again. Instead, we could gather groups of people in a room for a few days, allowing them to iterate the product with us and provide valuable feedback. This approach empowers users and generates stronger evidence. (And as many people know, access to users can be a stultifying constraint.)

In the product profession we consistently emphasise falling in love with the problem. Interestingly, this concept resonates in the private sector too. For example, Y Combinator’s Paul Graham frequently points out that the startups who secure funding at Y Combinator are the ones intensely focused on users and obsessed with their problems. This aligns closely with our longstanding focus on prioritising user needs in government digital services. Ultimately, truly understanding the problem – and having skilled designers who grasp constraints and engineers who design effective architecture – will be crucial in delivering high-quality experiences, now more than ever.

We need to become much more strategic and excel at demonstrating outcomes rather than simply pumping out a digital version of a form.

### Question: Let’s talk limits. Where do you see the limits of AI in public service delivery specifically? So what are the things that should never be automated, and what kind of unintended consequences have you seen or do you think are there?

Tim’s point about judgement is really important. It’s something we think about while working on Extract, a product that uses AI to extract geospatial data from documents. I went down a bit of a rabbit hole looking at automated decision-making and how AI impacts professionals.

When you consider professionals like doctors and lawyers, you’re looking at the professionalisation of judgement and decision-making. The way a profession is structured – its educational pathway, specific behaviours and codes of conduct – all revolves around explainability and auditability. This allows for an in-depth examination and inspection of how decisions were made, meaning you can adjudicate what happened. However, stuffing AI into things introduces a ‘black box’ element: you lose that transparency. And while there are people actively working on explainability and shining a light into the black box, I believe it’s a crucial aspect in considering what not to automate.

Returning to our digital professions, I wanted to discuss product management. A potential danger arises when AI is used for tasks like writing tickets, vision statements or mission briefs. As I joked with Tim a few years ago, product management essentially boils down to prompt engineering! This is partly true because providing sufficient context and direction is crucial for achieving successful outcomes. I still believe this is the core responsibility of the role.

Therefore, ensuring everyone has a comprehensive understanding of users, their context and the problem space they’re working within is essential. This is the true craft of product management. Much of this occurs in tickets, in discussions and in vision statements, among other things. So I don’t think outsourcing that craft is a good idea.

Sometimes I’m okay with it, like writing boring tickets. For example, recently I had to create tickets for implementing an analytics service on a prototype. I went to look at the documentation and realised I didn’t need to do it myself anymore. I could simply point an agent at the documentation and ask it to write a decent ticket. All I had to do was outline the stages I wanted to measure and the metrics I wanted to capture. The agent wrote the ticket for me in a very short space of time.

The danger with using AI for some product jobs is that product management is all about managing context. You need to be careful about what you include in people’s context – that’s really the craft of product management. So be present, don’t automate that.

### Question: What skill sets do digital professionals need to work effectively with AI in your profession? And what would you advise people to learn?

I mentioned it earlier, but improving your interpersonal skills – the human stuff – is really beneficial. This includes developing conversation skills, getting better at writing and the ability to bring groups together. Get used to considering and discussing feedback together, and to holding differing points of view.

We should create more space and time for apprenticeships. Earlier, Paul mentioned hiring _more_ juniors rather than fewer, which is great. It reminds me of Yanagi Soetsu, an arts and crafts historian from the early 20th century. During the period of mechanisation in Korea and Japan, he discussed the importance of preserving craft. He talked about ‘village kilns’ and emphasised the importance of skilled artisans working with apprentices to teach methods and pass on knowledge. This will be incredibly crucial in the coming years – otherwise, what’s the social contract we’re signing up to?

At a day-to-day level, simply downloading some open large language models and using them through software like LM Studio will be incredibly beneficial. Start by understanding the differences between various models, observing their responses to prompts and how tweaking them produces different outputs. Learn to control inference, such as extending the context window, and delve into data and statistics. This knowledge will be very handy.

Jeremy Keith once said AI is simply applied statistics, which is both funny and true. If you learn the feel of the material, you can get better at using it (or not).
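To make that concrete, here’s a minimal sketch of prompting a local model through LM Studio’s built-in server, which exposes an OpenAI-compatible API on your machine. The model name below is a placeholder – use whichever model you’ve downloaded:

```python
# A minimal sketch, assuming LM Studio's local server is running
# (it exposes an OpenAI-compatible API, by default on localhost:1234)
# and that you've downloaded a model - the name below is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="your-local-model",  # placeholder: any model you've downloaded
    messages=[{"role": "user", "content": "Explain leading indicators in one paragraph."}],
    temperature=0.2,  # tweak inference settings and compare the outputs
)
print(response.choices[0].message.content)
```

Running the same prompt through two or three different models, at different temperatures, is a quick way to learn the feel of the material.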
February 5, 2026 at 4:34 AM
Metrics, measures and indicators: a few things to bear in mind
Metrics, measures and indicators help you track and evaluate outcomes. They can tell us if we’re moving in the right direction, if things aren’t going well, or if we’ve achieved the outcome we set out to achieve. If you’ve reported on key performance indicators (KPIs), checked progress against objectives and key results (OKRs) or looked at user analytics, you’ll have some experience with metrics, measures and indicators.

These words are often used interchangeably and, in general, the difference isn’t important. Not for this post anyway. We can talk about the difference between metrics, measures and indicators later. In this post we’ll cover some guiding principles for designing and using metrics, measures and indicators. A few things to bear in mind.

## Guiding principles

1. Value outcomes over outputs
2. Measures, not targets
3. Balance the what (quantitative) and the why (qualitative)
4. Measure the entire product or service
5. Keep them light and actionable
6. Revisit or refine as things change

### Value outcomes over outputs

We acknowledge that outputs are on the path to achieving outcomes. You can’t cater for a memorable birthday party without making some sandwiches. But delivering outcomes is the real reason why we’re here. So we don’t measure whether we’ve delivered a product or feature, we measure the impact it’s having.

### Measures, not targets

Follow Goodhart’s Law: ‘When a measure becomes a target, it ceases to be a good measure.’ There are numerous factors that contribute to a number or reading going up or down. Metrics, measures and indicators are a starting point for a conversation, so we can ask why and do something about it (or not). The measures are in service of learning: tools, not goals.

### Balance the what (quantitative) and the why (qualitative)

Grown-ups love numbers. But it’s very easy to ignore what users think and feel when you only track quantitative measures. Numbers tell us what’s happening, but feedback can tell us why. There’s no point doing something faster if it makes the experience worse for users, for example – we have to balance quantity and quality.

### Measure the entire product or service

If we can see where people start, how they move through and where they end, we can identify where to focus our efforts for improvements. The same is true for people who come back too: we want to see whether we’ve made things better than the last time they were here. If you’re only measuring one part, you only know how one part is performing. Get holistic readings.

### Keep them light and actionable

It’s easy to go overboard and start tracking everything, but too much information can be a bad thing. If we track too many metrics, we run the risk of analysis paralysis. Similarly, one measure is too few: it’s not enough to understand an entire system. Four to eight key metrics or indicators per team is enough, and they should inspire action.

### Revisit or refine as things change

Our priorities will change over time, meaning we will need to change our indicators, measures and metrics too. It’s no use tracking and reporting on datapoints that don’t relate to outcomes. Measure what matters. We should aim not to change them too frequently – that causes whiplash. But it’s all right to change them when you change direction or focus.

## Are we on the way? Or did we get there?

Those principles are handy for working out what to measure, but there are two types of indicator you need to know about: leading and lagging.

Leading indicators tell us whether we’re making progress towards an outcome. _Are we on the way?_ For example, if we want to make it easy to find datasets, are people searching for data? Is the number of people searching for data going up?

Lagging indicators tell us whether we’ve achieved the outcome. _Did we get there?_ In that same example, making it easy to find datasets, what’s the user satisfaction score? Are they requesting new datasets?
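To make the datasets example concrete, here’s a minimal sketch in Python – the event names, fields and satisfaction scale are invented for illustration, not taken from a real analytics service:

```python
# A minimal sketch of the datasets example above. Event names, fields
# and the satisfaction scale are hypothetical - substitute whatever
# your analytics service actually captures.
from statistics import mean

events = [  # hypothetical analytics events
    {"week": "2025-W01", "type": "data_search"},
    {"week": "2025-W01", "type": "data_search"},
    {"week": "2025-W02", "type": "data_search"},
    {"week": "2025-W02", "type": "satisfaction", "score": 4},
    {"week": "2025-W02", "type": "satisfaction", "score": 5},
]

# Leading indicator: are searches for data going up week on week?
searches_per_week = {}
for e in events:
    if e["type"] == "data_search":
        searches_per_week[e["week"]] = searches_per_week.get(e["week"], 0) + 1
print("Searches per week (leading):", searches_per_week)

# Lagging indicator: did we get there? Average satisfaction score.
scores = [e["score"] for e in events if e["type"] == "satisfaction"]
print("Average satisfaction (lagging):", mean(scores))
```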
October 30, 2025 at 3:27 PM
Using quarters as a checkpoint
Breaking your strategy down into smaller, more manageable chunks can help you make more progress sooner. Some things take a long while to achieve, but smaller goals help us celebrate the wins along the way. Many organisations use a quarter – a block of 3 months – to do this. And it can be helpful to look back before you look forward, to celebrate the progress you’ve made and work out what to do next.

Every 3 months, we encourage product teams to take the opportunity to step back from the day-to-day and consider the objectives they’re working towards. The quarterly checkpoint is a time to refocus efforts and double down, change direction or move on to the next objective.

There are 2 stages to using the quarterly checkpoint well:

1. Check on your progress
2. Plan how to achieve your new objectives

Below is a workshop you can run for each stage, but you can combine them into one workshop if you like. Whatever works.

## Check on your progress

First, check on the progress your team has made on your objectives and key results (OKRs). You can do this in a team workshop lasting 30 to 60 minutes.

### 1. List out the OKRs you’ve been working on (10 to 20 mins)

Run through the OKRs you’ve been working on. Talk about the progress you made on each key result and celebrate the successes – big or small!

### 2. Think about what’s left to do (20 to 40 mins)

For any OKRs you haven’t completed – where progress on key results isn’t 100% – discuss as a team which initiatives you have left to do to fully achieve the objective. For example, you may need to collect some data, run a test, build a thing or achieve an outcome.

Consider whether you should change your approach, for example, by doing something smaller or using different methods, based on what you’ve learned over the last quarter. It’s OK to stick to the original plan if it’s still the best approach. Write down which initiatives your team has agreed to do.

## Plan how to achieve your new objectives

Next, you’ll need to form a loose plan for how to achieve your new objectives. You can treat unfinished objectives from the previous quarter as new objectives. Run another workshop lasting 30 to 45 minutes for each objective.

Everyone on the team will need to input on the plan using the outline below. Write it in a doc, a slide deck or on a whiteboard – whatever works. You will probably want to present these plans to the senior management team or service owner at the start of the new quarter.

If it’s easier than starting with a blank page, team leads can fill in the outline and get feedback from the rest of the team. As long as everyone gets a chance to input, it doesn’t matter. It’s OK if you take less than 30 minutes, especially if you already have a plan.

### 1. Write down and describe the objective

An objective is a bold, qualitative goal that the organisation wants to achieve. It’s best that objectives are ambitious and audacious in nature, not super easy to achieve; they are not sales targets.

Write down the problem you’re solving and who it’s a problem for. Discuss how you’ll know when you’re done. What are the success criteria?

### 2. Think about risks and unknowns

What might be a challenge? What are the riskiest assumptions or big unknowns to highlight? Do you need to try new techniques? These might form the first initiatives in your plan.

You can frame your assumptions using the hypothesis statement:

**Because** [of something we think or know]
**We believe that** [an initiative or task]
**Will achieve** [desired outcome]
**Measured by** [metric]

Note down dependencies on other teams, for example, where you may need another team to do something for you.

### 3. Detail all the initiatives

Write a sentence for each of the initiatives – tasks and activities – you’ll need to do to achieve the objective. Consider research and discovery activities, which can help you gather information to turn unknowns into knowns. Consider alphas, things to prototype, spikes, and experiments that can help you de-risk or validate assumptions. Make sure to remember the development and delivery work too – that’s how we release value to users!

### 4. What will you measure?

Review your success criteria. Define the metrics that will tell you when you’ve finished or achieved the objective. These should tell you when you’re done and will become your key results. Remember, metrics should be:

* tangible and quantitative
* specific and measurable
* achievable and realistic

### 5. Prioritise radically

What would you do differently if you only had half the time? How will you start small and build up? What’s the least amount of work you can do to learn the most? Use these thoughts to consider any changes to your initiatives. Go back and edit the initiatives if you need to.

## Don’t worry about adapting your plans

A core tenet of agile is responding to change over following a plan, so don’t be afraid to change your plans based on new information. The quarterly checkpoint isn’t the only time you can look back to look forward – that’s why retrospectives are useful. You can use the activities above at any point. The best product teams build these behaviours into their regular practice.

If you’d like help running these workshops or have any questions, get in touch and we’ll set up a chat.
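To make the hypothesis statement concrete, here’s a minimal sketch – the objective, initiative, metric and figures below are invented for illustration, not taken from a real team:

```python
# A hypothetical, filled-in hypothesis statement and key-result check.
# The initiative, metric and figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    because: str          # something we think or know
    we_believe_that: str  # an initiative or task
    will_achieve: str     # desired outcome
    measured_by: str      # metric

h = Hypothesis(
    because="users tell us they can't find datasets",
    we_believe_that="improving search relevance",
    will_achieve="more users finding the data they need",
    measured_by="search success rate",
)

# A key result is the metric plus a target and a current reading.
target, current = 0.80, 0.62
print(f"{h.measured_by}: {current:.0%} of {target:.0%} "
      f"({current / target:.0%} progress)")
```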
October 30, 2025 at 3:27 PM
Going faster with Lean principles
Software teams are often asked to go faster. There are many factors that influence the speed at which teams can discover, design and deliver solutions, and those factors aren’t always in a team’s control. But Lean principles offer teams a way to analyse and adapt their operating model – their ways of working.

## What is Lean?

Lean is a method of manufacturing that emerged from Toyota’s Production System in the 1950s and 1960s. It’s a system that incorporates methods of production and leadership together. The early Agile community used Lean principles to inspire methods for making digital products and services. These principles have had influence beyond the production environment and have been adapted for business and strategy functions too.

## Books on Lean

Four books on Lean principles have influenced the way I work.

**1. _Lean Software Development: An Agile Toolkit_ by Mary and Tom Poppendieck** The earliest of the four books. It really set the standard.

**2. _The Lean Startup_ by Eric Ries** This started a big movement for applying Lean principles to your startup, including testing out new business models or growth opportunities.

**3. _Lean UX_ by Jeff Gothelf and Josh Seiden** One of my favourites. This one really brought strategic goals and user experience closer together. It also shifted teams from writing problem statements to writing hypotheses.

**4. _The Lean Product Playbook_ by Dan Olsen** This is relatively similar to _The Lean Startup_ but is more of a playbook, showing the practice that goes with the theory. The highlight is its emphasis on MVP tests: experiments you can run to learn something without building anything.

## Lean principles

All these books have some principles in their pages, all based on the original Lean principles from Toyota. They’re all pretty similar. Combining their approaches helps us apply Lean principles to business model development, strategy, user-centred design and software delivery.

> A note on principles: Principles are not rules. Principles guide your thinking and doing. Rules say what’s right and wrong.

### 1. Eliminate waste

Reduce anything which does not help deliver value to the user. So: partially done work; scope creep; re-learning; task-switching; waiting; hand-offs; defects; management activities. Outcomes, not outputs.

### 2. Amplify learning

Build, measure, learn. Create feedback loops. Build scrappy prototypes, run spikes. Write tests first. Think in iterations.

### 3. Decide as late as possible

Call out the assumptions or uncertainties, try out different options, and make decisions based on facts or evidence.

### 4. Deliver as fast as possible

Shorter cycles improve learning and communication, and help us meet users’ needs as soon as possible. Reduce work in progress, get one thing done, and iterate.

### 5. Empower the team

Figure it out together. Managers provide goals, encourage progress, spot issues and remove impediments. Designers, developers and data engineers suggest how to achieve a goal and feed into continuous improvement.

### 6. Build integrity in

Agility needs quality. Automated tests and proven design patterns allow you to focus on smaller parts of the system. A regular flow of insights to act on aids agility.

### 7. Optimise the whole

Focus on the entire value stream, not just individual tasks. Align strategy with development. Consider the entire user experience in the design process.

## Three simpler principles

If those seem like too many to get started with, I want to introduce three simpler principles that can help you go faster. I came across these in a book about running, which doesn’t seem like the place you’d find inspiration about product management! Think easy, light and smooth.

It comes from a man called Micah True, who lived in the Mexican desert and ran with the local Tarahumara people. They called him Caballo Blanco – ‘White Horse’ – because of his speed.

> “You start with easy, because if that’s all you get, that’s not so bad. Then work on light. Make it effortless, like you don’t give a shit how high the hill is or how far you’ve got to go. When you’ve practised that so long that you forget you’re practising, you work on making it smooooooth. You won’t have to worry about the last one – you get those three, and you’ll be fast.”

You can do this every cycle. Find one thing to make easier, one thing to make lighter, and one thing to make smoother. Fast will happen naturally.
October 30, 2025 at 3:27 PM
Our positions on generative AI
Like many trends in technology before it, we’re keeping an eye on artificial intelligence (AI). AI is more of a concept, but generative AI as a general-purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something _even more_ ubiquitous like HTML.

Given the hype, it feels worthwhile stating our positions on generative AI – or as we like to call it, ‘applied statistics’. We’re open to working on and with it, but there are a few ideas we’ll bring to the table.

## The positions

1. Utility trumps hyperbole
2. Augmented not artificial intelligence
3. Local and open first
4. There will be consequences
5. Outcomes over outputs

### Utility trumps hyperbole

The fundamental principle of Boring Magic’s work is that people want technologies to work. People prefer things to be functional first; the specific technologies only matter when they reduce or undermine the quality of the utility.

There are outsized, unfounded claims being made about the utility of AI. It is not ‘more profound than fire’. The macroeconomic implications of AI are often overstated, but it’ll still likely have an impact on productivity. We think it’s sensible to look at how generative AI can be useful or make things less tedious, so we’re exploring the possibilities: from making analysis more accessible through to automating repeatable tasks. We won’t sell you a bunch of hype, just deliver stuff that works.

### Augmented not artificial intelligence

Technologies have an impact on the availability of jobs. The introduction of the digital spreadsheet meant that chartered accountants could easily punch the numbers, leading to accounting clerks becoming surplus to requirements. But the Jevons paradox teaches us that AI will lead to more work, not less: over time accountants needed fewer clerks, but increases in financial activity have led to a greater need for auditors. So we will still need people in jobs to do thinking, reasoning, assessing and other things people are good at.

Rather than replacing people with machines to reduce costs, technology should be used to empower human workers. We should augment the intelligence of our people, not replace it. That means using things like large language models (LLMs) to reduce the inertia of the blank-page problem, helping with brainstorming, rather than asking an LLM to write something for you. Extensive not intensive technology.

### Local and open first

Right now, we’re in a hype cycle, with lots of enthusiasm, funding and support for generative AI. The boom of a hype cycle is always followed by a bust, and AI winters have been common for decades. If you add AI to your product or service and rely on a cloud-based supplier for that capability, you could find the supplier goes into administration – or worse, enshittification, where fees go up and the quality of service plunges. And free services are monetised eventually.

But there are lots of openly available generative text and vision models you can run on your own computer – your ‘local machine’ – breaking the reliance on external suppliers. When exploring how to apply generative AI to a client’s problem, we’ll always use an open model and run it locally first. It’s cheaper than using a third party, and it’s more sustainable too. It also mitigates some risks around privacy and security by keeping all data processing local, not running on a machine in a data centre. That means we can get started sooner and do a data protection impact assessment later, when necessary. We can use the big players like OpenAI and Anthropic if we need to, but let’s go local and open first.

### There will be consequences

People like to think of technology as a box that does a specific thing, but technology impacts and is impacted by everything around it. Technology exists within an ecology. It’s an inescapable fact, so we should try to sense the likely and unlikely consequences of implementing generative AI – on people, animals, the environment, organisations, policy, society and economies.

That sounds like a big project, but there are plenty of tools out there to make it easier. We’ve used tools like consequence scanning, effects mapping, financial forecasting, Four Futures and other extrapolation methods to explore risks and harms in the past. As responsible people, it’s our duty to bring unforeseen consequences more into view, so that we can think about how to mitigate the risks – or stop.

### Outcomes over outputs

It feels like everyone’s doing something with generative AI at the moment and, if you’re not, it can lead to feeling left out. But this doesn’t mean you have to do something: FOMO is not a strategy. We’ll take a look at where generative AI might be useful, but we’ll also recommend other technologies if those are cheaper, faster or more sustainable. That might mean implementing search and filtering instead of a chatbot, especially if it’s an interface that more people are used to. It’s more important to get the job done and achieve outcomes than to do the latest thing because it’s cool.

## Let’s be pragmatic

Ultimately, our approach to generative AI is like our approach to any other technology: grounded in practicality, mindful of being responsible and ethical, and focused on meaningful outcomes. It’s the best way to harness its potential effectively. Beware the AI snake oil.
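As a sketch of what ‘local and open first’ can look like in practice – assuming the `llama-cpp-python` package and an open-weights GGUF model you’ve already downloaded (the model path below is hypothetical):

```python
# A minimal local-first sketch using llama-cpp-python
# (pip install llama-cpp-python). The model path is hypothetical -
# point it at any open-weights GGUF file you've downloaded.
# All inference happens on your machine; no data leaves it.
from llama_cpp import Llama

llm = Llama(model_path="./models/open-model.gguf", n_ctx=4096, verbose=False)

response = llm(
    "Summarise the risks of relying on a single cloud AI supplier:",
    max_tokens=128,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```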
October 30, 2025 at 3:27 PM
Tips on doing show & tell well
## What is a show & tell?

A show & tell is a regular get-together where people working on a product or service celebrate their work, talk about what they’ve learned, and get feedback from their peers. It’s also a chance to

* bring together team members, management and leadership to bond, share success, and collaborate
* let colleagues know what you’re working on, keep aligned, and create opportunities to connect and work together
* tell stakeholders (including users, partner organisations and leadership) what you’ve been doing and take their questions as feedback (a form of governance).

A show & tell may be internal, limited to other people in the same team or organisation, or open to anyone to join. Most teams start with an internal show & tell and make these open later. A show & tell might also be called a team review.

## How to run a great show & tell

1. **Don’t make it up on the spot** Spend time as a team working out what you want to say and who is going to share stories with the audience (1 or 2 people works best). 30 to 60 minutes of prep will pay off.
2. **Set the scene** Always introduce your project or epic. Who’s on the team? What are you working on? What problem are you solving? Who are your users? Why are you doing it? You don’t need to tell the full history; a 30-second overview is enough.
3. **Show the thing!** Scrappy diagrams, Mural boards, Post-it notes, screenshots, scribbles, photos, and clicking through prototypes bring things to life. Text and code are OK, but always aim to demonstrate something working – don’t just talk through a doc or some function.
4. **Talk about what you’ve learned** Share which assumptions turned out to be incorrect, or what facts surprised you. Show clips from user research and usability testing. Highlight important analytics data or performance measures. Share both findings and insights. Be clear on the methodology and any confidence intervals, levels of confidence, risky assumptions, etc.
5. **Be clear** Don’t hide behind jargon. Make bold statements. Say what you actually think! This helps everyone concentrate on the main point, and it generates discussion.
6. **Always share unfinished thinking** Forget about polish and perfection. A show & tell is the perfect place to collect feedback, ideas and thoughts. It’s a complicated space. We’re all trying to figure it out!
7. **Rehearse** Take 10–15 minutes to rehearse your section with your team to work out whether you need to cut anything. If you’re struggling to edit, use a format like What? So what? Now what? to keep things concise. If you take up more time than you’ve been given, it’ll eat into other people’s sections, meaning they have to rush (or not share at all), which isn’t fair.
8. **Leave time for questions** The best show & tells have audience participation. Wherever possible, leave time for questions – either after each team or at the end. Encourage people to ask questions in the chat, on Slack, in docs, etc.

If you do nothing else, follow tip number 3. You can read more tips on good show & tells from Mark Dalgarno, Emily Webber and Alan Wright.

## How to be a great show & tell audience member

1. **Be present and listen** There’s nothing worse than preparing for a show & tell only to realise that no one’s paying attention. Close Slack, close Teams, stop looking at email, and give your full attention to your team-mates.
2. **Smile, use emojis, and celebrate!** Bring the good vibes and lift each other up whenever there’s something worth celebrating.

## It’s OK to be halfway done

The main thing to remember is that show & tell is not just about sharing progress and successes. It’s a time to talk about what’s hard and what didn’t work too. It’s OK to be halfway done. It’s OK to go back to the drawing board.

Each sprint, try to answer these questions in your show & tell:

* What did we learn or what changed our mind?
* What can we show? How can we help people see behind the scenes?
* What haven’t we figured out? What do we want feedback on?
October 30, 2025 at 3:27 PM
You don’t have to do fortnightly sprints
In early 2024, we helped GOV.UK Design System design and implement a new model for agile delivery. It was a break away from traditional Scrum and two-week sprints, towards an emphasis on iteration and reflection.

## Why change things?

Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but they don’t work for well-established or high-performing teams.

For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes.

For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days with two-week sprints.

Sprint goals suck too. It’s far too easy to push a goal along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing.

## How it works

You can see how it works in detail in the GOV.UK Design System’s team playbook and in a blog post from the team’s delivery manager, Kelly. There’s also a graphic that brings the four-week cycle to life.

There are a few principles that make this method work:

* Fixed time, variable scope
* Think in iterations: vertical not horizontal slices
* Each cycle ends with something shippable or showable
* R&D cycles end on decisions around scope
* Each cycle starts with a brief, but the team has autonomy over delivery

This gives space for ideas and conversations to breathe, for spikes and scrappy prototypes to come together, and for teams to make conscious decisions about scope and delivering value to users.

## How did it work out?

In their first cycle, the team delivered three out of five briefs – a higher completion rate than they’d been achieving before. As Kelly reported, ‘most team members enjoyed working in smaller, focused groups and having autonomy over how they deliver their work.’

A few months later, we analysed how often the team was releasing new software: **they shipped twice as many releases in half the time.** Between October 2022 and October 2023, there were five releases. Between October 2023 and March 2024, there were 10 releases.

One year on, the team has maintained momentum. Iterations have increased, they’ve built a steady rhythm of releasing GOV.UK Frontend more frequently, and according to a recent review the team is a lot happier working that way.

## Want to try something new?

If you’re looking to increase team happiness and effectiveness, drop us a line and we can chat about transforming your team’s delivery model too.
October 30, 2025 at 3:27 PM
Using quarters as a checkpoint
Breaking your strategy down into smaller, more manageable chunks can help you make more progress sooner. Some things take a long while to achieve, but smaller goals help us celebrate the wins along the way. Many organisations use a quarter – a block of 3 months – to do this. And it can be helpful to look back before you look forward, to celebrate the progress you’ve made and work out what to do next. Every 3 months, we encourage product teams to take the opportunity to step back from the day-to-day and consider the objectives they’re working towards. The quarterly checkpoint is a time to refocus efforts and double-down, change direction or move on to the next objective. There are 2 stages to using the quarterly checkpoint well: 1. Check on your progress 2. Plan how to achieve your new objectives Here are two workshops you can run at each stage, but you can combine them into one workshop if you like. Whatever works. ## Check on your progress First, check on the progress your team has made on your objectives and key results (OKRs). You can do this in a team workshop lasting 30 to 60 minutes. ### 1. List out the OKRs you’ve been working on (10 to 20 mins) Run through the OKRs you’ve been working on. Talk about the progress you made on each key result and celebrate the successes – big or small! ### 2. Think about what’s left to do (20 to 40 mins) For any OKRs you haven’t completed – where progress on key results isn’t 100% – discuss as a team which initiatives you have left to do to fully achieve the objective. For example, you may need to collect some data, run a test, build a thing or achieve an outcome. Consider whether you should change your approach, for example, by doing something smaller or using different methods, based on what you’ve learned over the last quarter. It’s OK to stick to the original plan if it’s still the best approach. Write down what initiatives your team has agreed to do. ## Plan how to achieve your new objectives Next, you’ll need to form a loose plan for how to achieve your new objectives. You can treat unfinished objectives from the previous quarter as a new objective. Run another workshop lasting 30 to 45 minutes for each objective. Everyone on the team will need to input on the plan using the outline below. Write it in a doc, a slide deck or on a whiteboard – whatever works. You will probably want to present these plans to the senior management team or service owner at the start of the new quarter. If it’s easier than starting with a blank page, team leads can fill in the outline and get feedback from the rest of the team. As long as everyone gets a chance to input, it doesn’t matter. It’s OK if you take less than 30 minutes, especially if you already have a plan. ### 1. Write down and describe the objective An objective is a bold and qualitative goal that the organisation wants to achieve. It’s best that they’re ambitious, not super easy to achieve or audacious in nature; they are not sales targets. Write down the problem you’re solving and who it’s a problem for. Discuss how you’ll know when you’re done. What are the success criteria? ### 2. Think about risks and unknowns What might be a challenge? What are the riskiest assumptions or big unknowns to highlight? Do you need to try new techniques? These might form the first initiatives in your plan. 
You can frame your assumptions using the hypothesis statement: **Because** [of something we think or know] **We believe that** [an initiative or task] **Will achieve** [desired outcome] **Measured by** [metric] Note down dependencies on other teams, for example, where you may need another team to do something for you. ### 3. Detail all the initiatives Write a sentence for all the initiatives – tasks and activities – you’ll need to do to achieve the objective. Consider research and discovery activities, which can help you gather information to turn unknowns into knowns. Consider alphas, things to prototype, spikes, and experiments that can help you de-risk or validate assumptions. Make sure to remember the development and delivery work too – that’s how we release value to users! ### 4. What will you measure? Review your success criteria. Define the metrics that will tell you when you’ve finished or achieved the objective. These should tell you when you’re done and will become your key results. Remember, metrics should be: * tangible and quantitative * specific and measurable * achievable and realistic ### 5. Prioritise radically What would you do differently if you only had half the time? How will you start small and build up? What’s the least amount of work you can do to learn the most? Use these thoughts to consider any changes to your initiatives. Go back and edit the initiatives if you need to. ## Don’t worry about adapting your plans A core tenet of agile is responding to change over following a plan, so don’t be afraid to change your plans based on new information. The quarterly checkpoint isn’t the only time you can look back to look forward – that’s why retrospectives are useful. You can use the activities above at any point. The best product teams build these behaviours into their regular practice. If you’d like help running these workshops or have any questions, get in touch and we’ll set up a chat.
boringmagi.cc
October 29, 2025 at 1:22 PM
Tips on doing show & tell well
## What is a show & tell? A show & tell is a regular get-together where people working on a product or service celebrate their work, talk about what they’ve learned, and get feedback from their peers. It’s also a chance to * bring together team members, management and leadership to bond, share success, and collaborate * let colleagues know what you’re working on, keep aligned, and create opportunities to connect and work together * tell stakeholders (including users, partner organisations and leadership) what you’ve been doing and take their questions as feedback (a form of governance). A show & tell may be internal, limited to other people in the same team or organisation, or open to anyone to join. Most teams start with an internal show & tell and make these open later. A show & tell might also be called a team review. ## How to run a great show & tell 1. **Don’t make it up on the spot** Spend time as a team working out what you want to say and who is going to share stories with the audience (1 or 2 people works best). 30 to 60 minutes of prep will pay off. 2. **Set the scene** Always introduce your project or epic. Who’s on the team? What are you working on? What problem are you solving? Who are your users? Why are you doing it? You don’t need to tell the full history, a 30-second overview is enough. 3. **Show the thing!** Scrappy diagrams, Mural boards, Post-it notes, screenshots, scribbles, photos, and clicking through prototypes bring things to life. Text and code is OK, but always aim to demonstrate something working – don’t just talk through a doc or some function. 4. **Talk about what you’ve learned** Share which assumptions turned out to be incorrect, or what facts surprised you. Show clips from user research and usability testing. Highlight important analytics data or performance measures. Share both findings and insights. Be clear on the methodology and any confidence intervals, levels of confidence, risky assumptions, etc. 5. **Be clear** Don’t hide behind jargon. Make bold statements. Say what you actually think! This helps everyone concentrate on the main point, and it generates discussion. 1. **Always share unfinished thinking** Forget about the polish and perfection. A show & tell is the perfect place to collect feedback, ideas and thoughts. It’s a complicated space. We’re all trying to figure it out! 2. **Rehearse** Take 10–15 minutes to rehearse your section with your team to work out whether you need to cut anything. If you’re struggling to edit, use a format like What? So what? Now what? to keep things concise. If you take up more time than you’ve been given, it’ll eat into other people’s section meaning they have to rush (or not share at all) which isn’t fair. 3. **Leave time for questions** The best show & tells have audience participation. Wherever possible, leave time for questions – either after each team or at the end. Encourage people to ask questions in the chat, on Slack, in docs, etc. If you do nothing else, follow tip number 3. You can read more tips on good show & tells from Mark Dalgarno, Emily Webber and Alan Wright. ## How to be a great show & tell audience member 1. **Be present and listen** There’s nothing worse than preparing for a show & tell only to realise that no one’s paying attention. Close Slack, close Teams, stop looking at email, and give your full attention to your team-mates. 2. **Smile, use emojis, and celebrate!** Bring the good vibes and lift each other up whenever there’s something worth celebrating. 
## It’s ok to be halfway done The main thing to remember is that show & tell is not just about sharing progress and successes. It’s a time to talk about what’s hard and what didn’t work too. It’s ok to be halfway done. It’s ok to go back to the drawing board. Each sprint, try to answer these questions in your show & tell: * What did we learn or what changed our mind? * What can we show? How can we help people see behind the scenes? * What haven’t we figured out? What do we want feedback on?
boringmagi.cc
October 29, 2025 at 1:22 PM
Going faster with Lean principles
Software teams are often asked to go faster. There are many factors that influence the speed at which teams can discover, design and deliver solutions, and those factors aren’t always in a team’s control. But Lean principles offer teams a way to analyse and adapt their operating model – their ways of working. ## What is Lean? Lean is a method of manufacturing that emerged from Toyota’s Production System in the 1950s and 1960s. It’s a system that incorporates methods of production and leadership together. The early Agile community used Lean principles to inspire methods for making digital products and services. These principles have had influence beyond the production environment and have been adapted for business and strategy functions too. ## Books on Lean Four books on Lean principles have influenced the way I work. **1._Lean Software Development: An Agile Toolkit_ by Mary and Tom Poppendieck** The earliest of the four books. It really set the standard. **2._The Lean Startup_ by Eric Ries** This started a big movement for applying Lean principles to your startup, including testing out new business models or growth opportunities. **3._Lean UX_ by Jeff Gothelf and Josh Seiden** One of my favourites. This one really brought strategic goals and user experience closer together. It also shifted teams from writing problem statements to writing hypotheses. **4._The Lean Product Playbook_ by Dan Olsen** This is relatively similar to _The Lean Startup_ but is more of a playbook, showing the practice that goes with the theory. The highlight is its emphasis on MVP tests: experiments you can run to learn something without building anything. ## Lean principles All these books have some principles in their pages, all based on the original Lean principles from Toyota. They’re all pretty similar. Combining their approaches helps us apply Lean principles to business model development, strategy, user-centred design and software delivery. > A note on principles: Principles are not rules. Principles guide your thinking and doing. Rules say what’s right and wrong. ### 1. Eliminate waste Reduce anything which does not help deliver value to the user. So: partially done work; scope creep; re-learning; task-switching; waiting; hand-offs; defects; management activities. Outcomes, not outputs. ### 2. Amplify learning Build, measure, learn. Create feedback loops. Build scrappy prototypes, run spikes. Write tests first. Think in iterations. ### 3. Decide as late as possible Call out the assumptions or uncertainties, try out different options, and make decisions based on facts or evidence. ### 4. Deliver as fast as possible Shorter cycles improve learning and communication, and helps us meet users’ needs as soon as possible. Reduce work in progress, get one thing done, and iterate. ### 5. Empower the team Figure it out together. Managers provide goals, encourage progress, spot issues and remove impediments. Designers, developers and data engineers suggest how to achieve a goal and feed in to continuous improvement. ### 6. Build integrity in Agility needs quality. Automated tests and proven design patterns allow you to focus on smaller parts of the system. A regular flow of insights to act on aids agility. ### 7. Optimise the whole Focus on the entire value stream, not just individual tasks. Align strategy with development. Consider the entire user experience in the design process. 
## Three simpler principles If those seem like too many to get started with, I want to introduce three simpler principles that can help you go faster. I came across these in a book about running, which doesn’t seem like the place you’d find inspiration about product management! Think easy, light and smooth. It’s from a man called Micah True who lived in the Mexican desert and went running with the local Native Americans. They called him Caballo Blanco – ‘White Horse’ – because of his speed. > “You start with easy, because if that’s all you get, that’s not so bad. Then work on light. Make it effortless, like you don’t give a shit how high the hill is or how far you’ve got to go. When you’ve practised that so long that you forget you’re practicing, you work on making it smooooooth. You won’t have to worry about the last one – you get those three, and you’ll be fast.” You can do this every cycle. Find one thing to make easier, one thing to make lighter, and one thing to make smoother. Fast will happen naturally.
boringmagi.cc
October 29, 2025 at 1:22 PM
You don’t have to do fortnightly sprints
In early 2024, we helped GOV.‌UK Design System design and implement a new model for agile delivery. It was a break away from traditional Scrum and two-week sprints towards an emphasis on iteration and reflection. ## Why change things? Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but those don’t work for well established or high performing teams. For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes. For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days with two-week sprints. Sprint goals suck too. It’s far too easy to push it along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing. ## How it works You can see how it works in detail on the GOV.‌UK Design System’s team playbook and in a blog post from the team’s delivery manager, Kelly. There’s also a graphic that brings the four-week cycle to life. There are a few principles that make this method work: * Fixed time, variable scope * Think in iterations: vertical not horizontal slices * Each cycle ends with something shippable or showable * R&D cycles end on decisions around scope * Each cycle starts with a brief, but the team has autonomy over delivery This gives space for ideas and conversations to breathe, for spikes and scrappy prototypes to come together, and for teams to make conscious decisions about scope and delivering value to users. ## How did it work out? In their first cycle, the team delivered three out of five briefs – which was higher than their completion rate at the time. As Kelly reported, ‘most team members enjoyed working in smaller, focused groups and having autonomy over how they deliver their work.’ A few months later, we analysed how often the team was releasing new software: **they were releasing twice as often in half the time.** Between October 2022 and October 2023, there were five releases. Between October 2023 and March 2024, there were 10 releases. One year on and the team has maintained momentum. Iterations have increased, they’ve built a steady rhythm of releasing GOV.‌UK Frontend more frequently, and according to a recent review the team is a lot happier working that way. ## Want to try something new? If you’re looking to increase team happiness and effectiveness, drop us a line and we can chat about transforming your team’s delivery model too.
boringmagi.cc
October 29, 2025 at 1:22 PM
Our positions on generative AI
Like many trends in technology before it, we’re keeping an eye on artificial intelligence (AI). AI is more of a concept, but generative AI as a general purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something _even more_ ubiquitous like HTML. Given the hype, it feels worthwhile stating our positions on generative AI – or as we like to call it, ‘applied statistics’. We’re open to working on and with it, but there’s a few ideas we’ll bring to the table. ## The positions 1. Utility trumps hyperbole 2. Augmented not artificial intelligence 3. Local and open first 4. There will be consequences 5. Outcomes over outputs ### Utility trumps hyperbole The fundamental principle to Boring Magic’s work is that people want technologies to work. People prefer things to be functional first; the specific technologies only matter when they reduce or undermine the quality of the utility. There are outsized, unfounded claims being made about the utility of AI. It is not ‘more profound than fire’. The macroeconomic implications of AI are often overstated, but it’ll still likely have an impact on productivity. We think it’s sensible to look at how generative AI can be useful or make things less tedious, so we’re exploring the possibilities: from making analysis more accessible through to automating repeatable tasks. We won’t sell you a bunch of hype, just deliver stuff that works. ### Augmented not artificial intelligence Technologies have an impact on the availability of jobs. The introduction of the digital spreadsheet meant that chartered accountants could easily punch the numbers, leading to accounting clerks becoming surplus to requirements. Jevon’s paradox teaches us that AI will lead to more work, not less. Over time accountants needed fewer clerks, but increases in financial activity have lead to a greater need for auditors. So we will still need people in jobs to do thinking, reasoning, assessing and other things people are good at. Rather than replacing people with machines to reduce costs, technology should be used to empower human workers. We should augment the intelligence of our people, not replace it. That means using things like large language models (LLMs) to reduce the inertia of the blank page problem, helping with brainstorming, rather than asking an LLM to write something for you. Extensive not intensive technology. ### Local and open first Right now, we’re in a hype cycle, with lots of enthusiasm, funding and support for generative AI. The boom of a hype cycle is always followed by a bust, and AI winters have been common for decades. If you add AI to your product or service and rely on a cloud-based supplier for that capability, you could find the supplier goes into administration – or worse, enshittification, when fees go up and the quality of service plunges. And free services are monetised eventually. But there are lots of openly-available generative text and vision models you can run on your own computer – your ‘local machine’ – breaking the reliance on external suppliers. When exploring how to apply generative AI to a client’s problem, we’ll always use an open model and run it locally first. It’s cheaper than using a third party, and it’s more sustainable too. 
It also mitigates some risks around privacy and security by keeping all data processing local, not running on a machine in a data centre. That means we can get started sooner and do a data protection impact assessment later, when necessary. We can use the big players like OpenAI and Anthropic if we need to, but let’s go local and open first. ### There will be consequences People like to think of technology as a box that does a specific thing, but technology impacts and is impacted by everything around it. Technology exists within an ecology. It’s an inescapable fact, so we should try to sense the likely and unlikely consequences of implementing generative AI – on people, animals, the environment, organisations, policy, society and economies. That sounds like a big project, but there are plenty of tools out there to make it easier. We’ve used tools like consequence scanning, effects mapping, financial forecasting, Four Futures and other extrapolation methods to explore risks and harms in the past. As responsible people, it’s our duty to bring unforeseen consequences more into view, so that we can think about how to mitigate the risks or stop. ### Outcomes over outputs It feels like everyone’s doing something with generative AI at the moment, and, if you’re not, it can lead to feeling left out. But this doesn’t mean you have to do something: FOMO is not a strategy. We’ll take a look at where generative AI might be useful, but we’ll also recommend other technologies if those are cheaper, faster or more sustainable. That might mean implementing search and filtering instead of a chatbot, especially if it’s an interface that more people are used to. It’s more important to get the job done and achieve outcomes, instead of doing the latest thing because it’s cool. ## Let’s be pragmatic Ultimately our approach to generative AI is like any other technology: we’re grounded in practicality, mindful of being responsible and ethical, and will pursue meaningful outcomes. It’s the best way to harness its potential effectively. Beware the AI snake oil.
boringmagi.cc
October 29, 2025 at 1:22 PM
Metrics, measures and indicators: a few things to bear in mind
Metrics, measures and indicators help you track and evaluate outcomes. They can tell us if we’re moving in the right direction, if things aren’t going well, or if we’ve achieved the outcome we set out to achieve. If you’ve reported on key performance indicators (KPIs), checked progress against objectives and key results (OKRs) or looked at user analytics, you’ll have some experience with metrics, measures and indicators.

These words are often used interchangeably and, in general, the difference isn’t important – not for this post, anyway. We can talk about the difference between metrics, measures and indicators later. In this post we’ll cover some guiding principles for designing and using metrics, measures and indicators. A few things to bear in mind.

## Guiding principles

1. Value outcomes over outputs
2. Measures, not targets
3. Balance the what (quantitative) and the why (qualitative)
4. Measure the entire product or service
5. Keep them light and actionable
6. Revisit or refine as things change

### Value outcomes over outputs

We acknowledge that outputs are on the path to achieving outcomes. You can’t cater for a memorable birthday party without making some sandwiches. But delivering outcomes is the real reason why we’re here. So we don’t measure whether we’ve delivered a product or feature, we measure the impact it’s having.

### Measures, not targets

Follow Goodhart’s Law: ‘When a measure becomes a target, it ceases to be a good measure.’ There are numerous factors that contribute to a number or reading going up or down. Metrics, measures and indicators are a starting point for a conversation, so we can ask why and do something about it (or not). The measures are in service of learning: tools, not goals.

### Balance the what (quantitative) and the why (qualitative)

Grown-ups love numbers. But it’s very easy to ignore what users think and feel when you only track quantitative measures. Numbers tell us what’s happening, but feedback can tell us why. There’s no point doing something faster if it makes the experience worse for users, for example – we have to balance quantity and quality.

### Measure the entire product or service

If we can see where people start, how they move through and where they end, we can identify where to focus our efforts for improvements. The same is true for people who come back too: we want to see whether we’ve made things better than the last time they were here. If you’re only measuring one part, you only know how one part is performing. Get holistic readings (there’s a sketch of what this can look like after these principles).

### Keep them light and actionable

It’s easy to go overboard and start tracking everything, but too much information can be a bad thing. If we track too many metrics, we run the risk of analysis paralysis. Similarly, one measure is too few: it’s not enough to understand an entire system. Four to eight key metrics or indicators per team is enough and should inspire action.

### Revisit or refine as things change

Our priorities will change over time, meaning we will need to change our indicators, measures and metrics too. It’s no use tracking and reporting on datapoints that don’t relate to outcomes. Measure what matters. We should aim not to change them too frequently – that causes whiplash. But it’s all right to change them when you change direction or focus.
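To illustrate ‘measure the entire product or service’, here’s a minimal sketch of reading a whole journey as a funnel. The step names and events are hypothetical; the point is that every stage gets a reading, not just one.

```python
# A minimal sketch of a whole-journey funnel, using hypothetical
# step names and analytics events.

# Journey steps in order, and some example events.
STEPS = ["landed", "searched", "viewed_dataset", "downloaded"]
events = [
    {"user": "a", "step": "landed"},
    {"user": "a", "step": "searched"},
    {"user": "a", "step": "viewed_dataset"},
    {"user": "b", "step": "landed"},
    {"user": "b", "step": "searched"},
    {"user": "c", "step": "landed"},
]

# Unique users reaching each step, then the drop-off between steps.
reached = {step: {e["user"] for e in events if e["step"] == step} for step in STEPS}
for previous, current in zip(STEPS, STEPS[1:]):
    before, after = len(reached[previous]), len(reached[current])
    rate = after / before if before else 0.0
    print(f"{previous} -> {current}: {after}/{before} users ({rate:.0%})")
```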
## Are we on the way? Or did we get there?

Those principles are handy for working out what to measure, but there are two types of indicator you need to know about: leading and lagging.

Leading indicators tell us whether we’re making progress towards an outcome. _Are we on the way?_ For example, if we want to make it easy to find datasets, are people searching for data? Is the number of people searching for data going up?

Lagging indicators tell us whether we’ve achieved the outcome. _Did we get there?_ In that same example, making it easy to find datasets, what’s the user satisfaction score? Are they requesting new datasets?
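To make the distinction concrete, here’s a minimal sketch using the same dataset-finding example. The weekly search counts and survey scores are hypothetical.

```python
# A minimal sketch of one leading and one lagging indicator,
# using hypothetical weekly analytics and survey data.

# Leading indicator: are more people searching for data week on week?
weekly_searches = {"2025-W01": 120, "2025-W02": 150, "2025-W03": 185}
weeks = sorted(weekly_searches)
change = weekly_searches[weeks[-1]] - weekly_searches[weeks[-2]]
print(f"Searches last week: {weekly_searches[weeks[-1]]} (change: {change:+d})")

# Lagging indicator: once the work has landed, did satisfaction improve?
# Scores are hypothetical 1-5 survey responses.
satisfaction_scores = [4, 5, 3, 4, 5, 4]
average = sum(satisfaction_scores) / len(satisfaction_scores)
print(f"User satisfaction: {average:.1f} / 5")
```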
boringmagi.cc
October 29, 2025 at 1:22 PM
Using quarters as a checkpoint
Breaking your strategy down into smaller, more manageable chunks can help you make more progress sooner. Some things take a long while to achieve, but smaller goals help us celebrate the wins along the way. Many organisations use a quarter – a block of 3 months – to do this. And it can be helpful to look back before you look forward, to celebrate the progress you’ve made and work out what to do next.

Every 3 months, we encourage product teams to take the opportunity to step back from the day-to-day and consider the objectives they’re working towards. The quarterly checkpoint is a time to refocus efforts and double down, change direction or move on to the next objective.

There are 2 stages to using the quarterly checkpoint well:

1. Check on your progress
2. Plan how to achieve your new objectives

Here are 2 workshops you can run, one for each stage, but you can combine them into one workshop if you like. Whatever works.

## Check on your progress

First, check on the progress your team has made on your objectives and key results (OKRs). You can do this in a team workshop lasting 30 to 60 minutes.

### 1. List out the OKRs you’ve been working on (10 to 20 mins)

Run through the OKRs you’ve been working on. Talk about the progress you made on each key result and celebrate the successes – big or small!

### 2. Think about what’s left to do (20 to 40 mins)

For any OKRs you haven’t completed – where progress on key results isn’t 100% – discuss as a team which initiatives you have left to do to fully achieve the objective. For example, you may need to collect some data, run a test, build a thing or achieve an outcome.

Consider whether you should change your approach, for example, by doing something smaller or using different methods, based on what you’ve learned over the last quarter. It’s OK to stick to the original plan if it’s still the best approach. Write down what initiatives your team has agreed to do.

## Plan how to achieve your new objectives

Next, you’ll need to form a loose plan for how to achieve your new objectives. You can treat unfinished objectives from the previous quarter as new objectives. Run another workshop lasting 30 to 45 minutes for each objective.

Everyone on the team will need to input on the plan using the outline below. Write it in a doc, a slide deck or on a whiteboard – whatever works. You will probably want to present these plans to the senior management team or service owner at the start of the new quarter.

If it’s easier than starting with a blank page, team leads can fill in the outline and get feedback from the rest of the team. As long as everyone gets a chance to input, it doesn’t matter. It’s OK if you take less than 30 minutes, especially if you already have a plan.

### 1. Write down and describe the objective

An objective is a bold, qualitative goal that the organisation wants to achieve. It’s best that objectives are ambitious, even audacious in nature – not super easy to achieve; they are not sales targets. Write down the problem you’re solving and who it’s a problem for. Discuss how you’ll know when you’re done. What are the success criteria?

### 2. Think about risks and unknowns

What might be a challenge? What are the riskiest assumptions or big unknowns to highlight? Do you need to try new techniques? These might form the first initiatives in your plan.

You can frame your assumptions using the hypothesis statement:

**Because** [of something we think or know]
**We believe that** [an initiative or task]
**Will achieve** [desired outcome]
**Measured by** [metric]
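For example, a hypothetical filled-in statement might read:

**Because** new users tell us they can’t find relevant datasets
**We believe that** redesigning the search results page
**Will achieve** more users finding the data they need
**Measured by** the weekly number of successful dataset downloads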
Note down dependencies on other teams, for example, where you may need another team to do something for you.

### 3. Detail all the initiatives

Write a sentence for each of the initiatives – tasks and activities – you’ll need to do to achieve the objective. Consider research and discovery activities, which can help you gather information to turn unknowns into knowns. Consider alphas, things to prototype, spikes, and experiments that can help you de-risk or validate assumptions. Make sure to remember the development and delivery work too – that’s how we release value to users!

### 4. What will you measure?

Review your success criteria. Define the metrics that will tell you when you’ve finished or achieved the objective. These should tell you when you’re done and will become your key results. Remember, metrics should be:

* tangible and quantitative
* specific and measurable
* achievable and realistic

### 5. Prioritise radically

What would you do differently if you only had half the time? How will you start small and build up? What’s the least amount of work you can do to learn the most? Use these thoughts to consider any changes to your initiatives. Go back and edit the initiatives if you need to.

## Don’t worry about adapting your plans

A core tenet of agile is responding to change over following a plan, so don’t be afraid to change your plans based on new information. The quarterly checkpoint isn’t the only time you can look back to look forward – that’s why retrospectives are useful. You can use the activities above at any point. The best product teams build these behaviours into their regular practice.

If you’d like help running these workshops or have any questions, get in touch and we’ll set up a chat.
boringmagi.cc
October 28, 2025 at 11:22 AM
Going faster with Lean principles
Software teams are often asked to go faster. There are many factors that influence the speed at which teams can discover, design and deliver solutions, and those factors aren’t always in a team’s control. But Lean principles offer teams a way to analyse and adapt their operating model – their ways of working.

## What is Lean?

Lean is a method of manufacturing that emerged from the Toyota Production System in the 1950s and 1960s. It’s a system that brings together methods of production and leadership. The early Agile community used Lean principles to inspire methods for making digital products and services. These principles have had influence beyond the production environment and have been adapted for business and strategy functions too.

## Books on Lean

Four books on Lean principles have influenced the way I work.

**1. _Lean Software Development: An Agile Toolkit_ by Mary and Tom Poppendieck**
The earliest of the four books. It really set the standard.

**2. _The Lean Startup_ by Eric Ries**
This started a big movement for applying Lean principles to your startup, including testing out new business models or growth opportunities.

**3. _Lean UX_ by Jeff Gothelf and Josh Seiden**
One of my favourites. This one really brought strategic goals and user experience closer together. It also shifted teams from writing problem statements to writing hypotheses.

**4. _The Lean Product Playbook_ by Dan Olsen**
This is relatively similar to _The Lean Startup_ but is more of a playbook, showing the practice that goes with the theory. The highlight is its emphasis on MVP tests: experiments you can run to learn something without building anything.

## Lean principles

All these books share some principles, all based on the original Lean principles from Toyota, and they’re all pretty similar. Combining their approaches helps us apply Lean principles to business model development, strategy, user-centred design and software delivery.

> A note on principles: principles are not rules. Principles guide your thinking and doing. Rules say what’s right and wrong.

### 1. Eliminate waste

Reduce anything which does not help deliver value to the user. So: partially done work; scope creep; re-learning; task-switching; waiting; hand-offs; defects; management activities. Outcomes, not outputs.

### 2. Amplify learning

Build, measure, learn. Create feedback loops. Build scrappy prototypes, run spikes. Write tests first (there’s a tiny sketch of this after these principles). Think in iterations.

### 3. Decide as late as possible

Call out the assumptions or uncertainties, try out different options, and make decisions based on facts or evidence.

### 4. Deliver as fast as possible

Shorter cycles improve learning and communication, and help us meet users’ needs as soon as possible. Reduce work in progress, get one thing done, and iterate.

### 5. Empower the team

Figure it out together. Managers provide goals, encourage progress, spot issues and remove impediments. Designers, developers and data engineers suggest how to achieve a goal and feed into continuous improvement.

### 6. Build integrity in

Agility needs quality. Automated tests and proven design patterns allow you to focus on smaller parts of the system. A regular flow of insights to act on aids agility.

### 7. Optimise the whole

Focus on the entire value stream, not just individual tasks. Align strategy with development. Consider the entire user experience in the design process.
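As a tiny illustration of ‘write tests first’ from principle 2, here’s a hedged sketch: the test states the behaviour we want before the code exists. The function and postcode example are hypothetical.

```python
# A tiny test-first sketch (principle 2: amplify learning).
# The test below was written before normalise_postcode existed:
# it states the behaviour we want, and fails until the code satisfies it.


def test_postcode_is_normalised():
    assert normalise_postcode(" sw1a 1aa ") == "SW1A 1AA"


def normalise_postcode(raw: str) -> str:
    """The simplest code that makes the test above pass."""
    return raw.strip().upper()


if __name__ == "__main__":
    test_postcode_is_normalised()
    print("test passed")
```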
## Three simpler principles

If those seem like too many to get started with, here are three simpler principles that can help you go faster. I came across them in a book about running – not the obvious place to find inspiration about product management! Think easy, light and smooth.

It comes from a man called Micah True, who lived in the Mexican desert and ran with the local Rarámuri people. They called him Caballo Blanco – ‘White Horse’ – because of his speed.

> “You start with easy, because if that’s all you get, that’s not so bad. Then work on light. Make it effortless, like you don’t give a shit how high the hill is or how far you’ve got to go. When you’ve practised that so long that you forget you’re practicing, you work on making it smooooooth. You won’t have to worry about the last one – you get those three, and you’ll be fast.”

You can do this every cycle. Find one thing to make easier, one thing to make lighter, and one thing to make smoother. Fast will happen naturally.
boringmagi.cc
October 28, 2025 at 11:22 AM
Tips on doing show & tell well
## What is a show & tell?

A show & tell is a regular get-together where people working on a product or service celebrate their work, talk about what they’ve learned, and get feedback from their peers. It’s also a chance to:

* bring together team members, management and leadership to bond, share success, and collaborate
* let colleagues know what you’re working on, keep aligned, and create opportunities to connect and work together
* tell stakeholders (including users, partner organisations and leadership) what you’ve been doing and take their questions as feedback (a form of governance).

A show & tell may be internal, limited to other people in the same team or organisation, or open to anyone to join. Most teams start with an internal show & tell and open it up later. A show & tell might also be called a team review.

## How to run a great show & tell

1. **Don’t make it up on the spot.** Spend time as a team working out what you want to say and who is going to share stories with the audience (1 or 2 people works best). 30 to 60 minutes of prep will pay off.
2. **Set the scene.** Always introduce your project or epic. Who’s on the team? What are you working on? What problem are you solving? Who are your users? Why are you doing it? You don’t need to tell the full history; a 30-second overview is enough.
3. **Show the thing!** Scrappy diagrams, Mural boards, Post-it notes, screenshots, scribbles, photos, and clicking through prototypes bring things to life. Text and code are OK, but always aim to demonstrate something working – don’t just talk through a doc or some function.
4. **Talk about what you’ve learned.** Share which assumptions turned out to be incorrect, or what facts surprised you. Show clips from user research and usability testing. Highlight important analytics data or performance measures. Share both findings and insights. Be clear on the methodology and any confidence intervals, levels of confidence, risky assumptions, and so on.
5. **Be clear.** Don’t hide behind jargon. Make bold statements. Say what you actually think! This helps everyone concentrate on the main point, and it generates discussion.
6. **Always share unfinished thinking.** Forget about polish and perfection. A show & tell is the perfect place to collect feedback, ideas and thoughts. It’s a complicated space. We’re all trying to figure it out!
7. **Rehearse.** Take 10 to 15 minutes to rehearse your section with your team to work out whether you need to cut anything. If you’re struggling to edit, use a format like What? So what? Now what? to keep things concise. If you take up more time than you’ve been given, it’ll eat into other people’s sections, meaning they have to rush (or not share at all), which isn’t fair.
8. **Leave time for questions.** The best show & tells have audience participation. Wherever possible, leave time for questions – either after each team or at the end. Encourage people to ask questions in the chat, on Slack, in docs, etc.

If you do nothing else, follow tip number 3. You can read more tips on good show & tells from Mark Dalgarno, Emily Webber and Alan Wright.

## How to be a great show & tell audience member

1. **Be present and listen.** There’s nothing worse than preparing for a show & tell only to realise that no one’s paying attention. Close Slack, close Teams, stop looking at email, and give your full attention to your team-mates.
2. **Smile, use emojis, and celebrate!** Bring the good vibes and lift each other up whenever there’s something worth celebrating.
## It’s OK to be halfway done

The main thing to remember is that show & tell is not just about sharing progress and successes. It’s a time to talk about what’s hard and what didn’t work too. It’s OK to be halfway done. It’s OK to go back to the drawing board.

Each sprint, try to answer these questions in your show & tell:

* What did we learn or what changed our mind?
* What can we show? How can we help people see behind the scenes?
* What haven’t we figured out? What do we want feedback on?
boringmagi.cc
October 28, 2025 at 11:22 AM
You don’t have to do fortnightly sprints
In early 2024, we helped GOV.UK Design System design and implement a new model for agile delivery. It was a break away from traditional Scrum and two-week sprints towards an emphasis on iteration and reflection.

## Why change things?

Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but they don’t work as well for well-established or high-performing teams. For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes.

For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days in every sprint.

Sprint goals suck too. It’s far too easy to push a goal along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing.

## How it works

You can see how it works in detail in the GOV.UK Design System’s team playbook and in a blog post from the team’s delivery manager, Kelly. There’s also a graphic that brings the four-week cycle to life. There are a few principles that make this method work:

* Fixed time, variable scope
* Think in iterations: vertical, not horizontal, slices
* Each cycle ends with something shippable or showable
* R&D cycles end on decisions around scope
* Each cycle starts with a brief, but the team has autonomy over delivery

This gives space for ideas and conversations to breathe, for spikes and scrappy prototypes to come together, and for teams to make conscious decisions about scope and delivering value to users.

## How did it work out?

In their first cycle, the team delivered three out of five briefs – a higher completion rate than they’d been managing before. As Kelly reported, ‘most team members enjoyed working in smaller, focused groups and having autonomy over how they deliver their work.’

A few months later, we analysed how often the team was releasing new software: **they were releasing twice as often in half the time.** Between October 2022 and October 2023, there were five releases. Between October 2023 and March 2024, there were 10 releases. (A sketch of that analysis is at the end of this post.)

One year on, the team has maintained momentum. Iterations have increased, they’ve built a steady rhythm of releasing GOV.UK Frontend more frequently, and according to a recent review the team is a lot happier working that way.

## Want to try something new?

If you’re looking to increase team happiness and effectiveness, drop us a line and we can chat about transforming your team’s delivery model too.
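For the curious, here’s a minimal sketch of the kind of release-cadence check we ran. It assumes the public GitHub API and the alphagov/govuk-frontend repository, and it glosses over pagination – illustrative, not the exact analysis.

```python
# A minimal sketch of counting GitHub releases per period.
# Assumes the `requests` package; pagination is simplified (first 100 only).
from datetime import datetime, timezone

import requests

URL = "https://api.github.com/repos/alphagov/govuk-frontend/releases"


def count_releases(start: datetime, end: datetime) -> int:
    """Count releases published between start (inclusive) and end (exclusive)."""
    releases = requests.get(URL, params={"per_page": 100}, timeout=30).json()
    published = [
        datetime.fromisoformat(r["published_at"].replace("Z", "+00:00"))
        for r in releases
        if r.get("published_at")
    ]
    return sum(start <= ts < end for ts in published)


year = count_releases(
    datetime(2022, 10, 1, tzinfo=timezone.utc),
    datetime(2023, 10, 1, tzinfo=timezone.utc),
)
half_year = count_releases(
    datetime(2023, 10, 1, tzinfo=timezone.utc),
    datetime(2024, 4, 1, tzinfo=timezone.utc),
)
print(f"Oct 2022 to Oct 2023: {year} releases")
print(f"Oct 2023 to Mar 2024: {half_year} releases")
```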
boringmagi.cc
October 28, 2025 at 11:22 AM