Moving Faster with Less: An Iterative and Incremental Development Toolkit (Part 1 — Planning Tools)

Kaila Corrington
Klaviyo Engineering
24 min read · Mar 12, 2024


When asked to describe the culture of the Klaviyo engineering team in a few adjectives, it’s not uncommon to find “fast-paced” somewhere in the mix. We’re often moving quickly and diving into projects where there are more unknowns than knowns, resulting in a high degree of uncertainty in everything we do, whether that’s what we build or how we build it. As engineers, we not only have to be comfortable with uncertainty; we have to be efficient with it.

I’m a part of the SMS Market Expansion team, whose charter has us navigating across team and domain boundaries to strategically tackle high-priority projects that will give us a competitive edge when it comes to the SMS sending channel. Identifying those opportunities means weathering a continuously shifting market landscape from a product perspective, and it means staying flexible when building on top of areas of the codebase that might be shifting out from under us from an engineering perspective.

Over the past several years, my team has been experimenting with making the way we plan for and execute upon work more iterative and incremental. The strategies that we’ve developed have served us well when it comes to building high quality features in a constantly evolving environment and delivering them to customers for feedback as quickly as possible.

A group of five cartoon meerkats sit around a table collaborating over architectural plans.
Meet Marty the Meerkat! He’s the SMS Market Expansion team “mascot,” and in addition to popping up in our Slack channels, you’ll see him throughout this blog post series acting as a champion of iterative and incremental development as well as collaboration.

In this technical blog post series, I’ll dive deeper into the motivation behind our experimentation, and then provide a sampling of the practices we’ve adopted that have worked well. It’s important to note that this is just a snapshot on our journey as we continue to inspect and adapt how we operate as a team; regardless, it is my hope that the tools here will inspire you and your teams in adopting iterative and incremental development mentalities to navigate inevitable uncertainty with confidence.

Where We Started

In mid-2021, the SMS Market Expansion team realized that we had started gravitating toward waterfall tendencies in our product development lifecycle (PDLC) that were causing our team friction.

The waterfall methodology states that one phase of the PDLC should be completed in its entirety before progressing to the next; for example, only after the planning phase has been wrapped up can development begin. There’s nothing inherently wrong with using waterfall development practices in software engineering, but like with any toolkit, you should be intentional about which tools you’re reaching for and when.

A flow diagram depicts the states of a waterfall product development lifecycle. A product feature specification document must be written and approved before a technical specification document can be written and approved before development can begin. Once development is completed, then a release can happen.
In waterfall development, each phase should be completed before the next can begin. For the SMS Market Expansion team, this meant waiting until the product feature specification was written before starting any technical specification documents, and then completing those before development could start.

Waterfall development works incredibly well when rigid requirements are clearly outlined at the start and unlikely to change over the course of the project. This may be true for small projects in mature systems or even for large projects in highly regulated environments, but in most other cases, the realities of software development will more often give advantage to flexible workflows.

As the SMS Market Expansion team started taking on larger projects, we found ourselves commonly crossing team and domain boundaries. We also recognized that we needed to be nimble enough to respond to changes in the market after a project had already begun. It quickly became clear that it was time to update our toolbox.

To do so, we evaluated our team’s current PDLC with a special focus on how we planned and executed upon work, and we had several realizations:

  • We could deliver business value to customers faster if we were less focused on defining the end vision down to the minute details.
  • Priorities can change quickly at a fast-paced company, meaning any upfront planning work might be wasted entirely.
  • We used a highly technical breakdown of work that often made it difficult to get a pulse check on overall progress and identify opportunities for end-to-end testing as early as possible.
  • Working on multiple projects at once might give the appearance of forward progress across the board, but in actuality, we were making it difficult to manage timeline expectations and were slowing ourselves down in the long run due to context-switching and knowledge siloing.

We hypothesized that we could reduce all of the above pain points by focusing our effort on two thematic anti-patterns that were present in our PDLC:

  • Our planning and execution process had become too waterfall (or sequential) in nature.
  • We were relying too heavily on siloing within the team to build knowledge and execute.

Both of these themes made it difficult for our team to be flexible in response to change — whether that was a change in prioritization, a change in feature requirements, a change in expected timeline, a teammate winning the lottery and jetting off to live out their wildest dreams, etc.

Given that change was something that we wanted to optimize for, it made sense to look toward the iterative and incremental mindsets at the core of the Agile project management methodology, which focuses on embracing change, uncertainty, and rapid delivery in a very customer-focused way. We were already partially operating as an Agile Scrum team, with two-week sprint timeboxes and other ceremonies and artifacts (grooming, retros, a product backlog, etc.), but, as with so many other teams in the industry, waterfall tendencies continued to prevail under the hood. Taking an intentional step back gave us an opportunity to look further and incorporate techniques from various other Agile frameworks without ending up with a heavyweight or prescriptive process.

The Planning Toolkit

The first area where we focused our efforts was revamping our planning process, and we knew the most crucial aspect would be identifying a better approach to determining the scope of a project. We wanted to see if we could cut back on the time it took before Product and Engineering were aligned enough for development to begin.

This requires working with Product to identify the core, table-stakes functionality that will still provide value to customers, and then focusing on driving consensus around any major open questions that could drastically impact those table stakes.

To achieve this, we now use the following tools:

  • Phased release scopes
  • Demoable milestones
  • Thin, vertically sliced stories
  • Abstract story points and planning poker
  • Technical spikes

Over the past several years, these tools have worked well when it comes to negotiating a minimal foundational scope, organizing the work within that scope in a way that enables faster feedback for end-to-end functionality, and developing confidence in building a solution with minimal upfront planning.

The sections that follow will explain each tool in more detail with concrete examples.

Phased Release Scopes

Engineers are great at breaking things down. We do it every day when we’re abstracting and modularizing code to identify the common themes that can be reused to build up larger algorithms and systems. Why shouldn’t we also apply this break-it-down-to-build-it-back-up mentality not just to the code we write but to the scope of the projects we take on?

One of the first ways we wanted to disrupt our waterfall planning patterns was by reducing the scope of what we were committing to in the first place. We started working with Product earlier, at the very start of the feature specification process. The goal was to encourage the breakdown of a potentially lofty end vision requirements document into phased release scopes that would identify multiple iterations viable for customer release. We could still include all of the functionality that had been documented for the end vision, but we would organize its release in a way that would enable faster delivery into customer hands by defining incremental deliverables that could be released independently.

The first release phase of a project should always be scoped to deliver the minimum viable product (or MVP), which is the most minimal but still useful version of a product that can garner some amount of feedback from our customers. The term “MVP” is such a common phrase within startup culture that sometimes it’s easy to forget that even larger, established companies can find immense value in the underlying mindset. By focusing our efforts on delivering the MVP first, we’re able to validate our assumptions with real customers as early as possible. As engineers, we’re validating that our foundational systems can handle real customer data and usage patterns before we build anything larger, while product managers are simultaneously validating that what we’ve built is on the right track towards what customers actually want. If any of our assumptions turn out to be incorrect (and some definitely will; that’s part of the risk of moving quickly that we want to be comfortable with!), then releasing in this minimally scoped fashion enables us to learn more and pivot faster. As a side benefit, this also reduces wasted planning and execution work!

Negotiating the scope of this initial phased release can be tricky but is a critical starting point for any project. We’ve found that describing project scope in terms of the crawl-walk-run mentality can be a useful communication technique here. This builds on the simple idea that you have to be able to crawl before you can walk before you can run; in other words, you should invest in building your MVP foundation and learning from that before adding on additional features or complexity.

Let’s look at the SMS Conversations page for an example. This page was launched in 2021 and offers users the ability to navigate their brand’s text message threads much like a chat app.

A screenshot of the SMS Conversations page in Klaviyo with a list of active conversation threads between the brand and customer on the left side. The top thread is selected and opened, showing the SMS history between the brand and a customer. The brand is using a text input at the bottom of the screen to draft a reply to a customer question.
The SMS Conversations page allows users to click on a specific thread and see the history of SMS messages between the brand and that customer.

The original Product requirements included infinite scrolling within the message history of a conversational thread. This was based on the hypothesis that not having the context of the entire message history available would be a dealbreaker for customers. When presented with this requirement, Engineering recognized that building infinite scrolling would not be trivial — at the time, there was no precedent for infinite scrolling within the Klaviyo application and there were complexities with the APIs that we used to populate the message window. As part of negotiating our phased release scopes, we proposed the following crawl-walk-run approach:

  • Crawl: Load the 50 most recent messages in the message window without giving users any ability to load more. Behind the scenes, Engineering would commit to architecting the API in a way that could be extended to support pagination in the future.
  • Walk: Add a manual “Load More” button in the UI that would make use of the foundation we’d laid for API pagination to add another 50 messages.
  • Run: Implement infinite scrolling by automating the request for more messages via lazy loading as users scroll through the message history.

We worked with Product to agree that implementing the crawl phase would be enough for the Conversations page’s minimum viable functionality. This meant that a solution could be delivered to customers faster than if we had needed to invest upfront in the full infinite scrolling architecture.
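To make the “architected for pagination” idea concrete, here is a minimal, purely illustrative sketch of what a pagination-ready crawl endpoint could look like. The names and in-memory store are hypothetical stand-ins, not the actual Klaviyo Conversations API:

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy in-memory stand-in for the real message store (purely illustrative).
_MESSAGES_BY_CONVERSATION = {
    "conv-1": [f"message {i}" for i in range(200)],  # oldest -> newest
}


@dataclass
class MessagePage:
    messages: List[str]         # one page of messages, newest first
    next_cursor: Optional[int]  # cursor for the next (older) page, or None


def get_conversation_messages(
    conversation_id: str,
    limit: int = 50,
    cursor: Optional[int] = None,
) -> MessagePage:
    """Return the most recent messages for a conversation.

    The crawl-phase UI only ever calls this with the defaults (latest 50,
    no cursor), but the signature and response shape already support cursor
    pagination, so "Load More" (walk) and infinite scroll (run) later only
    need UI changes, not an API redesign.
    """
    history = _MESSAGES_BY_CONVERSATION.get(conversation_id, [])
    end = cursor if cursor is not None else len(history)
    start = max(end - limit, 0)
    page = list(reversed(history[start:end]))  # newest first
    return MessagePage(messages=page, next_cursor=start if start > 0 else None)


# Crawl phase: just the latest 50 messages, with no way to load more.
print(len(get_conversation_messages("conv-1").messages))  # 50
```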

It’s been nearly three years since that initial release, and if you visit the Conversations page today, you’ll see that we haven’t implemented anything beyond the crawl phase despite the original requirements calling for infinite scrolling. Once we released Conversations to our customers, we were able to use their feedback to prioritize the next features to build. Those features included things like inbox management tools and help desk integrations but not additional message loading functionality.

Successfully building alignment around a targeted incremental scope enables this sort of discovery. You will be able to maximize the work not done, because sometimes it just might turn out that the minimal solution is all you’ll ever need to build.

Demoable Milestones

The next tool we use is demoable milestones. These are checkpoints defined within our release phases that hold us accountable to validating our learnings and ensuring that we’re on the right track with stakeholders along the way. Just as with our release phases, Product and Engineering work together to negotiate and define these milestones, which builds alignment on how each milestone demonstrates business value and proves clear progress towards end objectives. The definition of a milestone should include a testable increment that exhibits end-to-end functionality so that you can test how multiple smaller pieces come together, potentially in a production environment.

A screenshot of an early design mock of the Conversations page with colored boxes grouping related functionality. A red box surrounds the list of active conversation threads; a green box surrounds a search input for the active conversation threads; a blue box surrounds the history of SMS messages between the brand and customer in the active conversation window; a yellow box surrounds a header for the active conversation; a purple box surrounds the text input for reply message composition.
Milestones should include the end-to-end functionality required to bring a feature to life; for example, the red box encompasses a milestone of “displaying the conversational thread list.”

Let’s revisit the SMS Conversations page for another example. The screenshot above was taken from an early technical planning document with the original designs for the Conversations page. Each colored box annotates a different piece of functionality; e.g. the red box is responsible for loading the list of message threads, the blue box contains the message history for a single thread, the green box highlights a search feature, etc.

The required functionality within the scope for the initial MVP release of Conversations could be broken down into the following milestones summarized by their testable increments:

  • Display Conversational Thread List: The 10 most recent conversation thread previews are displayed on the screen.
  • Display Conversation Message History: The 50 most recent messages are displayed in the message history window for the active conversation thread.
  • Send Conversation Message: A response can be composed and sent out as a text message within the active conversation thread.

By focusing on building one milestone at a time rather than distributing our efforts across all three tracks of work at once, we lessened the time it took to drive each milestone to completion. This is yet another way we reduce the turnaround time for feedback.

Typically at this stage of development, we’re leaning on Product and Design team members as internal stakeholders, but as Engineers, we’re also acting as stakeholders ourselves by unlocking the ability to test end-to-end performance. Feedback for a testable increment could include the discovery of a minor edge case bug that Product is okay deprioritizing until after the initial release. It could include noticing that the designs need slight optimizations, but we can still easily make those tweaks while our knowledge of the code is fresh. Or it could include the realization that retrieving certain data in a production environment is subject to different latencies than we saw in development, and we’d need to prioritize additional work to be able to provide a seamless experience for real customers (this one actually did happen).

Whatever the feedback is, the critical point is that these end-to-end demoable milestones enable us to receive it much earlier than if we just started building a little from each milestone at once. For example, imagine if we first built the entirety of the API layer for the SMS Conversations page. We might implement the endpoint for sending messages, then move on immediately to an endpoint for retrieving a conversation’s message history, and so on. We’d be able to test the endpoints in isolation, but we wouldn’t be able to validate how they actually fit into the larger picture until we implemented the corresponding UI. Because there isn’t anything to demo to stakeholders until the very end, we risk incurring much more wasted work if our eventual demos and testing uncover a false assumption that we made early on.

By instead prioritizing the completion of the entire end-to-end functionality of a demoable milestone, we ensure that we’re able to validate that any separate layers connect as expected as soon as possible and enable ourselves to pivot much more nimbly if they don’t.

Thin Vertically Sliced Stories

We’ve so far carved out a minimum viable scope from an end vision for product functionality and isolated individual feature milestones within that scope that we’ll be able to test along the way. Now it’s time to break those demoable milestones down further into their individual requirements, and this is where our next tool comes in. We’ll express those requirements using thin, vertically sliced stories which identify clear increments of business value that can be potentially shipped within a single sprint.

This concept of “vertical slicing” isn’t new even if it’s the first time we’re explicitly putting a name to it. Vertical slicing is just another term for focusing on end-to-end functionality, and we’ve applied this mentality at every level so far. Our MVP was the minimum slice that constituted a viable product. Within that, our milestones were slices of related functionality. Finally, we’re going to define our stories as the thinnest possible vertical slices that still provide customer value. Pick your poison here with user stories or job stories — both are common ways of expressing functionality with an Agile mindset.

Let’s revisit the SMS Conversations page to see a thin, vertically sliced story in action.

After the initial MVP release, we used customer feedback to prioritize a set of inbox management features. We translated individual features into demoable milestones, including the “Archive/Unarchive” milestone, which was defined by the following testable increment:

Archiving a conversation removes it from the main Inbox tab and instead displays it on a new tab for archived threads. From the Archived tab, it can be unarchived and returned to the main Inbox tab.

A screenshot depicting a design prototype for the Archive/Unarchive milestone within the SMS Conversations page. At the top of the list of conversation threads, there are two tabs labeled Inbox and Archived. The Inbox tab is currently active. The screenshot shows a user clicking into the available actions for a conversation thread, including an Archive action.
An early design prototype for a new Archive action and the Archived tab in the SMS Conversations page.

Within that milestone, we identified even smaller increments of end-to-end functionality:

  1. The ability to remove a conversation from the main (inbox) list of threads
  2. The display of archived conversations in a separate Archived tab
  3. The ability to return a conversation to the Inbox tab from the Archived tab

Here’s a thin, vertically sliced story that captures the first increment above:

When I no longer need to respond to a conversation, I want to be able to archive it and remove it from my Inbox so that it doesn’t distract me from active conversation threads.

This is written in a common job story format, which expresses a situation (when), a motivation (I want), and an expected outcome (so that) to align both Product and Engineering on what is being built.

The story ticket should also include Product-driven acceptance criteria that define the specific functionality that must exist upon delivery of the story, e.g.:

  • A new Archive option appears in the action menu for the conversation threads.
  • When selected, the conversation will no longer be visible in the thread list.

Design collateral can also convey acceptance criteria and should be linked to the ticket if applicable.

A screenshot of the conversation threads in the Inbox tab with the list of options for a given conversation. The Archive option is highlighted.
An example of design collateral that might be included on the story introducing the Archive thread functionality to the SMS Conversations page.

If there were internal metrics or SLOs that Engineering felt were imperative, those could be added as non-functional acceptance criteria on the ticket as well.

The goal when defining our stories is to enable faster feedback by capturing the minimum functionality that enables some sort of end-to-end test. When implementing this sample Archive story, we touched multiple layers of the development stack — for example, maybe a pull request in our frontend repository for UI changes, maybe some API changes on the backend, maybe a database migration — but the crucial point is that those different pieces came together to form a complete top-to-bottom vertical slice. We knew that we had identified an appropriately thin vertical slice for this story because 1) taking any one of those layers away would have prevented an end-to-end test, and 2) the work provided potentially shippable value.

“Potentially shippable” is an important distinction. There might be constraints that prevent us from actually releasing the functionality to live customers such as an organizational release schedule, product marketing requirements, etc. Maybe in this particular case, Product (and any reasonable person!) believed it would be a less confusing customer experience if we held off releasing until we had also implemented the story that introduced the display of the Archived tab so that archived conversations didn’t just disappear into the ether. Regardless, delivering this singular story meant that this increment of value was fully tested and production ready, so if there was a business reason, we could ship it to customers. In the meantime, we could stick it behind a feature flag for internal or beta customer testing.
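To make that concrete, here is a minimal, purely illustrative sketch of the backend portion of such a slice behind a flag. The model, flag name, and handlers are hypothetical stand-ins rather than actual Klaviyo code; together with the UI menu option and a migration adding the new column, they would complete the slice:

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical flag lookup; in a real system this would come from a
# feature-flagging service rather than a hard-coded dict.
FEATURE_FLAGS = {"sms_conversations_archive": True}


@dataclass
class Conversation:
    id: str
    is_archived: bool = False  # new column added by this slice's migration


# Toy stand-in for the conversations table.
_CONVERSATIONS: Dict[str, Conversation] = {"conv-1": Conversation(id="conv-1")}


def archive_conversation(conversation_id: str) -> dict:
    """API handler behind the hypothetical 'Archive' menu action."""
    if not FEATURE_FLAGS.get("sms_conversations_archive", False):
        return {"status": 404}  # behave as if the feature doesn't exist yet

    conversation = _CONVERSATIONS.get(conversation_id)
    if conversation is None:
        return {"status": 404}

    conversation.is_archived = True
    return {"status": 200, "conversation_id": conversation_id, "is_archived": True}


def list_inbox_conversations() -> list:
    """Archived threads no longer appear in the main Inbox list."""
    return [c.id for c in _CONVERSATIONS.values() if not c.is_archived]


print(list_inbox_conversations())      # ['conv-1']
print(archive_conversation("conv-1"))  # {'status': 200, ...}
print(list_inbox_conversations())      # []
```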

The ability to validate assumptions as we go is a critical benefit of organizing work in vertical slices. Look for these vertical slices at every level, from your phased release scopes to your demoable milestones to your stories. Since your stories should be scoped to fit within a single sprint, you’re enabling end-to-end testing and feedback loops as early and often as your sprint cadence.

Abstract Story Points & Planning Poker

Now that our MVP is split into demoable milestones with the required functionality further broken down into individual stories, our team has everything it needs to take an initial pass at understanding the overall effort required for the feature. This is where our next pair of tools come in: abstract story points are used in a game of planning poker to align on relative effort without committing to a particular solution.

Here, effort is measured in abstract units that intentionally do not have an explicit correlation to time. Instead, they capture the volume, complexity, and uncertainty of the work. We want to enable ourselves to use points as a rough measure that allows us to make generalized statements such as “a five point story will take roughly twice as much effort as a two point story” but not statements like “a five point story will take five hours.”

Consider applying the dimensions of relative effort to the work items below:

  • Executing a routine command across a variety of accounts might be low complexity and low uncertainty, but high volume.
  • A one-time migration on a traditionally tricky database table might be low volume, but high complexity and high uncertainty.
  • Changing the style of a button to align with design system updates might be low in complexity, uncertainty, and volume.

Classifying our stories with these dimensions allows us to do quick mental inventories and make projections without getting bogged down in precise time estimates that depend on implementation details that we haven’t decided yet. The goal here is to get a gut check, not a binding commitment.

There has been plenty written elsewhere about why measuring in time tends to be a bad idea, including that humans are simply bad at it. One specific pitfall worth calling out here is that time estimates will fluctuate depending on who is doing the work, so the value could vary wildly if the ticket is assigned to a new junior hire versus a tenured lead engineer. The focus on “exactly how many hours would it take me personally if I was working on this story?” is less productive than focusing on the effort dimensions and asking broader questions like “where exactly is the uncertainty in this story? What questions would we need to answer to reduce that uncertainty?”

We can use planning poker as a means to facilitate discussion around these types of broader questions, homing in on the different dimensions and pooling knowledge to reach consensus on relative effort.

Before we can “play” a game of planning poker, though, we need to determine a number scale for our abstract story points and baselines for what those numbers mean to us as a team. It’s common to use a number scale where the spread between the numbers increases as the numbers get bigger, like the Fibonacci sequence or a geometric sequence. This keeps the larger numbers from sitting too close together; as rough estimates, the difference between, say, 12 and 13 wouldn’t be meaningful, while the jump from 8 to 13 is.

Then, we can work to determine a baseline for several of those numbers, expressed in terms of the work that our team typically sees.

For example, the SMS Market Expansion team uses the Fibonacci sequence for estimation and completed an exercise that determined baselines for the numbers we use most often.

Now we can “play”! Here’s how:

  1. Present a story to the team
  2. Ask everyone to provide an estimate
  3. Discuss the spread of estimates (typically asking those at the high and low ends to explain the reasoning behind their choices)
  4. Repeat steps 2 and 3 until there is convergence on a single estimate

An animated gif depicting team members estimating a story using an online planning tool. Members submit story points privately, and once all members have submitted an estimate, the estimates are revealed. Kaila, Matt, and Andrew have estimated 2 points. Klaida and Nithin have estimated 3 points. Rob has estimated 5 points. The estimates are then deleted so the team can submit points again.
After our team is presented with a story, we submit our estimates using an online planning poker tool (there are a variety of free ones; the one here is Scrum Poker Online). We’d then discuss the differences, perhaps asking Rob to share why the story seems like 5 points to him and asking Andrew why he pointed it at 2. Next, we’d clear the estimates and use that context to submit estimates again.
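If it helps to see the loop spelled out, here is a tiny, purely illustrative sketch of a single pointing round using the estimates from the example above (in practice, the online tool handles the hiding and revealing for you):

```python
from typing import Dict, Optional


def pointing_round(estimates: Dict[str, int]) -> Optional[int]:
    """One round of planning poker: return the agreed estimate, or None.

    If the team hasn't converged, surface the high and low outliers whose
    reasoning we want to hear before re-estimating (steps 3 and 4 above).
    """
    values = sorted(estimates.values())
    if len(set(values)) == 1:
        return values[0]  # consensus reached

    low, high = values[0], values[-1]
    low_voters = [name for name, pts in estimates.items() if pts == low]
    high_voters = [name for name, pts in estimates.items() if pts == high]
    print(f"No consensus yet: estimates were {values}.")
    print(f"Ask {', '.join(high_voters)} why {high}, and {', '.join(low_voters)} why {low}.")
    return None


# The first round from the example above: Rob is the high outlier.
pointing_round({"Kaila": 2, "Matt": 2, "Andrew": 2, "Klaida": 3, "Nithin": 3, "Rob": 5})
```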

Because the end goal is that anyone on the team should be able to work on a story, it’s important that the full team participates in this planning poker exercise. This ensures that the discussion that occurs during planning poker will incorporate diverse opinions from the start, which often makes the discoveries from the discussion more valuable than the estimates themselves.

Don’t focus too much on technical implementation during discussion; in fact, avoid prescribing a specific solution as much as possible. This can be a difficult pitfall to avoid because many engineers will naturally latch onto a technical solution in their heads once they hear the requirements. Our technical vision should always contribute to our estimates and the discussion that follows, but we should avoid treating any particular implementation as an absolute commitment. It’s crucial to remember that at this stage, planning poker is just meant to be an early effort assessment tool. Priorities could change, and the team may not actually end up working on the project in the near future, so we want to avoid fixating on a specific implementation that may be outdated by the time the work is actually picked up. Instead, use the discussion to shed light on areas of uncertainty that different perspectives can bring to the table; for example:

  • “We need to make a change to a particular page in the frontend that’s notorious for being intimidating because it has little test coverage.”
  • “We don’t store this data anywhere that’s easily referenced by profile id, yet, so we’ll probably need to change or add new data models.”
  • “Alice’s team gave a tech talk a couple of weeks ago — they recently implemented a service that provides the exact functionality we’ll need to leverage here. We don’t have to use it, but it proves that there’s precedent.”

It’s important to ensure that the valuable contributions during planning poker discussion aren’t lost (we typically document them as comments or in a “technical notes” section on our stories). The discussion notes can help jog memories when it does come time to pick up a story, and in the meantime, the single number estimate represents validation that the team aligns on the relative effort of a solution even without knowing exactly how that solution will be implemented.

This estimate alone is an incredibly powerful tool that can be used to suggest timelines for a project before we’ve started writing any code. For example, if our team has been using story points enough to establish a velocity of average points completed per sprint and we also have estimated story points for all of the stories within our MVP, it becomes a simple formula to project out how many sprints it might take our team to complete the MVP scope:

A formula depicting that the sum of estimated story points for a feature divided by the team’s average velocity (or story points completed) per sprint gives a projection of the number of sprints needed to complete the feature.

Because we’re at the very beginning of our journey for this project and we know the least we’re ever going to know about the technical implementation details, we should factor in buffer room for discovery by extending the projected delivery date. This is how we can arrive at a rough time estimate without ever asking the team to talk about the project in terms of time.
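Here is a minimal sketch of that projection; the story points, velocity, and buffer below are made up purely for illustration:

```python
import math

# Hypothetical estimates for the stories in an MVP scope, in abstract points.
mvp_story_points = [2, 3, 5, 2, 8, 3, 5, 1, 3, 5]

average_velocity = 12    # points this team historically completes per sprint
discovery_buffer = 1.25  # ~25% buffer for the unknowns we haven't hit yet

raw_sprints = sum(mvp_story_points) / average_velocity
projected_sprints = math.ceil(raw_sprints * discovery_buffer)

print(f"Estimated points: {sum(mvp_story_points)}")      # 37
print(f"Raw projection:   {raw_sprints:.1f} sprints")    # 3.1 sprints
print(f"With buffer:      {projected_sprints} sprints")  # 4 sprints
```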

Of course, this assumes we’re able to put estimates on all of the stories within our MVP, but sometimes the team simply won’t have enough context to align on an estimate for a story, or maybe the agreed estimate is too large to fit into a single sprint due to high uncertainty. This type of discovery while playing planning poker means that it’s time to pull out the final tool in our planning toolkit: technical spikes.

Technical Spikes

The purpose of technical spikes is to reduce implementation uncertainty in targeted areas without committing to a specific outcome. Technical spikes should be tracked work items that show up in a sprint or backlog the same way that stories and bugs do, but the goal for a spike isn’t to deliver business value to customers. Instead, the goal is to identify and evaluate options in a particular problem space and gain confidence that will eventually allow us to choose between those different solutions. A spike isn’t a thin vertical slice of business value but rather an incredibly targeted deep dive with the goal of answering a particular question that could inform one or more stories’ estimations. Explicitly stating the goal of the spike is crucial since we’re not trying to lay out a detailed overall plan for the entire project but rather want to make sure we focus on de-risking a single risky area. Because of the explorative nature of spikes, it’s also useful to treat the estimate as a time box to help guide the level of effort that should be invested in researching the solution.

Technical spikes can manifest in a backlog in different ways — for example, maybe before any milestone or story breakdown for a new project occurs, the technical lead already knows there will be a major open question about which technology should be used. In this case, it would be fine to create, estimate, and begin work on a technical spike before any other sort of estimation exercise with the team. However, it would also be fine to create a technical spike during a planning poker session as the result of a too-large estimate or the inability to arrive at a consensus when estimating a story.

The goal of a technical spike is to answer a question, but how we go about answering that question can likewise take a variety of forms. Below are several examples of real technical spikes we’ve used:

  • Our projects will often require crossing team boundaries and implementing or requesting functionality in areas that our team doesn’t officially own, so at the outset of a project, “We don’t know what we don’t know” is a commonly uttered phrase. An initial spike for projects in this category has been a simple proof of concept that tries to implement a hacky happy-path solution in a local environment as quickly as possible (a minimal sketch of what that might look like follows this list). This is code that never makes it to production, but it is still key in uncovering major risks and discovering cross-team touchpoints that can result in follow-up spikes being created.
  • We created a spike that encapsulated one such cross-team touchpoint and had the goal of getting sign off on the service interfaces for necessary interactions between our teams. The end result of this spike was a diagram representing the flow of data back and forth between team boundaries throughout a customer’s journey.
  • Another of our spikes resulted in a Postman collection simulating all of the interactions with a beta third party API that would eventually be embedded into the Klaviyo system and automated based on user interactions. This helped us gain confidence interacting with a relatively new and undocumented API to understand the data that would need to be available for every API call during the customer journey.
  • The RFC process is critical at Klaviyo when it comes to vetting major technical and architectural decisions. If the project warrants an RFC for cross-team buy-in or additional eyes on a technical decision, we will typically capture its authorship as a technical spike.
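As promised in the first bullet above, here is a minimal sketch of what such a throwaway happy-path proof of concept might look like against a hypothetical third-party SMS API. The vendor URL, token, and request fields are all made up; the point of code like this is to surface unknowns quickly, not to ship anything:

```python
"""Throwaway spike: can we send a message end to end through the vendor's
beta API? Happy path only, run locally, never shipped to production."""

import requests  # assumes the requests library is available locally

VENDOR_BASE_URL = "https://api.example-sms-vendor.com/v1"  # made-up vendor
VENDOR_TOKEN = "sandbox-token-from-the-vendor"             # placeholder


def send_test_message(to_number: str, body: str) -> dict:
    """One happy-path call: no retries, no error handling, no edge cases.

    The goal is to learn what data we would need on hand at this point in
    the customer journey (numbers, consent state, auth), not to handle
    failure gracefully.
    """
    response = requests.post(
        f"{VENDOR_BASE_URL}/messages",
        headers={"Authorization": f"Bearer {VENDOR_TOKEN}"},
        json={"to": to_number, "body": body},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Jot down anything surprising (latency, required fields, rate limits)
    # as spike findings or follow-up spikes.
    print(send_test_message("+15555550100", "Hello from the spike!"))
```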

When technical spikes are identified with care, your team will be able to gain confidence in diving into execution for a particular project while simultaneously minimizing the amount of technical planning work done in advance.

The Planning Toolkit in Action

Let’s take a look at the full planning toolkit in action. Consider the following scenario:

Your team has designated an engineer as the lead of a project. They work with your product manager to agree on an initial MVP scope for the first release phase. Even though there’s concern the scope might still be too large, they work together to break down some proposed milestones and the stories within them. These stories can be taken to your full team for a pulse check during an initial estimation session.

Healthy debate occurs during a game of planning poker, and the engineers all come to the consensus that one particular story is too large for a single sprint. Through discussion, it becomes apparent that you’ll actually be able to unblock multiple parallel streams of work if one foundational component is released first. As a result, you’re able to break down that larger story into two smaller stories that still capture vertical slices and are able to confidently estimate those.

Back to planning poker. There’s another story that everyone is still stumped on, and after a round or two of discussion, there are still too many variables to be able to converge on a single estimate. You table the discussion for now and decide to carve out a technical spike instead. Next sprint, you pull in the technical spike, evaluate several different options, and with this new information, you’re able to align on an estimate for the original story.

Through the course of several estimation sessions, your team now has abstract story point estimates associated with all of the stories that comprise your MVP scope. You take the timeline projection formula and plug in the sum of these story points as well as your team’s recent velocity, then you factor in some buffer room. Now you’ve got a potential timeline for the delivery of your MVP.

That suggested timeline can be used as an incredibly powerful negotiation tool that will help you be ruthless about tightening the scope of a project before a single line of code has even been written. For example, you could use a milestone’s estimated effort as a bargaining chip and determine that the cost of delaying the initial customer release until its completion is more expensive than the value of the milestone’s functionality itself. It could make more sense to move the entire milestone out of the MVP scope and into the second phase of a project, which is a decision that you’ve enabled yourselves to make without wasting any upfront technical planning effort on the work for that milestone.

This is the power of the planning toolkit as a whole.

Conclusion

To sum it all up, our team brings an iterative and incremental mentality into project planning by using these tools:

  • Phased release scopes
  • Demoable milestones
  • Thin, vertically sliced stories
  • Abstract story points and planning poker
  • Technical spikes

It’s important to note that there is nothing inherently sequential about them, and they certainly are not intended to be utilized in complete isolation or as a prerequisite to implementation work. These activities create an important feedback loop with each other and with the overall execution of a project.

Operating with iterative tools will enable your team to navigate the unknowns and make educated decisions without stalling unnecessarily in the planning phase of a project. By embracing a mindset that values change, uncertainty, and getting features into the hands of customers as soon as is responsible, your team will be able to move faster (faster validated learnings, faster delivery of business value) with less (less upfront planning, less wasted effort).

Getting a project ready for implementation is just one part of the larger picture, though. In the next post in this series, we’ll explore the execution toolkit and how we can apply the same iterative and incremental mentalities to organizing a full team around the common goal of bringing a project’s requirements to life.

Two cartoon meerkats wearing hard hats collaborate over architectural plans.
Faster delivery of business value? Less wasted effort? Marty approves!
