Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework

Vital Epiphanies in Software Delivery from Dr. Mik Kersten

Dr. Mik Kersten, CEO of Tasktop, recently published a book titled Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework. In the book, Dr. Kersten introduces the Flow Framework, a new approach for connecting business to technology that bridges the gap between business strategy and technology delivery.

Dr. Kersten discusses three vital epiphanies he had that revolutionized how he thinks about software delivery. In this excerpt, he discusses these epiphanies and how his revised angle of attack benefited his business.

Excerpt from Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework by Dr. Mik Kersten

My career has been dedicated to understanding and improving how large-scale software is built. I spent nearly two decades working on new programming languages and software development tools, and have had a chance to work with some of the best technologists in the world. But I have come to realize that, due to where we are in the Turning Point, technology improvements are no longer the bottleneck.

Technology improvements will be relevant but incremental, yielding productivity gains of less than ten percent to organizations via new programming languages, tools, frameworks, and runtimes.

In contrast, the disconnect between the business and IT is massive, as are the disconnects within IT organizations. The common approach to enterprise architecture is wrong, as it tends to focus on the needs of the technologists and not on the flow of business value.

For me, the realization that technologists’ pursuits were bringing diminishing returns did not come as a single eureka moment. Rather, I had separate realizations, each of which caused me to make a major pivot in my career. Each of these “epiphanies” involved a collection of experiences that reframed my view of software delivery and kept me awake through the night as I slowly digested how many of my previous assumptions were flawed.

The first epiphany came from my first job as a developer working on a new programming language. During that time, I realized the problem we were solving spanned well beyond the source code. The second epiphany came from a culmination of hundreds of meetings with enterprise IT leaders that made it clear to me that the approach to managing software delivery and transformations was fundamentally broken. The third epiphany came during my visit to the BMW plant and revealed that the entire model that we have for scaling software delivery is wrong. Each epiphany is connected by our trying—and failing—to apply concepts from previous technological revolutions to this one. My three epiphanies were:

  • Epiphany 1: Productivity declines and waste increases as software scales due to disconnects between the architecture and the value stream.
  • Epiphany 2: Disconnected software value streams are the bottleneck to software productivity at scale. These value stream disconnects are caused by the misapplication of the project management model.
  • Epiphany 3: Software value streams are not linear manufacturing processes but complex collaboration networks that need to be aligned to products.

The first epiphany—that software productivity declines and waste increases when developers are disconnected from the value stream— came as the result of a personal crisis. While on the research staff at Xerox PARC, I was an open-source software developer and consistently worked seventy to eighty hours per week. Most of that time was spent coding, plus regularly sleeping under my office desk to complete the cliché. The number of hours at the mouse and keyboard resulted in a seemingly insurmountable case of repetitive strain injury (RSI). It grew progressively worse, along with the heroics and coding required to get release after release out, and my boss repeatedly cautioned me that he’d seen several PARC careers end in this way. With the staff nurse offering little help beyond advising caution and providing ibuprofen, I realized that every single mouse click counted.

This led me to do PhD research by joining Gail Murphy and the Software Practices Lab that she created at the University of British Columbia. As mouse clicks became my limiting factor, I started tracking the events for each click by instrumenting my operating system, and I came to realize that the majority of my RSI-causing activity was not producing value; it was just clicking between windows and applications to find and refind the information I needed to get work done.

I then expanded my research to six professional developers working at IBM, and I extended the monitoring and added an experimental developer interface for aligning coding activity around the value stream. The results were surprising to both Gail and me, so we decided to extend the study to “the wild” by recruiting ninety-nine professional developers working within their organizations and having them send before-and-after traces of all of their development activity.

The conclusion was clear: as the size of our software systems grew, so did the distance between the architecture and the effort it took to add one of the hundreds of features being requested by our end users.

The number of collaboration and tracking systems we used grew as well, causing yet more waste and duplicate entry. These findings were the inspiration for Gail and me to found Tasktop, a software company dedicated to better understanding this problem.

Several years later, while getting an overview of a large financial institution’s toolchain, I had the second epiphany. This problem of thrashing was not unique to developers; it was a key source of waste for any professional involved in software delivery, from business analysts to designers, testers, and operations and support staff. The more software delivery specialists involved, the more disconnects formed between them and the more time was spent on thrashing, duplicate data entry, or the endless status updates and reports.

The challenges I was personally facing from my declining productivity and increased thrashing were being mirrored, at scale, across thousands of IT staff. The more staff, the more tools, and the more software scale and complexity, the worse this problem became. For example, after conducting an internal study on one bank’s software delivery practices, we determined that, on average, every developer and test practitioner was wasting a minimum of twenty minutes per day on duplicate data entry between two different Agile and issue-tracking tools. In some cases, that grew to two hours per day, and the overhead for first-line managers was even higher. When we dug deeper into how developers spent their time, we found that only 34% of a developer’s active working time at the keyboard went to reading and writing code. Yet this is what developers are paid to do and what they love to do. This was a deep and systemic problem.

As Gail and I started working more with enterprise IT organizations, we realized just how different this world was from the much simpler and more developer-centric world of open source, startups, and tech companies. Unfortunately, no empirical data was available on how work flows across the tools that form a value stream in enterprise IT organizations. But we now had a broad enterprise IT customer base, including close to half of the Fortune 100, and realized that we had a unique data set, as the majority of those organizations had shared with us all the tools involved in their value stream and the artifacts that flow across those tools. We collected and analyzed 308 Agile, Application Lifecycle Management (ALM), and DevOps toolchains from these organizations. We started calling these tool networks once we saw how the tools were interconnected. In the process, I personally met with the IT leaders of over two hundred of those organizations to better understand what we were seeing in the data.

With those 308 value stream diagrams in mind, I felt the kernel of the third epiphany form. The entire model for how we think about a software value stream is wrong. It is not a pipeline or a linear manufacturing process that resembles an automotive production line; it is a complex collaboration network that needs to be connected and aligned to the internal and external products created by an IT organization, and to business objectives.

This is what the data was telling us, yet this approach is completely at odds with the project- and cost-oriented mentality with which enterprise organizations are managing IT investment. The ground truth (that is, the truth learned through direct observation) of these enterprise tool networks is telling us that all the specialists in IT are already starting to work in this new way by adopting Agile teams and DevOps automation, but these specialists lack the infrastructure and business buy-in to do so effectively.

On the flip side, the business is further losing the ability to see or manage the work that the technologists are doing. Leadership seems to be using managerial tools and frameworks from one or two technological ages ago, while the technologists are feeling the pressure to produce software at a rate and feedback cycle that can never be met with these antiquated approaches. The gap between the business and technologists is widening through transformation initiatives that were supposed to narrow it. We need to find a better way.
