“Divide & Conquer” in Software Development
Software development has a rich history spanning several decades, dating back nearly two centuries if we consider the contributions of Augusta Ada King, Countess of Lovelace (née Byron). Throughout this period, we have consistently rediscovered a timeless lesson that goes beyond technological advancements. Let's examine this lesson through the lens of the present day.
Balancing Act
Read the following paragraph:
“Divide lengthy sections of code into reusable subroutines. Separate the human-readable code expression from the process of translating it into machine language. Split subroutines into self-contained and reusable categories, such as classes, modules, units, or packages. Opt for creating small, specialized yet reusable programs that excel in one specific task.”
Does it sound familiar? Have you heard it many times? Do we collectively appreciate what these teach us? They share a common theme: division of something larger into something smaller. Let’s now rephrase that:
“Put together related sections of code into subroutines. Move everything related to writing machine code into higher programming languages. Bring related subroutines into classes, modules, and units. Package together high-quality code around related functions into dedicated programs.”
This version of the statement focuses on combining, not dividing, yet seems to talk about the same concepts. Which one is better? “Divide & Conquer” or “Combine & Conquer”?
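To make the two phrasings concrete, here is a minimal TypeScript sketch (all names are hypothetical, chosen only for illustration): the same pricing logic first divided into small, reusable functions, then combined into one cohesive module.

```typescript
// "Divide": small, reusable, single-purpose functions.
export function applyDiscount(price: number, rate: number): number {
  return price * (1 - rate);
}

export function addTax(price: number, taxRate: number): number {
  return price * (1 + taxRate);
}

// "Combine": related functions grouped into one cohesive unit.
export const Pricing = {
  applyDiscount,
  addTax,
  // The composed operation lives next to the pieces it is built from.
  finalPrice(base: number, discountRate: number, taxRate: number): number {
    return addTax(applyDiscount(base, discountRate), taxRate);
  },
};
```

Both versions contain the exact same logic; only the grouping differs.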
In the real world, never-ending division yields annihilation (think nuclear fission). Never-ending fusion ends in collapse (think black holes). We experience diminishing returns with each. How do we decide when to stop? Balancing those forces against something else may help. Look at the other words appearing frequently in the above paragraphs.
The division paragraph was about “reusability”. At some point, reusability disappears as the “part” becomes either too small to justify the associated overhead or too specific to matter. The combination paragraph was about “relatedness”. Combining unrelated functionality forces users to cope with the unexpected, reducing usability. A specialized tool, like a screwdriver, is better than a generic one, say a Swiss Army knife, for its intended application. On the flip side, most tasty meals have more than a single ingredient.
Forget the code for a moment and apply this thinking to people and their expertise. Should expertise and teams be endlessly divided into the smallest reusable units? Should they always be combined so that everyone knows everything? Do you think that's possible? If it can't be either extreme, we need a balancing act between division and combination.
At this point, we need to introduce a famous bit of reality:
“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.”
— Melvin E. Conway
This tells us that the balancing act we're thinking of affects both the system (technical) and the business (team) organization. If we try to establish different structures, the resulting inefficiencies will eventually force the system's organization to match the communication structure, whether we like it or not.
That isn't really a problem. Non-technical business leaders face similar challenges when deciding how to organize companies; they just deal with different “logistical” and other constraints than technical people do. In any case, the problem we're solving is similar: both business and technical “logistics” need to be considered. What usually ends up happening is having a unit for each distinct, agreed-upon “domain”. Even when geographically distributed, each individual domain is similar across locations. We can apply the same thinking and, via “Domain Driven Design”, arrive at “Domain Oriented Architecture”.
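As a hypothetical sketch of what this can look like in code (the domain names are mine, not from any real system), each domain becomes a unit that owns its full slice and exposes a narrow interface, so teams and modules can map one-to-one:

```typescript
// orders/: one team, one domain, one narrow public surface.
export interface OrdersApi {
  placeOrder(customerId: string, itemIds: string[]): Promise<string>;
}

// billing/: a separate domain owned by a separate team.
export interface BillingApi {
  invoiceOrder(orderId: string): Promise<void>;
}

// Per Conway, keeping module boundaries aligned with team boundaries
// avoids the friction of fighting the communication structure.
```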
Breaking down significant problems into smaller components allows for efficient, specialized solutions and promotes reusability. Combining related solutions can enhance productivity, although the outcomes may surprise us. It's crucial to strike the right balance in division; too little harms reusability and focus, while excessive division results in unnecessary effort and overhead. Constantly evolving tools help mitigate these challenges. This process of problem-solving has historically been challenging, but advancements in programming languages and network technologies have eased the complexities.
Not too long ago, a significant advancement occurred with the emergence of microservices. While they offered various advantages, they posed challenges, as clients were burdened with understanding all services. Decisions about service breakdown affect all clients, leading to performance and reliability issues. To tackle this, businesses either treated all clients uniformly or invested in costly dedicated services specialized for each client (e.g., BFF: Back-End for Front-End). Ironically, those frequently become, in effect, per-client monoliths.
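For illustration, here is a made-up sketch of a BFF endpoint; every service URL and response shape below is an assumption, not a real API. Notice how much service knowledge it hard-codes, and that each client platform needs its own copy:

```typescript
// Hypothetical mobile-app BFF: it must know every service it touches.
async function getMobileHomeScreen(userId: string): Promise<unknown> {
  // Each client-specific aggregation hard-codes service locations/shapes.
  const [profile, orders, offers] = await Promise.all([
    fetch(`https://users.internal/profile/${userId}`).then(r => r.json()),
    fetch(`https://orders.internal/recent/${userId}`).then(r => r.json()),
    fetch(`https://marketing.internal/offers/${userId}`).then(r => r.json()),
  ]);
  // Any change in how services are divided forces this code to change,
  // and each client (web, mobile, TV, ...) needs its own variant of it.
  return { profile, orders, offers };
}
```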
Today we can do better. Just as functional programming gained popularity, so did the realization that service APIs could become functional, enabling services to collaborate via functional composition. With it, component services can remain component services, without the need to invest in developing countless BFFs. I'll dig into more details in a dedicated post.
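A minimal sketch of the idea, under my own assumptions about names and shapes: if each service exposes its operations as typed functions, a client's need becomes a composition of those functions that a generic gateway could evaluate, instead of a hand-built BFF.

```typescript
// Each service operation is just a typed, composable function.
type ServiceFn<I, O> = (input: I) => Promise<O>;

// A generic combinator: run two operations on the same input in
// parallel and merge their results.
function both<I, A extends object, B extends object>(
  f: ServiceFn<I, A>,
  g: ServiceFn<I, B>
): ServiceFn<I, A & B> {
  return async (input) => {
    const [a, b] = await Promise.all([f(input), g(input)]);
    return Object.assign({}, a, b);
  };
}

// Hypothetical component services; they stay client-agnostic.
const getProfile: ServiceFn<{ userId: string }, { name: string }> =
  async ({ userId }) => ({ name: `user-${userId}` });
const getOrders: ServiceFn<{ userId: string }, { orders: string[] }> =
  async () => ({ orders: [] });

// A client's "BFF" collapses into one composition expression; no
// dedicated per-client service is written at all.
const homeScreen = both(getProfile, getOrders);
```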
We’re facing two opposing forces here:
1. Desire to plan as much as possible upfront to avoid choosing the wrong path.
2. Desire to start as soon as possible to get ahead, thus planning less.
Of course, neither extreme is good. The first is rigid. The second yields an endless random walk. Before we continue, I'd like to add another complication here. I've observed that the indecisive often think they are saving time by deferring decisions. If you think about it, deciding to defer something is a decision in its own right. Does that mean that only option (1) exists? It does not, as deferred decisions don't involve the same level of detail or risk.
I’ve observed another fallacy here as well: some think that they can model a good decision directly after the simplest possible one and go for it. There’s another term for this: “wishful thinking”, often yielding regret of the “I wish it were that simple” kind.
My point is that we can't sweep important questions under the rug or wishfully assume the simplest solutions. We have to be smarter about it, yet trying to be fully prepared yields “analysis paralysis”. That isn't helping either. What should we do? It is obvious that we can't eliminate all decisions and that we shouldn't attempt to fully form all of them either. As each decision is about choosing between alternatives, we have the following tools to help us:
1. Identifying plausible alternatives and knowing how they differ.
2. Managing the risk associated with each and/or with having to shift to another.
Both require effort. However, we can reduce the effort we need to invest in (1) by leveraging (2). In our case, “managing” risk isn't only about recognizing it, but about decoupling or abstracting the system components in a way that reduces the cost of the chosen approach turning out to be wrong. We must embrace change and accept being humanly wrong; both are guarantees, not risks. Accounting for them and looking out for them regularly allows us to react in a timely manner and to leverage new solutions and evolving market needs.
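For example (the names here are mine, purely illustrative), a thin abstraction can cap the cost of a wrong choice: code against a minimal interface so the risky decision, here the storage technology, stays cheap to swap.

```typescript
// The abstraction captures only what we actually need today.
interface EventStore {
  append(event: string): Promise<void>;
  readAll(): Promise<string[]>;
}

// Today's guess: keep it in memory. Cheap, possibly wrong, easy to replace.
class InMemoryEventStore implements EventStore {
  private events: string[] = [];
  async append(event: string): Promise<void> { this.events.push(event); }
  async readAll(): Promise<string[]> { return [...this.events]; }
}

// Business code depends only on the interface; swapping the store later
// (to a database, a log, a queue) does not touch this function.
async function recordSignup(store: EventStore, userId: string): Promise<void> {
  await store.append(`signup:${userId}`);
}
```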
Think of this journey as a hike. Recognize that the world isn't as convoluted as a labyrinth. You can see some paths around you. Clarity is great nearby and fades with distance, but you roughly know where you would like to get to, allowing you to pick one of those paths. Some may turn out to be dead ends or otherwise wrong, and you'll need to backtrack or jump over a stream here and there. That has to be OK; it's part of the deal. How do we make it OK, or at least less painful? Perhaps:
- Appropriate (de)coupling, rather than tightly coupling to the simplest assumptions. Leaving some space for different, better parts helps.
- Automating more, everywhere: from development environment setup and builds, through all testing, to deployments. A more effective workforce can also cope with changes more effectively.
I want to get back to the main topic, though: divide and conquer. We have to accept that we will make wrong division/combination decisions. Functionality that we put into one system component will sometimes turn out to be better suited to another. Accommodating that, the way I often see it done, requires all clients of these components, including gateways, to adapt: their code must change to account for the different breakdown/locations of the component services. Wouldn't it be nice if we didn't have to do this, if there were some magical piece that shielded the clients from the results of refactoring services? A codeless gateway that just works? That brings us back to the comment I made earlier: collaboration of services via functional composition. That same solution makes us more agile in the face of change, too.
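To illustrate with a made-up sketch (service and operation names are assumptions): if the gateway resolves client-visible operations through a composition table, moving functionality between services becomes a one-line remapping, invisible to clients.

```typescript
type ServiceFn<I, O> = (input: I) => Promise<O>;

// Hypothetical services: "recent orders" used to live in the orders
// service and was later moved into a new history service.
const ordersService = {
  recentOrders: async ({ userId }: { userId: string }) => ({ orders: [] as string[] }),
};
const historyService = {
  recentOrders: async ({ userId }: { userId: string }) => ({ orders: [] as string[] }),
};

// The gateway's only "code" is a table mapping public operation names
// to service functions.
const gateway: Record<string, ServiceFn<{ userId: string }, unknown>> = {
  // After the refactoring, only this one binding changed
  // (it used to be ordersService.recentOrders):
  "orders.recent": historyService.recentOrders,
};

// Clients keep calling the same public name, unaware anything moved.
async function clientCall() {
  return gateway["orders.recent"]({ userId: "42" });
}
```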