Mitigating Rigidity with Dependency Inversion

An important characteristic of good design is the ability to contain the ripple effects of code changes. The inability to do so leads to rigidity, where a change in one module necessitates adjustments in others. Rigidity makes code change difficult, risky, and expensive. Dependency inversion is a design principle that helps minimize rigidity and promotes highly flexible software.

Typically, a caller invokes a function by providing the necessary arguments and handling the return. When a function is designed independently of how it will be called, callers are forced to follow (i.e. depend on) the function's API. The dependency flows from the caller (depends on) to the callee (depended on).
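
To make that concrete, here is a minimal TypeScript sketch of the usual arrangement (the file and function names are made up for illustration):

```typescript
// formatter.ts -- the callee defines its own API with no regard for callers.
export function formatAsCsv(rows: string[][], delimiter: string): string {
  return rows.map((row) => row.join(delimiter)).join("\n");
}

// report.ts -- the caller must conform to whatever the callee exposes.
import { formatAsCsv } from "./formatter";

export function buildReport(rows: string[][]): string {
  // If formatAsCsv renames, reorders, or retypes its parameters,
  // this call site breaks and must change with it.
  return formatAsCsv(rows, ",");
}
```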

Changing the callee's API, in this case, will break the caller: whenever the callee's API changes, a corresponding change in the caller is necessary. To make things worse, the callee could be an external/third-party library (no control over the change), and several callers could be using it (widespread impact). When breaking changes ripple uncontained in this way, the result is more work and a higher risk of regression.

Dependency inversion helps in this situation by decoupling the caller and the callee. Rather than the caller depending on the callee's API, both are made to depend on a common interface. This involves a subtle but crucial shift in the callee's design, which must now consider how it will be called. The callee depends on the interface rather than being depended upon, reversing the dependency flow.
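
Here is the same sketch after the inversion. The caller now owns a ReportFormatter interface (again, a name invented for illustration) and the callee is written to satisfy it:

```typescript
// report.ts -- the caller owns the interface and no longer knows the callee.
export interface ReportFormatter {
  format(rows: string[][]): string;
}

export function buildReport(rows: string[][], formatter: ReportFormatter): string {
  return formatter.format(rows);
}

// csv-formatter.ts -- the callee now depends on the interface; it must be
// designed to the caller's specification. The dependency arrow has reversed.
import { ReportFormatter } from "./report";

export class CsvFormatter implements ReportFormatter {
  format(rows: string[][]): string {
    return rows.map((row) => row.join(",")).join("\n");
  }
}
```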


The introduction of a common interface gives both sides a stable contract and keeps breaking changes from crossing it. It provides a boundary that prevents changes from rippling uncontrolled from callee to caller. This has a profound effect on the flexibility of the system: lower level details (e.g. I/O processing), which are typically callees, can change freely without affecting the structure of higher level policies (e.g. codified business rules), the callers. Dependency inversion protects the most valuable areas of the system, those concerned with the business, from changes in more peripheral areas concerned with I/O or business-agnostic processes.
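
Continuing the sketch, a new lower level detail can be introduced, or an old one replaced, without touching the higher level policy at all:

```typescript
import { buildReport, ReportFormatter } from "./report";
import { CsvFormatter } from "./csv-formatter";

// Another detail, added later; buildReport is unaware and unchanged.
class JsonFormatter implements ReportFormatter {
  format(rows: string[][]): string {
    return JSON.stringify(rows);
  }
}

const rows = [["id", "name"], ["1", "Ada"]];
console.log(buildReport(rows, new CsvFormatter()));  // id,name\n1,Ada
console.log(buildReport(rows, new JsonFormatter())); // [["id","name"],["1","Ada"]]
```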

When Readability is a Skill Issue

For some time, I’ve been noticing a rather concerning belief that readability (ease of understanding or clarity of intention) is the primary measure of good software. While readable code is good and most arguments for it are relatable, readability is not remotely the most important quality to strive for. Readable code is better than unreadable, messy code -- that is really all.

As with many things, the pursuit of “readability” is not just about benefits but also costs. I see this pursuit getting so misguided that developers confuse readability itself with the purpose it serves. An important value of software is its ability to be changed, and to this end, readability is valuable. Unfortunately, its value can get so inflated that vital qualities of the software are sacrificed for it. As Einstein is often quoted: “... as simple as possible but not simpler”. Readability, your ability to read and understand, is not so important that essential software qualities must be traded off for it.

Readability is often brought up in two situations: (1) when faced with messy code or (2) when faced with complex code. An inability to comprehend what can, with effort, be comprehended is a skill deficiency, remediable by a fair bit of study and perseverance. The rise of the “over-engineering” critique appears to be driven, at least in part, by this self-serving demand for code to be readable. There is a widespread preference for “readable” under-engineered code. Yet, over-engineering (or under-engineering, for that matter) does not necessarily relate to complexity or comprehensibility. Over-engineering is improving a solution when it costs more than it benefits, while under-engineering is neglecting to improve a solution when it benefits more than it costs. Declarations of over-engineering made merely from an inability to comprehend are, by this definition, misguided claims.

Readability serves a purpose, but using it as a cop-out from proper software design and sensible architectural decisions can lead to poor quality software. Rather than hastily concluding over-engineering, take the time to really assess whether it is so. Is it difficult to read because you don’t understand, or do you understand that it is difficult to read?


Does Clean Code Matter?

In his book Clean Code, Uncle Bob quotes Kent Beck on how fragile a premise it is that good code matters at all. Both authors wrote their respective books on the faith that it does, though I suspect it is more accurate to call it conviction than faith.

It is not uncommon to come across code bases that are undoubtedly dirty, messy, or unclean. It is also not entirely uncommon to come across successful projects with terribly unclean code bases. You can’t help but wonder whether clean code is plain vanity after all.

However, I think clean code ultimately matters because messy code is hard to read and hard to change. When the necessary care is not taken while writing code, you get messy code, and mess tends to multiply and escalate. Messiness may start out as cosmetic, but over time (and not a long time at all) more material aspects become messy, including design and architecture. Small messes don’t stay small for long.

I think an excellent analogy is the Broken Window Theory. The idea is that visible signs of disorder encourage more of it. One story tells of a building that was well maintained for many years and never broken into or vandalized. Then one day, an accident happened and a window was broken. The rest of the building was still pristine but, for some reason, the broken window was not promptly repaired. In the weeks that followed, the building that had gone years without incident was repeatedly broken into and vandalized, resulting in more broken windows.

Code that looks neglected tends to encourage neglect. Messy code tends to become the foundation for even more messy code. The mess bleeds into design and even escalates into architecture. The mess snowballs.

Between clean code and messy code, we know clean code is better. Yet, we also know projects succeed despite having messy code. Perhaps projects even initially succeed precisely because of quick messy coding. Clean code is practiced on the conviction that it is the right way to succeed. While it may ruffle some feathers, clean code reflects our professionalism and a low barrier to entry is not an excuse for low professional standards.

Dependency Inversion

One of the most useful techniques I’ve found when building software is dependency inversion. It seems to be widely misunderstood and often confused with dependency injection. Yet, it is a cornerstone of Clean Architecture and is the main principle behind decoupling a system from its lower level dependencies.

The central idea in DI is that high level policies must not depend upon lower level details. Typically, the flow of dependency follows the same direction as the flow of control as a program executes. The inversion in DI refers to the inversion of dependency flow against control flow.

The inversion of dependency flow happens in the source code, not at runtime. This seems to be a common point of confusion when trying to grok DI. In the source code, dependency is expressed by the presence/utilization of one module (the dependency) in another (the dependent). To perform the inversion, the dependency must be reached through an indirection, which is typically done using interfaces. That said, having written applications in JS, which does not natively support interfaces, I think the interface language construct is not strictly necessary to achieve DI.
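
A hypothetical sketch may help here. At runtime, control still flows from the policy down into the repository; in the source code, however, the import arrow now points upward, from the detail to the interface:

```typescript
// orders.ts -- high level policy. It declares what it needs and depends
// only on that declaration.
export interface OrderRepository {
  save(orderId: string, total: number): Promise<void>;
}

export async function placeOrder(
  repo: OrderRepository,
  orderId: string,
  total: number
): Promise<void> {
  // Business rules live here; storage is deliberately out of sight.
  await repo.save(orderId, total);
}

// sql-order-repository.ts -- low level detail. Note the import pointing
// "up" at the policy's interface; this reversed arrow exists only in the
// source code, not in the runtime flow of control.
import { OrderRepository } from "./orders";

export class SqlOrderRepository implements OrderRepository {
  async save(orderId: string, total: number): Promise<void> {
    // ...issue the INSERT against the database here
  }
}
```

In plain JS, any object with a matching save method could be passed in instead, duck-typed against the expected shape, which is why the interface construct itself is optional.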

Rather than depending on a specific module, we depend on an interface. An interface consists of public functions, arguments, returns, and exceptions thrown. An interface must express the purpose of a module using these elements. Put differently, an interface is a precise and formal specification of purpose and structure.
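
Annotating the repository interface from the sketch above shows just how much an interface pins down (the second method and the error type are invented for illustration):

```typescript
/** Raised by any implementation when the underlying storage fails. */
export class RepositoryError extends Error {}

/**
 * Persists orders. The interface fixes the public functions, their
 * arguments, their returns, and the exceptions callers must handle:
 * a precise, formal specification of purpose and structure.
 */
export interface OrderRepository {
  /** Resolves once the order is durably stored; rejects with RepositoryError. */
  save(orderId: string, total: number): Promise<void>;
  /** Returns the stored total, or null if the order does not exist. */
  findTotal(orderId: string): Promise<number | null>;
}
```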

To be usable for its purpose, a module must fulfill, i.e. implement, the interface. Without interfaces, the higher level module must “know how to use” the dependency. With interfaces, the lower level modules must “know how they will be used”. This is a big shift of responsibilities. Now, the dependencies must “look up” at the interface and be implemented to its specification. In essence, they now depend on the interface.
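
Here is a sketch of one such module written to that specification; the in-memory variant is hypothetical, but the “looking up” relationship is the point:

```typescript
import { OrderRepository } from "./orders";

// The detail "looks up" at the interface: it is written to the interface's
// specification, and the compiler rejects it if it drifts from that spec.
export class InMemoryOrderRepository implements OrderRepository {
  private totals = new Map<string, number>();

  async save(orderId: string, total: number): Promise<void> {
    this.totals.set(orderId, total);
  }

  async findTotal(orderId: string): Promise<number | null> {
    return this.totals.get(orderId) ?? null;
  }
}
```

Handing the chosen implementation to the policy, e.g. placeOrder(new InMemoryOrderRepository(), ...), is dependency injection; the inversion is the shape of the source code that makes that injection possible, which is also why the two get conflated.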

In a very fundamental way, an interface is a boundary. It is the point where dependency flow reverses and it is a boundary that separates higher level modules from lower level ones. Crossing an interface is crossing an abstraction layer. On one side, we have high level policies embodying the high value business logic while, on the other, we have lower level details serving as a business-agnostic foundation of the system.

When dependency flow is reversed, lower level modules can be freely changed without affecting the structure of higher level ones. Hence, we refer to lower level modules as details. We expect them to change and vary in implementation while still providing the same “service”, e.g. calculation, storage, retrieval, etc. This gives us a more flexible system: highly decoupled and robust to changes in lower level dependencies.