Back in June, I did a video titled "SOLID Principles: Do You Really Understand Them?", which turned out to be a bit ironic considering there is one principle I apparently don't understand as well as I thought.
This week someone commented on my video, explaining that the reason behind interface segregation isn't what I described in the video at all:
Thankfully my wife keeps my ego in check and I am not too proud to admit when I am wrong about something.
However, I never take anything at face value and need to dig into these things myself.
So it seems we have two different interpretations of the Interface Segregation Principle (ISP):
- Keep your interfaces small so that implementing classes do not need to implement methods they do not need.
- Keep your interfaces small so that calling classes are not aware of methods they do not use.
To make this simpler, I am going to call interpretation 1 the "Implementation Version" and interpretation 2 the "Calling Version".
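To make the difference concrete, here is a minimal C# sketch (the types are hypothetical, invented purely for illustration):

```csharp
using System;

// A hypothetical "fat" interface covering two unrelated groups of functionality.
public interface IMessageService
{
    void Send(string message);
    void Archive(string message);
}

// Implementation Version: Sender is forced to implement Archive,
// a method it has no use for.
public class Sender : IMessageService
{
    public void Send(string message) => Console.WriteLine($"Sending: {message}");
    public void Archive(string message) => throw new NotImplementedException();
}

// Calling Version: Notifier only ever calls Send, yet it depends on
// (and is coupled to) every member of IMessageService.
public class Notifier
{
    private readonly IMessageService _service;

    public Notifier(IMessageService service) => _service = service;

    public void Notify(string text) => _service.Send(text);
}
```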
First port of call: Googling "Interface Segregation Principle" to see what comes up.
The top post is from DigitalOcean, which seems to sit on the fence with the definition:
A client should never be forced to implement an interface that it doesn’t use, or clients shouldn’t be forced to depend on methods they do not use.
However, their example reiterates the Implementation Version and doesn't cover the Calling Version.
Looking down the first page of Google, here is how each result describes ISP:
- Stackify - Implementation Version
- Wikipedia - Calling Version
- Refactoring - Implementation Version
- Dot Net Tutorials - Implementation Version
- MethodPoet - Implementation Version
- Baeldung - Implementation Version
- TutorialsTeacher - Calling Version
- C Sharp Tutorial - Implementation Version
- ByteHide - Implementation Version
Out of the 10 results on the first page of Google:
- 8 say, "A client should never be forced to implement an interface that it doesn’t use".
- 2 say, "A client shouldn’t be forced to depend on methods they do not use".
I even found a Microsoft article on SOLID that uses the Implementation Version in its explanation.
So which one is correct?
After a bit of digging, I managed to find the original paper on the Interface Segregation Principle, written by Robert C. Martin. It states the following:
In this article we will examine yet another structural principle: the Interface Segregation Principle (ISP). This principle deals with the disadvantages of “fat” interfaces. Classes that have “fat” interfaces are classes whose interfaces are not cohesive.
In other words, the interfaces of the class can be broken up into groups of member functions. Each group serves a different set of clients. Thus some clients use one group of member functions, and other clients use the other groups.
The paper goes on to say that in some cases the client, which is the user of the interface, can force an interface to change by requesting new functionality.
In these cases, all clients of that interface will be affected even if it is just the need to recompile. Each client should therefore have its own interface containing only the methods that will be used.
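In C# terms, the fix the paper describes looks roughly like this, reusing the hypothetical messaging types from the earlier sketch:

```csharp
// One interface per group of clients instead of a single fat one.
public interface IMessageSender
{
    void Send(string message);
}

public interface IMessageArchiver
{
    void Archive(string message);
}

// One class can still implement both contracts, but each client now
// depends only on the interface it actually uses. If archiving clients
// request new functionality, only IMessageArchiver changes and the
// sending clients are untouched.
public class MessageService : IMessageSender, IMessageArchiver
{
    public void Send(string message) { /* send the message */ }
    public void Archive(string message) { /* archive the message */ }
}
```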
So it seems that I, and every other developer with whom I have discussed ISP, have been completely wrong about the reason behind creating small interfaces.
The paper does briefly cover the implementation issue around fat interfaces, but it only gets a small section at the bottom of page 3.
Does this still make sense?
Even when you manage to find the original source, it is important to question what has been written and not just blindly accept it.
These are the main points Robert C. Martin makes:
- Fat interfaces cause a compile-time dependency on code that isn't used by the client.
- Fat interfaces lead to inadvertent couplings between clients that ought otherwise to be isolated.
When you put the SOLID principles into the context of when they were written, they start to make more sense.
Compile-Time Dependency
This is what Martin has to say about compile-time dependencies:
But recompiles can be very expensive for a number of reasons. First of all, they take time. When recompiles take too much time, developers begin to take shortcuts. They may hack a change in the “wrong” place, rather than engineer a change in the “right” place; because the “right” place will force a huge recompilation.
Secondly, a recompilation means a new object module. In this day and age of dynamically linked libraries and incremental loaders, generating more object modules than necessary can be a significant disadvantage. The more DLLs that are affected by a change, the greater the problem of distributing and managing the change.
Martin came up with the SOLID principles in 2000. Back then, the fastest Intel processor was the Pentium 4 (released November 2000), with a maximum of 4GB of RAM, but in reality most systems had 1GB at most.
Compare that to modern-day systems and there is quite a difference. I currently own a 2016 MacBook Pro with a 2.9GHz Dual-Core Intel Core i5 and a 2023 Mac Mini with an M2 Pro.
To compare the systems, we can look at the PassMark CPU benchmarks:
- Intel Pentium 4, 1.5GHz: 228
- Intel Core i5 (i5-6267U): 1,886 (8x faster than Pentium 4)
- Apple M2 Pro: 4,135 (18x faster than Pentium 4)
And that is just the CPU; it doesn't account for the fact that we now have fast SSDs and faster RAM. I can't remember the last time I had to wait more than a couple of minutes for my code to recompile.
His second point gives me flashbacks to a time when we were manually copying DLLs onto production servers. Thank you, CI/CD!!
Nowadays, having a client call only a few methods of an interface doesn't cause any real problems: if the signatures of the other methods in that interface change, the client's code is unaffected and recompilation is cheap.
Inadvertent Couplings
Trying to keep your code as decoupled as possible makes sense, and I agree with not including anything in an interface that doesn't logically belong there.
However, creating an interface per client isn't always going to be possible.
If you are producing a library that others will be using, you have no way of knowing which methods a client will use. The only way of satisfying ISP in this case is to have single-method interfaces, which is obviously an anti-pattern.
If your code is only being used internally, you have more control over how you structure things, and in that case it could make sense to create a different interface per client. Although chances are, if each of your clients is using your class for different purposes, you are breaking the Single Responsibility Principle anyway.
A good example of ISP is splitting reads and writes to an application into separate interfaces. This pattern is so common that it usually goes by Command Query Responsibility Segregation (CQRS), but it is just the Interface Segregation Principle at work.
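As a rough sketch of what that split can look like (the type names here are my own, not from any particular framework):

```csharp
using System.Collections.Generic;

public record Order(int Id, decimal Total);

// Read side: a reporting screen depends only on this interface.
public interface IOrderReader
{
    Order GetById(int id);
    IReadOnlyList<Order> GetAll();
}

// Write side: a checkout flow depends only on this interface.
public interface IOrderWriter
{
    void Add(Order order);
    void Delete(int id);
}

// A single repository can implement both halves; each client
// only sees the methods it needs.
public class InMemoryOrderRepository : IOrderReader, IOrderWriter
{
    private readonly Dictionary<int, Order> _store = new();

    public Order GetById(int id) => _store[id];
    public IReadOnlyList<Order> GetAll() => new List<Order>(_store.Values);
    public void Add(Order order) => _store[order.Id] = order;
    public void Delete(int id) => _store.Remove(id);
}
```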
Question Everything
Overall, both interpretations of ISP are complementary: no matter which one you choose, you end up with the same outcome of small, cohesive interfaces.
I still think that having fat catch-all interfaces is more problematic for those implementing them than for those calling them. Adding a method to an interface doesn't cause any issues for callers, but it is a world of pain for implementers if there are a lot of them.
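Continuing the hypothetical messaging sketch from earlier, adding a single method to the interface shows the asymmetry:

```csharp
public interface IMessageService
{
    void Send(string message);
    void Archive(string message);
    void Cancel(string messageId); // the newly added member
}

// Every implementing class now fails to compile until it adds Cancel:
//
//   error CS0535: 'Sender' does not implement interface member
//   'IMessageService.Cancel(string)'
//
// A caller like Notifier that only uses Send compiles unchanged.
```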
It should also be noted that when the ISP paper was written, Martin was talking about C++, as you can see from his examples. C++ doesn't have interfaces as we know them in C# or Java.
It does, however, have abstract classes, and unlike C# or Java, C++ supports multiple inheritance from classes, so a pure abstract class can play the same role as an interface.
Which interpretation of ISP do you prefer? Let me know in the comments.
❤️ Picks of the Week
📝 Article - TDD with GitHub Copilot. It is always interesting to see how other developers are utilising AI tools. This post on Martin Fowler's website covers how the author's team is using GitHub Copilot to help with TDD.
📝 Article - Microsoft is bringing Python to Excel. If this doesn't cause an influx of people to learn Python I don't know what will. I still remember coding macros in VBA which was fine in the 90s but I wouldn't recommend it anymore.
📝 Article - The New Rules of Money. I have been a fan of Chris Guillebeau's writing for a while. He has a new book out, oddly named Gonzo Capitalism, but it looks interesting. People often have a weird relationship with money, and I do tend to lean more towards Chris' view on it.
🐱 Code Repo - GitHub - chrieke/prettymapp: 🖼️ Create beautiful maps from OpenStreetMap data in a streamlit webapp. I have always liked the look of maps and these look really good, I am definitely going to have to give it a try.
💬 Thread - I only lost 10 minutes of data, thanks to ZFS. My backup strategy mainly consists of backing up things to Dropbox and iCloud and very occasionally syncing them to my NAS drive. I have been thinking about using Syncthing but I may need to take a look into ZFS as well.
💬 Quote of the Week
“We question all of our beliefs, except for the ones we really believe in, and those we never think to question.” - Orson Scott Card
From $100M Offers (affiliate link) by Alex Hormozi. Resurfaced with Readwise.
P.S. +1 to anyone who noticed that the title of this post was a homage to The Big Bang Theory.
📨 Are you looking to level up your skills in the tech industry?
My weekly newsletter is written for engineers like you, providing you with the tools you need to excel in your career. Join here for free →