Craftsmanship - Part 4
This series of posts was first published on the Fortnox developer blog during 2014. The articles were collected, refinished and put on display here after Jonas moved on from Fortnox in 2015.
The series consists of five blog posts:
Software craftsmanship - Part 1 - Covers the main points of Agile, the original version.
Software craftsmanship - Part 2 - Covers the rest of the Agile manifesto and talks about TDD and testing in general.
Software craftsmanship - Part 3 - Rundown of software craftsmanship.
Software craftsmanship - Part 4 - Spending some time on the SOLID principles.
Software craftsmanship - Part 5 - Refactoring, with actual code and stuff …
As promised in part 3 of this series we are going to talk about architecture today. More specifically we are going to talk about why it matters as much as what it is.
Intro
Many software developers, especially when starting out, feel that architecture is some abstract, philosophical pursuit that elderly developers with white neck beards engage in. That it’s about large scale integrations of different parts of the system.
Architecture is really the wiring together of any parts of your system, something you do all the time. Every object you design is a piece of architecture: its public and private methods, how it delegates functionality internally, what classes it inherits from or is composed of.
The object graph of your system is also architecture. How your objects talk and collaborate. The messages they send. The external interfaces your modules expose to other parts of the system.
Basically everything that’s not on a method level of complexity is architecture in some respect. There is architecture in the small and large, but it’s all architecture and it all works together to achieve several goals you should have in your system.
Your architecture should give you a system where responsibility is clearly separated. Where change is planned for and therefore easy to accommodate. It should encapsulate and isolate external libraries and systems from your internal code. It should further the reuse of code within your modules and in some cases between them. In essence the architecture is what makes a program good or not. Not mainly from an end user perspective but from a developer perspective, and from a product owner perspective.
Are we solid?
The smallest part in an object oriented system, and that is what I will focus on here [1], is the method. A method is a message receiver, a hook in your object that handles a specific message from the outside.
Each method should do one thing and one thing only. If you have a domain object that represents an entity of some sort, then updating a property on that object and logging the update to an audit log are two separate concerns and should not be handled in the same method. You can have a composed method called `logged_attribute_update` that then calls both the internal `log` and `update_attribute` methods, but it shouldn’t be composed to do both inline.
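Something like this minimal Ruby sketch, where the `Customer` class and the attribute handling are invented purely for illustration:

```ruby
class Customer
  # Composed method: delegates to the two focused methods below
  # instead of doing both jobs inline.
  def logged_attribute_update(name, value)
    update_attribute(name, value)
    log("updated #{name} to #{value}")
  end

  private

  # One concern: changing the attribute.
  def update_attribute(name, value)
    instance_variable_set("@#{name}", value)
  end

  # The other concern: recording the change in the audit log.
  def log(message)
    $stdout.puts("[audit] #{message}")
  end
end

Customer.new.logged_attribute_update(:email, "new@example.com")
```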
Why is this? Why do we care? Isn’t this up to the author of the object? After all it makes, or at least should make, no difference to the caller of the method.
It matters because these two things, updating the attribute and logging its update, need to change for different reasons. The update method might have to change if some constraint on the attribute changes or if the internals of how we represent the value change. The log method will have to change if we change our logger in the future. These are two separate concerns in the system and should therefore be separated. The overarching principles here are, not surprisingly, called “separation of concerns” [2] and the “single responsibility principle” [3].
As a side effect of this we can also reuse the log method for every other attribute we have to update and log in our object. But even if we only have one use of the internal log method it should still be its own method.
The principle of single responsibility is important because it affords change. It reduces internal coupling and makes each method small and independent and therefore easy to change in the future. If our needs for logging change we can easily change it in one place instead of having duplicated code in every update method in every object that logs its updates in the system.
As you might guess we should have a separate logger object to do the actual logging and give an instance of that to the entity object so that it can use it for logging. That is: we should compose our system of small, independent objects that collaborate to solve a larger task. The single responsibility principle is part of the SOLID architectural guidelines [4], a set of principles that guide good system design. I have described it in terms of methods here but it applies just as well to objects that collaborate, and at an even higher level to modules in the system or subsystems in an enterprise architecture.
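As a sketch of that composition, with Ruby’s standard library `Logger` standing in for whatever logger object you would actually use (the `Customer` class is still just illustrative):

```ruby
require "logger"

class Customer
  # The logger is a collaborator handed in from the outside;
  # the entity only knows that it responds to #info.
  def initialize(logger = Logger.new($stdout))
    @logger = logger
  end

  private

  # The entity delegates the logging concern entirely.
  def log(message)
    @logger.info(message)
  end
end
```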
Every part of your system should be small, focused on one task and have clear integration points with other parts. These architectural boundaries are the areas of largest concern when designing since they are the parts where your design is exposed to other parts of the system. But let’s move on to the other principles in SOLID …
Test driven development helps enormously here. You will naturally design smaller objects with TDD and keep their methods small too. It’s easier to test a method that obeys the single responsibility principle, and if you try to make it do more than one thing you will probably have to go back and change previous test cases, which should set off every alarm you have. So with TDD as a tool we can help enforce the single responsibility principle and, with very little training, learn to listen to what the tests are telling us by seeing how many we have to write for a specific method or class.
Open and closed?
Of all the SOLID principles this one is probably the hardest to get your head around. I’ll do my best to help you though. The wording of it is usually something like this: “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”. Which sounds weird, since extending the functionality of something often means modifying it. But it goes back to when this was written [5], in 1988. This was the era of waterfall development, and the general context for the principle was that you had a specification for the entire system when you started coding, and it’s that part that is closed for modification. That is, you cannot just change a method’s signature or purpose to achieve some new functionality at a later date, say for version two.
The rule about closing for modification is only invoked once your class, method, module, whatever, is finished. In the olden days that would be when the specification was finished; implementation still remained but the API for the class was set and could not be changed. In modern times this is more likely to be after a sprint or project phase. Regardless of how you define finished, while the entity is under active development you can change it however you please. And you can of course fix bugs in it later, but not change the API …
The reasoning behind this is sound. If you change some function to solve some new problem in a later version of the code you risk introducing bugs, since other parts of the system might rely on the way the method originally worked. Instead you will have to find another way to change the functionality while preserving code reuse, which is also an important aspect. Most commonly this is done in OOP by mixing some concern into a new class and modifying the behaviour there, delegating back to the mixed in methods for code reuse, or by inheritance. If you inherit from the old class you can override its method definitions in the new class and still call the parent for the original functionality.
Another way to achieve the same result is to use interfaces [6]. In Java you can do this directly with the Interface language construct, but even if your language doesn’t have a notion of interfaces you can use purely abstract base classes to get the same result. The basic idea is that you create a class (or interface) that defines the signature of every method in the interface, but has no implementations. You can then create any number of concrete classes based on the interface class and use them interchangeably in the system. They all share the interface, which is frozen and doesn’t change, but their implementations can be very different and you can easily add new functionality by creating a new class based on the same interface at a later date.
Both these techniques [7] achieve the goal of upholding the open/closed principle. We get code reuse and we don’t change the original class. By treating our code like this we can confidently build on our previous work while making sure future you doesn’t break the stuff current you spent a lot of time and effort to make perfect. This means we have to design our systems with reuse in mind and leave allowances for future extension that we don’t know anything about yet. We have other principles that aid us in this job, and we’ll get back to them when we talk about the dependency inversion principle later …
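In a language without interfaces, like Ruby, the purely abstract base class variant could be sketched like this (all the payment names are invented for the example):

```ruby
# The "interface": defines signatures but no implementations.
class PaymentMethod
  def charge(amount)
    raise NotImplementedError, "#{self.class} must implement #charge"
  end
end

# New behaviour is added with new classes based on the interface;
# the interface itself stays closed for modification.
class CardPayment < PaymentMethod
  def charge(amount)
    "charged #{amount} to card" # placeholder for the real call
  end
end

class InvoicePayment < PaymentMethod
  def charge(amount)
    "invoiced #{amount}" # placeholder for the real call
  end
end

# Callers depend only on the shared interface and never change.
def settle(payment, amount)
  payment.charge(amount)
end

settle(CardPayment.new, 100)    # => "charged 100 to card"
settle(InvoicePayment.new, 100) # => "invoiced 100"
```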
Not a sexy title
I hope my explanation of the open/closed principle was clear? Because now we have to talk about the Liskov substitution principle [8] and that’s not any easier :) Luckily we have the original authors’ help to express the core of this principle in an updated version from 1994 [9]:
“Let φ(x) be a property provable about objects x of type T. Then φ(y) should be true for objects y of type S where S is a subtype of T.” [10]
All with me? Good, let’s move on … :)
Well, one thing that says is that any call such as `method(x)` should also work as `method(y)` if `y` is an instance of a subclass of `x`’s class. So if we remember what we just talked about with the open/closed principle, where we wanted to extend functionality and created subclasses in some fashion, it seems to fit well. If we later create a new subclass we probably want to be able to use it where we previously used its superclass. It’s the basis for normal object oriented design: subclasses can be used in place of their superclass. This statement is also named the “Subtype Requirement” in the paper and is one of the authors’ base assumptions, or requirements, for the rest of the paper.
So while this might sound like soft and obvious stuff when boiled down like that, it’s nonetheless an important principle and something you have to keep in mind when building your classes. You could, for example, change the signature of a method in the child that also exists in the parent, adding arguments or changing their order. This would break the substitution principle since you could no longer use the child in place of the parent in every case.
It’s ok to do extra work in an overridden method, but the signature should be kept intact. If you need to change the signature you should create a new method on the child instead, leaving the original method in place and calling it from the new one if you have to.
Other things that could break this principle are changing the visibility of methods in the child, especially making previously public parent methods private. Some changes to the pre or post conditions of overridden methods will also break the principle. If the method you give an instance of your class to expects the object to have a method called `foo` that takes an array or nil and returns a sum, and your overridden method fails on nil or doesn’t return a result, the program will crash.
In general it’s ok to add stuff: create new methods, add optional arguments to overridden methods, be more forgiving in your preconditions. But it’s not ok to restrict or remove. Making inherited methods private, being more restrictive about your accepted input, and withholding or changing the type of output are all things that might introduce bugs in the existing code when it’s given an instance of your new class instead of an instance of its parent.
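A small Ruby sketch of the difference, with invented classes; the last one is the kind to avoid:

```ruby
class Account
  # Contract: takes an array (or nil) and returns a sum.
  def sum(values)
    (values || []).reduce(0, :+)
  end
end

# Fine: does extra work but keeps signature and contract intact.
class AuditedAccount < Account
  def sum(values)
    result = super
    puts "summed to #{result}" # the extra work
    result
  end
end

# Breaks substitution: a stricter precondition than the parent,
# so callers that passed nil to Account now crash.
class StrictAccount < Account
  def sum(values)
    raise ArgumentError, "values required" if values.nil?
    super
  end
end
```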
This property is also nicely enforced by testing. In unit tests you will naturally want to set up a subclass or alternative interface implementation to act as a mock for the test. And in the integration tests you will interact with the real actors in the system and make sure they are not breaking previously made promises. It’s not a guarantee, even with tests you can have code that violates the Liskov substitution principle, but it’s less likely and you therefore reduce the risk.
Segregation, the good kind
Next in our voyage through the SOLID principles comes the interface segregation principle. It states:
“No client [class, module, etc] should be forced to depend on methods it does not use.” [11]
Or as Joe Armstrong wittily put it: “You wanted a banana but you got a gorilla holding the banana” [12]. The root problem is when objects have to know more about their collaborators than they need to solve the problem they are responsible for. Such as when your object wants to log itself, implements the loggable interface, and has to stub out seven other methods for log level, log file name and whatnot, just to get to implement the `log` method that was all it really needed. This is a case where you want to split the logger interface in two: one for instances of classes that actually do the logging and need log levels and files and such, and one for classes that want to log themselves, containing only definitions such as `log`, `info`, `warn` and `error` for methods that relate to logging a message.
One could go further and split the loggable interface into separate interfaces with only one method definition in each. Loggable, Warnable, Infoable and Errorable maybe? Well, as with all principles it’s important to identify how much application is appropriate. In this case splitting the backend of the logger away from the consumer part is probably a good idea, but having 200 flyweight interfaces in your system will probably not make anything better.
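In Ruby terms the split could be sketched as two small modules instead of one fat one (the module names and the `write` contract are invented for the example):

```ruby
# Backend interface: only classes that actually write log entries
# need to care about levels, files and formatting.
module LogBackend
  def level; raise NotImplementedError; end
  def write(severity, message); raise NotImplementedError; end
end

# Consumer interface: the small surface a class needs in order
# to log itself.
module Loggable
  def log(message);  logger.write(:info, message); end
  def warn(message); logger.write(:warn, message); end
end

class Customer
  include Loggable
  attr_reader :logger

  def initialize(logger)
    @logger = logger # any object fulfilling the backend contract
  end
end
```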
The main thrust here is to reduce the coupling in the system. If every object that includes the logger interface has to stub a lot of methods, and you make a change to the interface, you will have to make a lot of changes to every consumer as well. And those changes are completely unrelated to what the consumers do; they don’t even implement the method, only stub it because the interface requires it.
There are several ways around this in different languages. We talked about abstract classes before. In JavaScript you don’t really have abstract classes (or classes at all, but that’s beside the point for now), so you simply implement a parent object with the methods you want but no implementation (or one that throws an exception if you want to force the children to implement the method). Any call to a child that doesn’t have the method implemented will then fall through to the parent’s noop implementation. In Ruby you use duck typing instead of interfaces, so any class that implements a `log` method is loggable. In Go it’s similar, but you can actually define the interfaces, and any type that implements every method defined in an interface is said to satisfy it.
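A tiny Ruby sketch of the duck typing variant (names invented):

```ruby
# Duck typing: no interface declaration needed. Anything that
# responds to #log can act as a loggable collaborator.
class StdoutLogger
  def log(message)
    puts message
  end
end

def audit(sink, message)
  sink.log(message) if sink.respond_to?(:log)
end

audit(StdoutLogger.new, "it quacks like a logger, so it logs")
```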
So while different languages have different takes on interfaces, or similar contracts of method availability, we still have to plan and implement our interfaces, classes and modules in such a way that we get small, contained, semantically cohesive and loosely coupled parts in our system. Again we do this to afford change further on, change we can’t predict and therefore have to allow for with minimal information. Decoupling seems to be a good way to do that. Small objects that collaborate over small, clearly defined interfaces are easier to change than large objects with wide interfaces. The interface segregation principle helps us identify coupling via too wide interfaces so that we can split our concerns accordingly and reduce the future risk.
If we test-drive classes we will naturally tend to get smaller classes with tighter responsibilities, and we will see, very early, when an interface is requiring a class to bend over backwards. Actually, we will probably never get the fat interface in the first place, since it will be driven by tests and we won’t add needless methods to an existing interface. We’ll see the need for a second interface clearly and add it directly instead of having to extract it later.
Switching it up
One of the most important principles in SOLID for me is the dependency inversion principle. It’s important because breaking it so clearly creates some really big problems, quickly. It’s easy to design code that adheres to it and the payoffs are clear in a more immediate manner than with many of the other principles. It’s also one that people break a lot :) So let’s see what the definition looks like:
“A. High-level modules should not depend on low-level modules. Both should depend on abstractions.” “B. Abstractions should not depend on details. Details should depend on abstractions.”
Yeah, clear as coal as usual … Let’s try to break it down. A says that your business logic should not be tied to the particulars of the underlying system; more details in just a second. B says that neither your business logic nor the underlying system should depend on particulars, but instead on abstractions; more on that after we tackle A in detail.
An example of violating A would be a logger class that explicitly writes to a file on disk. If we later wanted a logger that logs to a local instance of syslogd or to some REST logging API, we could not reuse the core logic in our logger class in any meaningful way before we extract the file writing into something more abstract, such as a logsink interface. With that interface in place we could implement a file sink, a syslogd sink and a REST sink with equal ease, and the generalised logger class would take an instance of a class that implements logsink and use some method in the interface to write the log message, without having to care about how or where it was written. The higher level logger class no longer depends on a concrete sink, and the sinks don’t depend on a concrete caller either. The sinks can be reused by other parts of the system (so maybe the interface name is not so well chosen :)).
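Here is roughly what that refactoring could look like in Ruby, with the logsink as an informal duck typed contract (all names invented):

```ruby
# High level: depends only on the abstraction. Any object that
# responds to #write(line) can act as a logsink.
class AppLogger
  def initialize(sink)
    @sink = sink
  end

  def info(message)
    @sink.write("[INFO] #{Time.now.utc}: #{message}")
  end
end

# Low level details, interchangeable behind the same contract.
class FileSink
  def initialize(path)
    @path = path
  end

  def write(line)
    File.open(@path, "a") { |f| f.puts(line) }
  end
end

class StdoutSink
  def write(line)
    $stdout.puts(line)
  end
end

logger = AppLogger.new(StdoutSink.new) # or FileSink.new("app.log")
logger.info("sink swapped without touching AppLogger")
```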
We have also fulfilled B with this simple refactoring. The concrete classes depend on the interface, but the interface knows nothing about its implementations.
So we generally want our business logic to depend on some kind of abstraction instead of being dependent on a particular implementation detail of how something is stored, read or represented on screen. We want this because it isolates our higher level consumers from changes in the lower level objects, and because it increases the reusability of the modules in our system.
One could argue that, pragmatically, we do not have to separate the logger from its reliance on a concrete file sink until we need a second sink. But if we look at this from a testability perspective we can clearly see that if we test drive the logger we will want to pass in the backend anyway; it’s easier to test than mocking out the file system or the network (if the logger had depended on a syslogd instance as its first sink). We get some good design for free when we make our model easy to test. And we reduce risk by making the module less coupled and easier to adapt in the future. All at the cost of an extra interface and an extra class; feels worth it to me.
In closing
“Most software engineers don’t set out to create “bad designs”. Yet most software eventually degrades to the point where someone will declare the design to be unsound. Why does this happen? Was the design poor to begin with, or did the design degrade like a piece of rotten meat?”
- Robert C. Martin, “The Dependency Inversion Principle”, The C++ Report, 1996
We are the ones responsible for good design. We are responsible for keeping the code clean and tidy. We leave a legacy in code when we leave a project, a job or a contribution to open source. We have to acquire the tools to achieve that design; no one will do it for us. In this post I have laid out some of the principles that are used today to measure what is good design and what is bad design.
I try to learn the core of these and other principles to help guide me when I create and modify. I always try to leave the code tidier than when I got there. Sometimes the problems in the code are too large for a quick fix. Sometimes you find that there are problems but you can’t see how to solve them. Sometimes you need some help. That’s why we all have to care. That’s why we have to educate our teams and work together to keep the code, our working environment, as well designed as possible.
Just as it’s difficult to find things in a messy room. Just as it’s difficult to keep track of the main thread in a story full of digressions and sidetracking. Just as it’s hard to remember every detail of even yesterday. Just like that it’s hard to work well in messy, poorly designed code. If we all work towards something common we all get the benefits from it, and our customers do too.
The quote at the beginning of this section originally ends with:
At the heart of this issue is our lack of a good definition of “bad” design.
The SOLID principles are an attempt to create some common measure of good and bad design. They are not absolute, they are not all-encompassing and they will not solve every problem. But they are good, tried, tested and hailed by many. They are, well, solid. Let’s use them as a starting point, and if that’s not enough we can create more principles to help us along. But for now we have some way to go before we run out of SOLID, so get back to work and write some solid code: for me, for your colleagues and most of all for you.
See you next week and until then, refactor mercilessly.
Image courtesy of IQRemix at Flickr under a Creative Commons - Attribution, Share Alike license.
[1] Even though there are a lot of functional languages out there, the object oriented paradigm is still the one in widest use in production code today. I might write some thoughts on the architecture of functional programs in the future, but to be honest I don’t know enough about it to make any convincing arguments at the moment. But there is some viewing on the subject if you are functionally inclined.
[2] From the 1974 paper “On the role of scientific thought” by Edsger W. Dijkstra, or read the Wikipedia article for an overview.
[3] From the 1972 paper “On the Criteria To Be Used in Decomposing Systems into Modules” by David L. Parnas.
[4] Read more about SOLID in “Agile Software Development: Principles, Patterns, and Practices”.
[5] The principle was originally described by Bertrand Meyer in his 1988 book “Object Oriented Software Construction”.
[6] Robert C. Martin is one of the proponents of this technique and famously wrote about it in the article “The Open Closed Principle” at 8th Light’s blog.
[7] As always, Wikipedia is a good reference to the basics here.
[8] Originally introduced by Barbara H. Liskov in her 1987 keynote and subsequent paper “Data abstraction and hierarchy”.
[9] Barbara H. Liskov and Jeannette M. Wing in their paper “A Behavioral Notion of Subtyping” from 1994.
[10] Page two of the aforementioned paper.
[11] From Robert C. Martin’s book “Agile Software Development: Principles, Patterns and Practices” from 2002.
[12] From Peter Seibel’s book “Coders at Work: Reflections on the Craft of Programming” from 2009, quoting Joe Armstrong, the creator of Erlang, on software reusability.