I originally intended this as a short comment on "SRP, as easy as 123", but since that blog won't let me post it as a comment for some reason, I decided to put it here. Go read the original article for context. Essentially, the author wishes to enlighten us about the proper interpretation of the Single Responsibility Principle, one of the many "tried and true" pieces of wisdom promoted by OO gurus.
While the considerations described in the article are all fine, and a nice example of the kinds of reasoning involved in building software, the article also exposes a real problem with the "principle" as such, and it's a pity that it doesn't hit the target more directly.
It becomes plain that the principle cannot be interpreted or applied without a good deal of informed speculation about what is going to happen in the future. Whether or not a particular structuring of code is good (enough) depends heavily on that future stream of events and interactions, some of which are more likely to occur than others (and their probabilities had better be conditioned on the circumstances of the individual case). Notice, however, how the principle misleads people into thinking solely in terms of some mystical, absolute characteristics of programming language classes. The result is pointless and confusing discussion on that shallow "ontological" level (is my class more responsible than yours? and how exactly do you count "reasons for change"?). The same kind of insidious error also hides in other OO principles ("is class X more abstract than class Y?").
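To make the counting problem concrete, here is a hypothetical sketch (the `Report` class and its methods are invented for illustration, not taken from the original article). Reasonable people can tally its "reasons for change" as one, two, or three, depending on how they carve up the future:

```python
# A hypothetical class: how many "reasons for change" does it have?
# One (it's all "reporting")? Two (formatting vs. storage)? More?
# The answer depends on which future changes you consider likely,
# not on any intrinsic property of the class.
class Report:
    def __init__(self, data):
        self.data = data

    def render_html(self):
        # Changes if the presentation format changes.
        rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>"
                       for k, v in self.data.items())
        return f"<table>{rows}</table>"

    def save(self, path):
        # Changes if the storage mechanism changes.
        with open(path, "w") as f:
            f.write(self.render_html())
```

Nothing in the code itself settles the question; only a forecast of future modifications does.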
It would therefore be much easier to discuss these matters not in terms of perceived (or misperceived) structural properties, but simply by addressing the known future risks and the possible strategies of risk avoidance, some of which may be implemented in structure, others in the engineering process or tools. For example, there is also a risk associated with dividing code into unnecessarily many interacting modules (or deep class hierarchies), especially if future maintainers lack adequate tools to piece together what the developer has pulled apart. Furthermore, there is an immediate cost to making the structure more amenable to change or extension; if the expected kinds of change never happen, the effort was wasted. The problem with SRP is that it tends to focus your mind on one thing and make you forget these alternative risks (at least for a while), and the problem with principles in general is that they may foster irresponsibility or magical thinking ("if only we obey the Rules, everything will go well; and if something hasn't gone well, we must have disobeyed the Rules").
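The cost of over-division can also be sketched. Below, the same hypothetical behaviour is split "by responsibility" into three collaborating classes (again invented for illustration): each piece is now trivial, but a maintainer tracing one feature must follow all three, and the wiring code is pure overhead if the anticipated changes never arrive:

```python
# The same behaviour pulled apart "by responsibility".
# Each class is simpler in isolation, but understanding how a report
# gets saved now requires reading three classes plus their wiring.
class HtmlRenderer:
    def render(self, data):
        rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>"
                       for k, v in data.items())
        return f"<table>{rows}</table>"

class FileSink:
    def write(self, path, text):
        with open(path, "w") as f:
            f.write(text)

class Report:
    def __init__(self, data, renderer, sink):
        self.data = data
        self.renderer = renderer
        self.sink = sink

    def save(self, path):
        self.sink.write(path, self.renderer.render(self.data))
```

Whether this split is an improvement or a liability cannot be read off the structure; it depends on which risks you expect to materialize.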
In conclusion, software design comes down to decision making under uncertainty, and software engineering principles are just an attempt to capture some aspects of that process in short catchphrases. They may serve us well as reminders if we're already capable of the kind of complex decision making required, but they are no substitute for careful thinking, and they make a poor learning aid. In the end, the narrator of the original article has demonstrated his usefulness to his colleague, while SRP has not.