Monday, July 20, 2020

Onion Architecture: An Unstable Equilibrium

The Onion architecture (Figure 1) is a well-known architectural pattern, although we often call it something else: Hexagonal architecture, Ports and Adapters, Clean architecture. The goal of all these patterns is the same: Decoupled layers and a clear separation of concerns. With that in place, high maintainability comes naturally.



Figure 1

If business rules change, the application and infrastructure layers shouldn't change. If the technology stack changes, the domain model shouldn't change, and so on. Over the years, I have had the chance to create and maintain codebases organized in this manner. In my experience, the processes of development and maintenance can be challenging, to say the least. In physics, there is a great analogy that describes this:
Unstable equilibrium: "a state of equilibrium in which a small disturbance will produce a large change" (Figure 2).


Figure 2

In his excellent talk, Functional architecture: The Pits of Success, Mark Seemann describes the process of maintaining the Onion architecture as a Sisyphean task. One of his main arguments is that developers must read a lot of thick books to be able to understand it. Undeniably, only a well-educated team can bring the Onion architecture to its full potential.

Is there something more to it besides the fact that you need to be a well-read developer?

No architecture can save you from poorly implemented object-oriented code, complex procedures, etc. I will not use these as arguments. Besides the Dependency inversion principle, the benefits of the Onion architecture rely heavily on a proper separation of concerns, and in that sense, the main problem that can occur is a leak of domain logic across the other layers.

In his book, Patterns of Enterprise Application Architecture, Martin Fowler states:
One of the hardest parts of working with domain logic seems to be that people often find it difficult to recognize what is domain logic and what is other forms of logic.

One of the most important aspects of code quality is that code should be exemplary. Every line of code, every pattern that you use will probably influence someone working on the same codebase to do something similar or the same as you did. So, even if a small leak happens here and there, it's only a matter of time before that leak replicates, and the whole effort of achieving the precious equilibrium goes to waste. Having logic inappropriately mixed between layers can be even worse than a monolith. To be fair, all layered architectures are subject to this potential issue, and the Onion architecture is no exception.

The goal of this article is not to state that the Onion architecture is bad, but to outline the difficulties that one can face during the processes of implementation and maintenance.

Mixing domain and presentation logic

The domain model is the central part of the Onion architecture. It encapsulates business logic and holds entities, value objects, aggregates, and domain services. Or, to put it another way, this layer is about the problem that you are solving. 

The presentation layer, on the other hand, is for presentation logic. Its concern is the presentation of domain/business rules to the user.

Even if domain and presentation logic don't have anything in common in theory, sometimes it's not so easy to distinguish between the two. Let's look at an example:

We have a task to create an application for student ranking. The algorithm for calculating points sums a student's grades. The student with the most points receives the highest rank and a reward. The students are sorted by rank in increasing order. If two or more students have the same number of points, they receive the same rank (the 1224 ranking algorithm). In the case of the same rank, we sort the students by personal number in increasing order.


Figure 3

The question is: where should these requirements be implemented? The calculation of the points is obviously business logic. What about ranking? It affects the presentation of the students to the user, right? Still, the rank-determining algorithm is a business concept, and it belongs to the domain model. Is sorting the students by personal number in case of the same rank a business rule? No. The only concern of that rule is the presentation to the user. A similar request in that context could be sorting the students in decreasing order or presenting the ranks with letters.
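As an illustration of the split (all names and shapes here are mine, not from a real codebase), the points calculation and the 1224 ranking could live in the domain model, while tie-breaking by personal number stays in the presentation layer:

```csharp
// Domain: summing grades and assigning 1224 (standard competition) ranks.
public static IReadOnlyList<RankedStudent> Rank(IReadOnlyList<Student> students)
{
    var ordered = students
        .Select(s => (Student: s, Points: s.Grades.Sum()))
        .OrderByDescending(x => x.Points)
        .ToList();

    var ranked = new List<RankedStudent>();
    for (var i = 0; i < ordered.Count; i++)
    {
        // Equal points share the rank of the first student in the group;
        // the next distinct score skips the shared positions (1, 2, 2, 4).
        var rank = i > 0 && ordered[i].Points == ordered[i - 1].Points
            ? ranked[i - 1].Rank
            : i + 1;
        ranked.Add(new RankedStudent(ordered[i].Student, ordered[i].Points, rank));
    }
    return ranked;
}

// Presentation: ordering ties by personal number is purely a display concern.
// var view = Rank(students).OrderBy(r => r.Rank).ThenBy(r => r.Student.PersonalNumber);
```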

In practice, of course, it usually gets much more complicated, and every case needs to be analyzed on its own to eliminate potential leaks.

Mixing domain and application layer logic

The application layer connects the business layer and the boundary technology (database access, HTTP framework, etc.). It depends only on the domain layer. For communication with the outer world, the application layer supplies ports and DTOs.

In the example that we mentioned above, the application layer would send a message through a port to get students. It would then pass the students to the ranking domain logic, and finally, it would send a message (again through a port) to create a side effect with the received results. It can call a web service, persist the results, or return them to the UI. In the best-case scenario, the cyclomatic complexity of this layer should be 1. That is also a good sign that no business logic has leaked into the application layer and that the domain layer is well-isolated. Conversely, ports in the domain layer are a clear sign of an application logic leak. This should be avoided because the separation of side effects from the domain logic reduces complexity.
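A minimal sketch of such an application service (port and type names are illustrative assumptions, not from a real codebase) — note there is no branching, so the cyclomatic complexity stays at 1:

```csharp
public class RankingService
{
    private readonly IStudentRepository students;   // port
    private readonly IRankingPresenter presenter;   // port

    public RankingService(IStudentRepository students, IRankingPresenter presenter)
    {
        this.students = students;
        this.presenter = presenter;
    }

    public void RankStudents()
    {
        // Fetch through a port, call the pure domain logic, push the
        // result through another port. No business rules live here.
        var ranked = Ranking.Rank(students.GetAll());
        presenter.Present(ranked);
    }
}
```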

Let's add a new business requirement:
If there are multiple students with the highest rank, we need to rank them by last year's rank. If the students also had the same rank last year, all of them receive a reward.

To get the last year's rank of a student, we need to call an external service (through a port). The API receives a student's personal number as a parameter.

Now, we need to make a design decision. If we want to keep the domain model isolated, we have to put some domain logic in the application layer (calling the API only if there are multiple students with the highest rank). On the other hand, to keep the wholeness of the domain model, we need to break the domain model isolation (calling the API from the domain model). Often, there is a tension between the pureness and the wholeness of the domain. The decision usually varies from case to case. I always try to keep the domain model pure, but again, we need to be careful that the leak of domain logic caused by pureness does not replicate in the future. In our example, one more option is to pass both the data needed for ranking and last year's ranks of all the students in a single domain model call. Sometimes this can also be an option if there are no technical boundaries (performance, a large amount of data, etc.).

Mixing domain and persistence logic

Often, we can find a lot of business logic in stored procedures, views, etc. However, SQL has limited structuring mechanisms, which can lead to code that is hard to maintain as business complexity increases.

Still, there are some scenarios where business logic belongs in SQL, for example, to address real performance issues. Of course, we should always be careful not to apply premature optimization. We should sacrifice the wholeness of the domain model only if there is an actual need for that. This decision should be based on the system requirements, not something that happens accidentally.

Again, let's change the business requirements: Only the final year students can participate in the competition.

This requirement can easily slip into a stored procedure, a view, or even the infrastructure layer. As we mentioned earlier, a stored procedure or a view can be a reasonable decision (a large amount of data, system requirements). Even if this decision is justified, we should also express the rule in the domain layer because we want it to be independent of the outside world. Of course, this will break the DRY principle. One of the potential solutions for that issue is the Specification pattern.
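A sketch of how the Specification pattern could express the rule once (names and shapes are assumptions of mine): the domain can evaluate it in memory, while the infrastructure can translate the same expression into SQL (e.g., via an ORM that understands expression trees).

```csharp
using System;
using System.Linq.Expressions;

public class FinalYearStudentSpecification
{
    // The single authoritative statement of the rule.
    public Expression<Func<Student, bool>> ToExpression() =>
        student => student.Year == Year.Final;

    // In-memory evaluation for the domain layer.
    public bool IsSatisfiedBy(Student student) =>
        ToExpression().Compile()(student);
}
```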

On the other hand, the filtering of the students in the infrastructure layer will cause the domain logic leak without any benefits.

Conclusion

The majority of the arguments from this article apply to any layered architecture. I argue that the Onion architecture is especially subject to them because it requires a well-isolated, rich domain model to provide real value.
We should also be aware that the examples were simplified. The fact is that in the real world, with deadlines, inexperienced developers on a team (which is a natural thing), and much more complex business requirements, it can get challenging to achieve and maintain the equilibrium. One of the main reasons is that even a small disturbance in the system tends to replicate itself. The best way to improve this is to educate the team in the underlying principles and then to practice making pragmatic decisions in different contexts.

Monday, April 20, 2020

Functional Mars Rover kata: Railways on Mars

In the previous article, we discussed and implemented domain types for the Mars Rover kata. Now, we will go through the rest of the requirements and implement the command execution functionality.

First, we need to parse an input sequence of characters in order to determine which movement functions should be executed:
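The parser might look along these lines (Command is the discriminated union from the previous article; error handling is simplified here):

```fsharp
let parseInput (input: string) =
    input
    |> Seq.map (fun c ->
        match c with
        | 'L' -> RotateLeft
        | 'R' -> RotateRight
        | 'M' -> Move
        | _ -> failwithf "Unknown command: %c" c)
    |> Seq.toList
```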

The output of the parseInput function is a list of commands (Command discriminated union type), so we need to map that output to appropriate functions before the execution of a rover movement:
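Presumably something like:

```fsharp
let movements = commands |> List.map toFunc
```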

Signature of the toFunc function is:
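Presumably along these lines (type names follow the previous article's sketch):

```fsharp
toFunc : Command -> (Rover -> Rover)
```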

We could easily execute the resulting list of functions just by applying List.fold because every function in the list has the following signature:
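That is, each function takes a rover state and returns a new one:

```fsharp
Rover -> Rover
```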

The output of the first function matches the input of the second function. The output of the second function matches the input of the third function, etc. These functions are easily composable.

But, things are not so simple. If we look at the requirements again, we can see that a rover can hit an obstacle. In that case, the rover should stop and return the position of the obstacle. A movement can be either successful or not:
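One way to express the two outcomes is F#'s built-in Result type, where the error track carries the rover stopped at the last possible point in front of the obstacle:

```fsharp
Result<Rover, Rover>
```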

Signature of the move function is:
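Presumably something like (the Obstacle type name is an assumption):

```fsharp
move : Command -> Obstacle list -> Rover -> Result<Rover, Rover>
```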

Now, imagine that we have a sequence of movement functions that we want to compose together. The output of the first movement function does not match the input of the second movement function. The output of the second movement function does not match the input of the third movement function, etc. The first two parameters (command and the list of obstacles) can be baked into the move function but, the issue then is the signature of the partially applied function:
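The partially applied function then has a switch-shaped signature — one input, two possible outputs:

```fsharp
Rover -> Result<Rover, Rover>
```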

Luckily, the solution to this problem already exists in functional programming: Railway Oriented Programming. I strongly suggest you read the original explanation by Scott Wlaschin.

I will introduce you to the basics and then we will try to apply the theory to our problem.

So, why railways? Well, railways have tracks. A track can be a good metaphor for a function. Functions are transformations that turn input into an output (Figure 1). The composition then comes naturally as long as the output of one function matches the input of some other function.


Figure 1.

When we try to execute a movement function, we either hit an obstacle or we proceed to the next position and execute the next function (if there is one). Railways have another great analogy for this scenario: switches (Figure 2):


Figure 2.

Now it's obvious why we cannot simply glue movement functions together. The move function may end up on one of two possible tracks. The correct way of applying the composition, in this case, looks like this:


Figure 3.

The top track is the happy path and the bottom track is the failure track. In order to achieve this in the code, we need a function that converts a switch function into a proper two-track function. 
Here's the implementation of that function:
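A minimal sketch of it:

```fsharp
// Converts a switch function into a proper two-track function.
let bind switchFunction twoTrackInput =
    match twoTrackInput with
    | Ok rover -> switchFunction rover
    | Error rover -> Error rover
```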

It takes a switch function (move in our case) as a parameter and returns a new function that takes a two-track input. If the input is Ok, it calls the switch function with the appropriate value; if the input is an Error, the switch function is skipped.

With everything in place, we can implement the main command execution function:
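One possible shape, folding the two-track results through bind (formatOutput is shown further below; parameter names are assumptions):

```fsharp
let execute obstacles rover commands =
    commands
    |> List.map (fun command -> move command obstacles)
    |> List.fold (fun result movement -> bind movement result) (Ok rover)
    |> formatOutput
```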

And here's the implementation with the F# bind operator:
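Sketched with a bind operator defined over F#'s built-in Result.bind:

```fsharp
let (>>=) result switchFunction = Result.bind switchFunction result

let execute obstacles rover commands =
    commands
    |> List.fold (fun result command -> result >>= move command obstacles) (Ok rover)
    |> formatOutput
```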

I decided that I will not use the "M-word" in this article but you can see that monads are not so scary after all.

The only thing left is to show the implementation of the formatOutput function:
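A sketch, assuming helper functions (toNumber, toLetter) that map the Coordinate and Direction union cases to "2"- and "N"-style text:

```fsharp
// "2:1:N" on success, "O:2:2:N" when the rover stopped at an obstacle.
let formatOutput result =
    let format rover =
        sprintf "%s:%s:%s"
            (toNumber rover.Location.X)
            (toNumber rover.Location.Y)
            (toLetter rover.Direction)
    match result with
    | Ok rover -> format rover
    | Error rover -> sprintf "O:%s" (format rover)
```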

We implemented all the requirements, but there is one more thing to mention. The actual signature of the execute function doesn't match the required signature:
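The mismatch could be summarized like this (the required shape, per the kata rules, takes the raw command sequence and returns the formatted finishing point):

```fsharp
// Actual:   Obstacle list -> Rover -> Command list -> string
// Required: string -> string
```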

There is a good reason for that and we will cover it in the next post.

Wednesday, December 4, 2019

Functional Mars Rover kata: Domain modeling

In the following article series, we are going to explore a functional approach to the Mars Rover kata. It is a well-known kata and you can find a lot of different solutions implemented in object-oriented programming languages like Java or C#. We are going to use F# for our implementation. Even though F# is a multi-paradigm programming language, the goal is to implement the kata in a purely functional manner.

The rules of the Mars Rover kata are:
  • You are given the initial starting point (1,1, N) of a rover.
  • 1,1 are the X, Y coordinates on a grid of (10,10).
  • 'N' is the direction it is facing (i.e. N, S, E, W).
  • 'L' and 'R' allow the rover to rotate left and right.
  • 'M' allows the rover to move one point in the current direction.
  • The rover receives a character array of commands e.g. RMMLM and returns the finishing point after the move e.g. 2:1:N.
  • Implement wrapping from one edge of the grid to another (planets are spheres after all).
  • The grid may have obstacles. If a given sequence of commands encounters an obstacle, the rover moves up to the last possible point, aborts the sequence and reports the obstacle, e.g. O:2:2:N.
The first step after the initial domain analysis should be to create domain types. The great thing about F# is that it has a built-in algebraic type system. That means that we can build new types from smaller types using composition. The composition of types is possible by "AND-ing" or "OR-ing" them together. With that approach, we can define the following domain types:

Coordinate: One OR Two OR Three...OR Ten.
Location: X coordinate AND Y coordinate.
Direction: North OR South OR East OR West.
Rover position: Location AND Direction.
Command: Rotate left OR Rotate right OR Move

The specified domain types are easy to implement with the F# type system. 
The "OR" types are implemented with discriminated unions:
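A sketch of the "OR" types (case names are illustrative):

```fsharp
type Coordinate =
    | One | Two | Three | Four | Five
    | Six | Seven | Eight | Nine | Ten

type Direction =
    | North | South | East | West

type Command =
    | RotateLeft | RotateRight | Move
```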

For the "AND" types we are going to use records:
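For example (the Rover record name is an assumption for the "Rover position" type):

```fsharp
type Location = { X: Coordinate; Y: Coordinate }

type Rover = { Location: Location; Direction: Direction }
```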

Rover's movement can be modeled as a sequence of states. A state transition can be generated from an input command and the current state of the rover. This is known as a state machine. The interesting thing about discriminated unions is that we can observe each case of a union as a state. We already implemented the needed states, so the next step should be to define the state transitions. For the implementation of the state transitions, we are going to use pattern matching. This is also a good opportunity to express the "Implement wrapping from one edge of the grid to another (planets are spheres after all)" requirement. We can do this with the explicit state transitions (Coordinate.Ten -> Coordinate.One and Coordinate.One -> Coordinate.Ten):
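A possible shape of the coordinate transitions, with the wrapping expressed explicitly (function names are assumptions):

```fsharp
let increase coordinate =
    match coordinate with
    | One -> Two | Two -> Three | Three -> Four | Four -> Five | Five -> Six
    | Six -> Seven | Seven -> Eight | Eight -> Nine | Nine -> Ten
    | Ten -> One   // wrap around the edge of the grid

let decrease coordinate =
    match coordinate with
    | Two -> One | Three -> Two | Four -> Three | Five -> Four | Six -> Five
    | Seven -> Six | Eight -> Seven | Nine -> Eight | Ten -> Nine
    | One -> Ten   // wrap around the edge of the grid
```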

Next, we define the transitions for the rotation in the same manner:
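For instance:

```fsharp
let rotateLeft direction =
    match direction with
    | North -> West | West -> South | South -> East | East -> North

let rotateRight direction =
    match direction with
    | North -> East | East -> South | South -> West | West -> North
```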

With the appropriate types and transitions in place, we can finally define functions that are going to generate the next position from the current position of the rover:
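A sketch of those functions, building on the transition functions for coordinates and directions described above (all names are illustrative):

```fsharp
let moveForward rover =
    let location = rover.Location
    let next =
        match rover.Direction with
        | North -> { location with Y = increase location.Y }
        | South -> { location with Y = decrease location.Y }
        | East  -> { location with X = increase location.X }
        | West  -> { location with X = decrease location.X }
    { rover with Location = next }

let rotate command rover =
    match command with
    | RotateLeft  -> { rover with Direction = rotateLeft rover.Direction }
    | RotateRight -> { rover with Direction = rotateRight rover.Direction }
    | Move        -> rover
```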

In the next article, we are going to implement the execution of a command sequence and explore the error handling mechanisms in functional programming.

Monday, October 28, 2019

A software developer as a software user

In the first few years of my career, I had an opportunity to work on projects which included mechanical design. I studied mechatronics, so besides writing software, I was also involved in the design of mechanical components until later on, when I completely shifted my focus to programming and software design. During this period, I mainly worked on the development of vending and parking ticket machines. The company that I worked for, as a service provider, had the task of designing and developing prototypes that would later be produced and distributed on the market.

To put things into context, let's assume that our task is to develop a prototype for a parking ticket machine. As always, the goal should be to provide value to the users. So, the first question should be: Who are the users of the parking ticket machine?

Before giving an answer, let's take a look at some definitions of a user:

"Someone who uses a product, machine, or service"

"A person who uses or operates something."

So, I guess the obvious answer is that users are persons who are buying parking tickets.

Unfortunately, this is not a complete answer.

The parking ticket machine needs maintenance:
- Collecting the money,
- Refilling the parking tickets,
- Repairing and replacing the mechanical parts.

So, in this context, a mechanic that will repair the machine is also a user. A person who is collecting the cash and refilling the parking tickets in the machine is also a user.

In order to provide a quality product, we need to think of all kinds of users. We need to design the machine in a way that a mechanic can access the mechanical components and repair or replace them. We need to make refilling the machine easy and efficient. My experience is that the team working on the design naturally started thinking about all of these aspects, and after discussions with the client, they became the acceptance criteria for the prototype.

I love the analogies between different engineering disciplines and I believe that there are a lot of similarities between them, even at the implementation level. Here's an example:


Figure 1.

In Figure 1. we can see a socket. When you search for an analogy in the software design, it's an interface. I can plug anything into the socket that respects "the contract" of it. The ability to change one part of the system without affecting the other one is the core idea of the loosely coupled systems. There are a lot of similar examples. 

In that sense, let's switch the context from the development of a parking ticket machine to the development of a web application and see how our existing analysis of the users applies. The ticket machine has a user interface, but as we saw, it also has internal components that need maintenance. The same can be said about the web application. Who is going to maintain it? A software developer, of course. So, if we apply the analogy with the ticket machine, a software developer is also a user. To me, this is even more important in software engineering: a mechanic will probably repair or replace some parts, but software maintainability is not just about repairing; it is also about adding new features and changing existing ones. One of the main reasons software was invented is to enable easier and cheaper changes to systems. If we create software that is rigid and hard to change, we are defeating its initial purpose.

As already stated, our goal should always be to provide value to the users. In the context of the ticket machine, it includes users that are maintaining the machine. The same applies to the software. If we want to provide value for the users, we should pay attention to the internal quality (the quality of the codebase) of the software as well. Developers are users, too. The point that I'm trying to make is not about terminology. It's about the mindset. We should treat developers as users because that will lead to the investment of time in the maintainability, which should increase the internal quality of the product.

So, if this makes sense, why do we often have legacy codebases that are hard to maintain? Why are we still trying to convince managers that the internal quality of the software is an important asset? Why don't many developers believe that providing internal software quality is equally as important as providing business value? Why don't we treat developers as users?
The shortest answer is: It's complicated. Many factors can be treated as causes, but for me, the main cause is the lack of useful measurements. This is what separates software engineering from other engineering disciplines like civil engineering or mechanical engineering.
Lines of code, code coverage, and cyclomatic complexity are measurements that we often use as tools to get feedback about our codebases, but from experience, we all know that this information is heavily context-dependent and hardly provides any useful value. Even so, this is no reason to give up. We know from the pioneers who came before us about best practices, antipatterns, and different approaches to achieving a good software design. Internal software quality IS achievable and should be treated as equally important as the external one.

If we do not treat developers as users and focus only on the external quality of the software, we will not have good results because the only constant thing in software development is that future changes are inevitable. The internal and external qualities of the software are not mutually exclusive. You need both in order to provide real value. 
Treating a developer as a user should provide more sense when it comes to investing time in the internal software quality. 

Tuesday, July 2, 2019

Messaging - the essence of OOP

Note: This article is highly influenced by the work of Sandi Metz. Her books and lectures helped to fill in the missing pieces in my current understanding of the object-oriented design.

We have probably all heard the following job interview question:
"What are the four pillars of object-oriented programming?"

The expected answer usually is: Encapsulation, Polymorphism, Abstraction, and Inheritance.

These are really important concepts and I'm sure that every developer knows something about them.
Still, when we open some legacy code base, the code usually doesn't look object-oriented at all.
The fact that we are programming in C#, Java, or some other object-oriented language does not guarantee that we are writing object-oriented code. In my experience, it's often the opposite: developers end up wrapping procedures in .cs or .java files.

The main problem with procedures is that they don't scale. Adding new features or changing behavior in large and complicated procedural code can be really time-consuming and stressful. I'm sure you already faced this kind of nightmare in your career.

So, if we consider that we know a lot about OOP and that we are practicing it for decades now, why are we still writing procedural code? Does that mean that OOP is bad? Well, maybe. But maybe we are missing something important.

Let's go back to the roots and look at a couple of quotes by Alan Kay, the inventor of OOP:

“I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages”

“The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.”

“I’m sorry that I long ago coined the term “objects” for this topic because it gets many people to focus on the lesser idea.”

“The big idea is “messaging””

Why is messaging so important, and what did Alan Kay have in mind exactly?



Figure 1.

In Figure 1, we can see the message-passing process. Object A is sending a message to object B. Object A is called the sending object and object B the receiving object.
So, you may ask yourself: what is the difference between a method call and sending a message? There are a lot of discussions on the internet about this topic. To me, technically, there is no difference, but the term "sending a message" better represents the idea that the correct method will be selected and that we do not know in advance which one will be executed.
One more subjective reason in favor of the message-passing terminology is that it makes me focus on the communication between objects rather than on the types and internal properties of those objects.

Open closed principle (OCP)

OCP is one of the main principles of OOP which states that a class should be open for extension and closed for modification. In terms of reducing the cost of change, this is the perfect scenario. However, in practice, it can be really hard to achieve. The reason for this is simple: we can't predict the future. In order to be able to extend the class, we need to detect points of variation but we can't be sure which changes will come in the future. To me, detection of points of variation is one of the most, if not the most important part of the design process, because it leads us to more stable and maintainable solutions. Focusing on messages can't help us to predict the points of variation, but it can help us to detect them in the process of design and implement them at the appropriate moment.

Enough with the theory, let's look at an example.

The domain of the problem that we are solving is the party organization process. Our main goal is to organize parties for our customers.

The starting point could look like this:
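A sketch of that starting point (method names are assumptions based on the description below):

```csharp
public class Party
{
    public void OrganizeBy(Technician technician)
    {
        technician.SetUpSpeakers();
        technician.SetUpMixer();
        technician.SetUpLighting();
    }
}
```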

The Party class is receiving Technician dependency as the parameter of the OrganizeBy method.
If we look closer, we can see that the Party class sends three messages to the Technician. Party knows that Technician is setting up speakers, mixer, and lighting. Based on these messages, we can conclude that the Party class knows too much about Technician. It knows how Technician is doing his job. Whenever Technician changes the way of setting up the equipment, Party needs to change as well. This is bad design, and we should hide how Technician is preparing a party. When we want to hide details, we introduce an abstraction, right? So, we can introduce an ITechnician interface and the code now looks like this:
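Presumably something like:

```csharp
public void OrganizeBy(ITechnician technician)
{
    // Same three concrete messages, now sent to an interface.
    technician.SetUpSpeakers();
    technician.SetUpMixer();
    technician.SetUpLighting();
}
```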

Is that better? No, because interfaces are not abstractions by themselves. We are still sending messages that are too concrete, only now, we are sending them to ITechnician. We should probably try to hide details by sending a more abstract message:
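For example (the SetUp message and the Equipment property are illustrative names):

```csharp
public void OrganizeBy(Technician technician)
{
    // One abstract message instead of three concrete ones.
    technician.SetUp(Equipment);
}
```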

Notice that I removed the ITechnician interface and returned to the concrete Technician as an argument. This is better, but let's make our domain more interesting. We have a new requirement for a chef that needs to make food for the party and a disc jockey (DJ) that should create a song list.
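A sketch of the extended version (message and property names are assumptions):

```csharp
public void OrganizeBy(Technician technician, Chef chef, DiscJockey discJockey)
{
    technician.SetUp(Equipment);
    chef.MakeFood(Guests);
    discJockey.CreateSongList(Guests);
}
```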


By implementing new requirements, we broke OCP. We changed the implementation of the Party class in order to add new behavior. This is often hard to avoid, but at least every change that we are making should take us closer to that goal. One more time, let's examine the messages. Based on the message examination, we can conclude that Technician, Chef, and DiscJockey are participating in the organization of a party. So, we can rename the messages in the following manner:
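For instance:

```csharp
public void OrganizeBy(Technician technician, Chef chef, DiscJockey discJockey)
{
    // Every dependency now receives the same abstract message.
    technician.Organize(Equipment);
    chef.Organize(Guests);
    discJockey.Organize(Guests);
}
```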

Now, we are sending Organize message to all dependencies. All three dependencies are organizers of a party. This was an important step because we just discovered a role: the Organizer role. We can implement that role with the role interface IOrganizer. Now, OrganizeBy method can look like this:
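A sketch of that version (the interface needs both Organize overloads for this to compile; the type check mirrors the problem described next):

```csharp
public interface IOrganizer
{
    void Organize(Equipment equipment);
    void Organize(Guests guests);
}

public void OrganizeBy(IEnumerable<IOrganizer> organizers)
{
    foreach (var organizer in organizers)
    {
        // Dispatching on the concrete type: a design smell.
        if (organizer is Technician)
            organizer.Organize(Equipment);
        else
            organizer.Organize(Guests);
    }
}
```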

The OrganizeBy method receives a collection of organizers as the parameter. Next, we iterate through the collection and, depending on the type of the organizer, send the correct message. This is again really bad design. Whenever we want to add a new organizer, we need to change the Party class again, so we didn't achieve anything in terms of reducing the cost of change. We should once more examine the messages we are sending. There are two different messages: organizer.Organize(Equipment) and organizer.Organize(Guests). Instead of passing Equipment and Guests, we should try to pass the Party object itself as the parameter of the Organize message. The Party class becomes:
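Sketched like this (IOrganizer now has a single Organize(Party) message):

```csharp
public void OrganizeBy(IEnumerable<IOrganizer> organizers)
{
    foreach (var organizer in organizers)
    {
        organizer.Organize(this);
    }
}
```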

What we achieved here is that we can now introduce any new organizer for a party without changing the Party class. In the context of organizing the party, OCP is respected. This example does not suggest any kind of pattern where you should pass a message-sending object as a parameter to its receivers. It's just an example of how, by concentrating on the messages passed between objects, we can achieve a proper object-oriented design.

In the context of the Party class, everything looks fine. But let's take a closer look at one of the organizers.
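For example (the SetUp call is an illustrative name):

```csharp
public class Technician : IOrganizer
{
    public void Organize(Party party)
    {
        // The technician only needs the equipment, but party.Guests and
        // even party.OrganizeBy are also reachable from here.
        party.Equipment.SetUp();
    }
}
```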

There is a problem here. Technician is sending the Equipment message to the Party, but it can also send Guests or even send the OrganizeBy message. The Interface segregation principle is broken, and Technician and the other organizers are forever bound to the organization of the Party class. In Ruby, for example, we get interface segregation out of the box, but in C# we need to put in more effort to achieve it. Remember that we discovered the Organizer role earlier? Well, the Organizer is a role, but there is one more hidden role: Organizable. A Party is organizable in the context of Equipment and Guests. So we can define two interfaces: IEquipmentOrganizable and IGuestsOrganizable.
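For instance:

```csharp
public interface IEquipmentOrganizable
{
    Equipment Equipment { get; }
}

public interface IGuestsOrganizable
{
    Guests Guests { get; }
}
```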

Now we can implement the Party class like this:
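One way to sketch it (the generic IOrganizer shape shown below is my assumption):

```csharp
public class Party : IEquipmentOrganizable, IGuestsOrganizable
{
    public Equipment Equipment { get; } = new Equipment();
    public Guests Guests { get; } = new Guests();

    public void OrganizeBy(IEnumerable<IOrganizer<Party>> organizers)
    {
        foreach (var organizer in organizers)
        {
            organizer.Organize(this);
        }
    }
}
```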

The organizer interface becomes:
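One possible C# shape uses a contravariant generic parameter, so that an organizer of a narrow role can stand in for an organizer of the whole Party:

```csharp
// Because of "in T", an IOrganizer<IEquipmentOrganizable> is usable
// wherever an IOrganizer<Party> is expected, since Party implements
// IEquipmentOrganizable.
public interface IOrganizer<in T>
{
    void Organize(T organizable);
}
```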

And organizers are:
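Sketched like this (method names on Equipment and Guests are illustrative):

```csharp
public class Technician : IOrganizer<IEquipmentOrganizable>
{
    public void Organize(IEquipmentOrganizable organizable)
    {
        // The technician can only reach the equipment.
        organizable.Equipment.SetUp();
    }
}

public class Chef : IOrganizer<IGuestsOrganizable>
{
    public void Organize(IGuestsOrganizable organizable)
    {
        // The chef can only reach the guests.
        organizable.Guests.PrepareFood();
    }
}
```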

Interface segregation is respected and organizers can organize basically everything that is organizable.

Abstractions

There are many definitions of what abstraction really is, but my favorite is by Robert C. Martin: 

"Abstraction is the elimination of the irrelevant and the amplification of the essential". 

Even if this may not sound clear at first, it's actually quite simple. Remember when we introduced IOrganizer interface? We eliminated the irrelevant details and amplified the essential in the context of party organization. Abstractions are about hiding irrelevant details. Let's go a few steps backward and look at the implementation of Technician and DiscJockey classes.


The usual process that I have encountered is that people try to extract an abstraction out of objects. I have been guilty of that in the past, too. So, when we look at Technician and Chef, can we define a proper abstraction? Well, it depends on what we want to abstract. In the context of the party organization, in my opinion, it's impossible to extract a proper abstraction that way.

Remember that the IOrganizer abstraction resulted from an analysis of the message-passing process between Party and its dependencies, and not from some extraction of common behavior from objects. The main point here is that abstractions live in messages, not in objects. When you take into account that abstraction discovery is probably one of the most critical (and hardest) parts of object-oriented design, you can see how extremely important this approach is.

Conclusion

Software design is about how we organize our code. Object-oriented programming should make our life easier in terms of responding to constant changes. OOP design should be more maintainable and flexible compared to the procedural one. The behavior of the system in the OOP world should be changed by the composition of objects and not by changing procedural parts of the code.

With all this said, by respecting OOP principles, we should be able to provide greater value to our stakeholders in terms of ability to respond to change.

In this article, we didn't cover some design pattern that we should use on a daily basis. The intention was to try to understand how switching our mindset to focus on messages between objects can help us deal with the complex parts of the object-oriented design. 

I will sum up with a quote by Sandi Metz: "You don't send messages because you have objects, you have objects because you send messages."


Friday, October 5, 2018

"Mockist" or "Classicist"? Both!

From time to time, an article or a talk with an interesting name pops up on the internet:
TDD is dead, Mocking sucks, Object-oriented programming is embarrassing, ...
In my opinion, these kinds of articles are good for the industry because they encourage discussion and inspire us to re-examine our views.
However, one thing most of these articles and talks have in common is the placement of paradigms and methodologies into the wrong context:

  • Trying to see the benefits of OOP (Object-oriented programming) over procedural programming on toy examples,
  • Trying to apply mocking on a procedural code,
  • Mocks are confused with stubs, etc.
This lack of understanding can be very harmful because it can lead to a wrong impression and premature rejection of the methodologies mentioned above.

“What I have seen is novices deciding to be “classicist” or “mockist” instead of learning how to pick a right tool for the job at hand”
Nat Pryce

Let us examine both TDD styles so we can understand the benefits and drawbacks of each one.   

“Classicist” style

“Classicist” style is described in Kent Beck's book Test Driven Development: By Example.

In short, the TDD cycle happens in the following steps:

  • Red: Create a test and make it fail,
  • Green: Make the test pass by any means necessary,
  • Refactor: Change the code to remove duplication and to improve the design while ensuring that all tests still pass.
An important thing to notice here is that design decisions are made in the refactor (blue) phase of the cycle.

This style of testing is based on state verification: we check the correctness of a unit by examining its state.
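A minimal sketch of what such a test looks like (the Cart class is a hypothetical example): we exercise the unit through its public API and then verify the observable state, with no mocks involved.

```typescript
// A hypothetical unit under test for a "classicist" style test.
class Cart {
  private prices: number[] = [];

  add(price: number): void {
    this.prices.push(price);
  }

  total(): number {
    return this.prices.reduce((sum, p) => sum + p, 0);
  }
}

// Red: write this before Cart exists and watch it fail.
// Green: implement add() and total() by any means necessary.
// Refactor: improve the design while the assertion stays green.
const cart = new Cart();
cart.add(10);
cart.add(20);
const total = cart.total(); // state verification, not interaction verification
```

The test never asks *how* the total was computed; it only inspects the resulting state.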

My first contact with unit tests and TDD was in an environment where isolating a unit from all of its dependencies with mocks and stubs was the standard way of writing tests. At the time, I heard a talk by Kent Beck where he said that he doesn't use mocks (with the exception of things like communication with a database, filesystem, etc.). Frankly, I wasn't sure how that was even possible. Where is the isolation of the unit? The answer is simple: a unit uses its real dependencies. There is no isolation of the kind I was used to.

One more thing that I didn't get at the time is that a unit of work isn't necessarily one class.

Roy Osherove, in The Art of Unit Testing, describes a unit of work as:
“A unit of work is the sum of actions that take place between the invocation of a public method in the system and a single noticeable end result by a test of that system. A noticeable end result can be observed without looking at the internal state of the system and only through its public APIs and behavior.”

This may result in tests that deal with a whole cluster of objects. Because of that, we might need a large number of tests to cover all the important parts of our code. J. B. Rainsberger explains this in his brilliant talk Integrated Tests Are a Scam.

Another issue is that, in my experience, the “classicist” style doesn't create the same useful pressure on design that the “mockist” style does.

“Mockist” style

The best resource for this style of TDD is Growing Object-Oriented Software, Guided by Tests.

A usual claim on the internet: mocks will result in brittle tests that bind to implementation details and will lead to destructive decoupling (test-induced damage).

Is this really true?

“The key in making great and growable systems is much more to design how its modules communicate rather than what their internal behaviors should be.”
Alan Kay

The majority of us came to the object-oriented world from procedural languages, and most of us retained our procedural habits. Trying to apply mocks to procedural code can (and will) cause the problems mentioned above.

The key to good object-oriented design lies in messaging. Whether we create a good design depends on our ability to make correct decisions about what belongs inside an object and what belongs outside of it. “Mockist” style TDD can be of great help in this complicated process.

Let's say that I'm practicing “mockist” style TDD during the implementation of some class Foo. I create a failing test, and to make it pass, I discover that I'm going to need a collaborator to do something for me. I create a mock of an IBar interface and assert that the mock has received a matching call. An important thing to point out is that I assert only on command messages to my collaborators (for reference, see Sandi Metz's talk “Magic Tricks of Unit Testing”). The great thing here is that even though I need a collaborator, I can concentrate on the implementation of the class Foo (thanks to mocks), and when I'm done, I can start implementing its collaborators. That should be straightforward, because I have already defined the contracts that my collaborators must adhere to.
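A sketch of this scenario in TypeScript, with a hand-rolled mock instead of a mocking library. Foo, IBar, and the `save` command are illustrative names; the essential part is that the test verifies an outgoing command message, not Foo's internal state.

```typescript
// The contract Foo discovered it needs from a collaborator.
interface IBar {
  save(value: string): void; // a command message: tell, don't ask
}

class Foo {
  constructor(private bar: IBar) {}

  process(input: string): void {
    // Foo's own responsibility: prepare the value, then delegate.
    this.bar.save(input.trim().toUpperCase());
  }
}

// A hand-rolled mock: implements the contract and records the
// command messages it receives, so the test can assert on them.
class MockBar implements IBar {
  received: string[] = [];
  save(value: string): void {
    this.received.push(value);
  }
}

const mock = new MockBar();
new Foo(mock).process("  hello ");
// Behavior verification: the mock received exactly one save() command
// with the expected argument.
```

Once this test passes, a real IBar implementation can be written against the same contract, in its own TDD cycle.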

SOLID

The behavior of an OOP system should be modified by object composition, not by changing the implementation of methods. The IBar interface that we introduced during the implementation of the Foo class enables us to change behavior through composition: we can inject another implementation of IBar, or perhaps a decorator or a composite.
This also allows us to comply with the Open/Closed Principle.
The Interface Segregation Principle is all about creating interfaces from the client's perspective, and that is exactly what we are doing here: we are creating interfaces from Foo's perspective.
In my experience, interfaces like these (created from the client's needs) also help with the Liskov Substitution Principle. And by thinking about what should be the responsibility of the Foo class and what should be its collaborators' role, we get feedback about the Single Responsibility Principle.
The implementation of the IBar interface is injected into the Foo class using dependency injection, so we respect the Dependency Inversion Principle as well.
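A small, self-contained sketch of changing behavior by composition (the class names are hypothetical): a decorator wraps any IBar implementation and adds logging, so behavior changes without modifying either the original implementation or its clients.

```typescript
interface IBar {
  save(value: string): void;
}

// A plain implementation of the contract.
class InMemoryBar implements IBar {
  stored: string[] = [];
  save(value: string): void {
    this.stored.push(value);
  }
}

// A decorator: same contract, extra behavior, then delegate.
// Clients depending on IBar are closed to modification but
// open to this kind of extension.
class LoggingBar implements IBar {
  log: string[] = [];
  constructor(private inner: IBar) {}

  save(value: string): void {
    this.log.push(`saving ${value}`); // added behavior
    this.inner.save(value);           // original behavior preserved
  }
}

const store = new InMemoryBar();
const decorated: IBar = new LoggingBar(store);
decorated.save("x");
```

Swapping `store` for `decorated` anywhere an IBar is injected changes the system's behavior purely by composition.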

I'm not claiming that TDD leads to a better design mechanically; a developer does that! If you mechanically apply the Single Responsibility Principle and dependency injection everywhere, without deep domain analysis and thinking, you will produce an incohesive design and brittle tests. If you don't follow the Law of Demeter, you will probably end up with deeply nested mocks, which is also bad for design, etc.

My point is that this methodology puts us in a position where we can make better design decisions. You can, of course, create a good and modular design with the “classicist” approach, but to me, the “mockist” style provides a better environment for it.

When I mentioned the Open/Closed Principle, I made a strong statement. One of the biggest problems with this principle is that we cannot predict the future. When is the right time to introduce an abstraction? There is no concrete answer; it all depends on the context. However, wrong abstractions can be (and are) painful (Sandi Metz: The Wrong Abstraction). If I'm working in an environment where I'm familiar with the domain and can detect (guess with enough confidence) the abstractions, “mockist” is the right tool for the job. However, if I need to explore, I'll start with the “classicist” style and switch to “mockist” once I'm confident enough.

Summary

The “classicist” style can lead to a large number of integrated tests. Mocks are not for procedural code. The “mockist” style creates positive pressure on design. To give us the best possible results, the “mockist” style requires deep domain knowledge about the problem we are solving (deep domain knowledge is always preferable, but I think it's crucial here in order to detect correct abstractions as early as possible) and a deep understanding of object-oriented principles.

Being able to pick the right tool for the job means that you have a choice. To have a choice means that you know multiple ways to do one thing; if you don't, then there is no choice. If you know only one design pattern or one style of TDD, that's the way you will always do things. Only if you truly understand different approaches can you choose the one most suitable for you and for the problem you are currently solving.