Friday, September 18, 2009

Business Logic Patterns

All of the patterns here are taken from Martin Fowler’s excellent book Patterns of Enterprise Application Architecture.  The following is my interpretation of these patterns and the ramifications of each one.  I highly recommend that you refer back to PoEAA for a full explanation of each pattern.

I presented a session on the first day of TechDays in Vancouver called “Layers: The Secret Language of Architects.”  As part of that presentation we discussed some of the different patterns that are used for coding our business logic.  After the session was over, several people commented that they particularly liked this part of the session and encouraged me to blog about it, so without further ado…

The Patterns

Transaction Script

Transaction Script is the simplest of the three patterns.  It is plain procedural code that executes the business logic with straightforward programming constructs and does not use object-oriented techniques.

Transaction Script should be used when the application has very simple business logic and is not expected to grow much beyond the initial development effort.  It is very quick and easy to get going with Transaction Script because it does not require much in terms of supporting infrastructure.  However, I strongly recommend that if you decide to use Transaction Script, you make explicit seams around it so that when (yes, when, not if) you decide that your application has outgrown the limits of the pattern, you only need to rewrite the business logic and not the entire application.  As the complexity of the system grows, Transaction Script breaks down quite fast.  It causes a lot of duplication and often results in rigid and brittle systems.
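One way to create such a seam is to hide the transaction script behind an interface so that callers never depend on the concrete class.  This is a minimal sketch; the interface name is mine, not from the original post:

```csharp
// A hypothetical seam: the rest of the application depends only on this
// interface.  When the Transaction Script implementation is outgrown, it can
// be swapped for a Domain Model behind the same contract without touching
// the callers.
public interface IFundsTransferService
{
    void TransferFunds(int fromAccountID, int toAccountID, decimal amount);
}
```

The transaction script class would then simply declare that it implements this interface.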

Here is a sample of some Transaction Script code.   I’ll be using the canonical example of a funds transfer where a given amount is transferred from one account to another.  The logic is intentionally kept very simple so that it’s easier to talk about the different responsibilities being addressed in the code.

public class FundsTransferService
{
    public void TransferFunds(int fromAccountID, int toAccountID, decimal amount)
    {
        AccountDataAccess dataAccess = new AccountDataAccess();

        decimal fromAccountBalance = dataAccess.GetAccountBalance(fromAccountID);
        decimal toAccountBalance = dataAccess.GetAccountBalance(toAccountID);

        fromAccountBalance -= amount;
        toAccountBalance += amount;

        dataAccess.SetAccountBalance(fromAccountID, fromAccountBalance);
        dataAccess.SetAccountBalance(toAccountID, toAccountBalance);
    }
}

The code is very simple and straightforward.  A data access class is used to retrieve the current balance of each account, the logic is performed, and then the balances are saved back through the data access class.  Note that of the 7 lines of code in the sample, only 2 are actually business logic.  The rest are infrastructure concerns.

Table Module

Table Module uses a single instance to represent all of the rows in a database table.  Each class wraps some representation of the database table (e.g. a DataSet) and pulls out a single row when it needs to operate on a single item.  The distinction of this pattern is that the business layer is written in an object-oriented manner, but instead of storing the data in the objects, the data is stored in the DataSet.

Although DataSets were quite popular in the .NET world, very few architectures made use of the Table Module pattern.  DataSets were usually passed around to represent state, but the object-oriented representation of the business logic was not present.  Instead, DataSets were used in Transaction Script architectures, where classes simply grouped related methods of procedural code.  Table Module scales with complexity better than Transaction Script because it can take advantage of object-oriented techniques, but it still gets increasingly difficult to implement new functionality because of the heavy infrastructure concerns that remain in the code.

Here is the same logic rewritten in the Table Module pattern.


public class FundsTransferService
{
    public void TransferFunds(int fromAccountID, int toAccountID, decimal amount)
    {
        Account account = Account.Load();
        account.TransferFunds(fromAccountID, toAccountID, amount);
        account.Save();
    }
}

public class Account
{
    private const string AccountTable = "Account";
    private const string IDColumn = "ID";
    private const string BalanceColumn = "Balance";

    private readonly DataSet dataSet;

    public static Account Load()
    {
        AccountDataAccess accountDataAccess = new AccountDataAccess();
        DataSet dataSet = accountDataAccess.GetAccountTable();
        return new Account(dataSet);
    }

    public Account(DataSet dataSet)
    {
        this.dataSet = dataSet;
    }

    // Declared virtual so that a subclass can specialize the business logic
    // while reusing the surrounding infrastructure.
    public virtual void TransferFunds(int fromAccountID, int toAccountID, decimal amount)
    {
        DataRow fromAccountRow = GetAccountRow(fromAccountID);
        DataRow toAccountRow = GetAccountRow(toAccountID);

        decimal fromAccountBalance = (decimal) fromAccountRow[BalanceColumn];
        decimal toAccountBalance = (decimal) toAccountRow[BalanceColumn];

        fromAccountBalance -= amount;
        toAccountBalance += amount;

        fromAccountRow[BalanceColumn] = fromAccountBalance;
        toAccountRow[BalanceColumn] = toAccountBalance;
    }

    private DataRow GetAccountRow(int accountID)
    {
        foreach (DataRow accountRow in dataSet.Tables[AccountTable].Rows)
        {
            if ((int) accountRow[IDColumn] == accountID)
            {
                return accountRow;
            }
        }
        return null;
    }

    public void Save()
    {
        // In a real system the changes would be written back to the database
        // (e.g. via a DataAdapter) before the DataSet accepts them.
        dataSet.AcceptChanges();
    }
}

There is a lot more code in this example, but note how a subclass of Account could override the TransferFunds method if the business logic required specialized logic.  The subclass could reuse all of the infrastructure code and just change the business logic.
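For example, a hypothetical specialization might look like this (assuming TransferFunds is declared virtual in the base class; the fee amounts and the house account ID here are made up for illustration):

```csharp
using System.Data;

// Hypothetical specialization of the Account class from the sample above.
// It overrides only the business logic and reuses all of the DataSet
// infrastructure (row lookup, loading, saving) from the base class.
public class FeeChargingAccount : Account
{
    private const int FeeAccountID = 999;       // hypothetical house account that collects fees
    private const decimal TransferFee = 1.50m;  // hypothetical flat fee per transfer

    public FeeChargingAccount(DataSet dataSet) : base(dataSet) { }

    public override void TransferFunds(int fromAccountID, int toAccountID, decimal amount)
    {
        base.TransferFunds(fromAccountID, toAccountID, amount);
        // The fee is itself just another transfer, into the house account.
        base.TransferFunds(fromAccountID, FeeAccountID, TransferFee);
    }
}
```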

Domain Model

Domain Model is an object model that encapsulates both data and behaviour.  It takes full advantage of object-oriented principles such as encapsulation and polymorphism.  The Domain Model pattern is the best of the three at representing complex domains.  It is the isolation of the domain from the infrastructure, combined with the modeling power of object-oriented languages, that allows this pattern to scale well with complexity.

Business logic implemented with a Domain Model requires significant effort to isolate it from infrastructure concerns.  Because of this additional effort, it initially takes longer to develop systems using the Domain Model pattern.  However, thanks to its powerful means of representing business logic, development remains relatively easy (compared to the other patterns, that is) as the system grows in complexity.  An initial effort to set up the surrounding infrastructure is rewarded later on by allowing the developers to maintain a constant rhythm and speed of development.

Here is the funds transfer logic as represented using the Domain Model pattern.


public class FundsTransferService
{
    public void TransferFunds(Account fromAccount, Account toAccount, decimal amount)
    {
        fromAccount.Debit(amount);
        toAccount.Credit(amount);
    }
}

public class Account
{
    private decimal balance;

    public void Debit(decimal amount)
    {
        balance -= amount;
    }

    public void Credit(decimal amount)
    {
        balance += amount;
    }
}

Note the simplicity of this solution and that every single line of code directly represents business logic.  It is this isolation and focus on business logic that allows it to scale well with complexity.  The absence of persistence in this code sample is intentional; persistence techniques are discussed next.
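To illustrate why this scales, here is a hypothetical extension of the Account class (the no-overdraft rule is my own example, not from the original post).  A new business rule lands in exactly one place; no service or data access code has to change.

```csharp
using System;

public class Account
{
    private decimal balance;

    public void Debit(decimal amount)
    {
        // A new rule (no overdrafts) is added in exactly one place.
        if (amount > balance)
        {
            throw new InvalidOperationException("Insufficient funds.");
        }
        balance -= amount;
    }

    public void Credit(decimal amount)
    {
        balance += amount;
    }
}
```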

Persistence in a Domain Model

In the TechDays presentation we presented Active Record and Domain Model as two separate patterns.  This was a conscious departure from Fowler’s catalog because it reflects the way we have observed systems being built in the wild; that, and the fact that none of us had actually ever seen a system that used Table Module.  The prevalence of Active Record tools and frameworks has caused it to be considered a different pattern from Domain Model.  If you go by Fowler’s definition, though, Active Record is a persistence pattern for a Domain Model.

Active Record

Active Record uses a one-to-one mapping between Domain Model classes and tables in the database.  Each class is mapped to a table, each instance is mapped to a row, and each field is mapped to a column.  Classes are also responsible for loading and saving themselves to the database.

When using Active Record for persistence, we must add some more code to our Domain Model.


public class FundsTransferService
{
    public void TransferFunds(int fromAccountID, int toAccountID, decimal amount)
    {
        Account fromAccount = Account.Load(fromAccountID);
        Account toAccount = Account.Load(toAccountID);

        fromAccount.Debit(amount);
        toAccount.Credit(amount);

        fromAccount.Save();
        toAccount.Save();
    }
}

public class Account
{
    private decimal balance;

    public void Debit(decimal amount)
    {
        balance -= amount;
    }

    public void Credit(decimal amount)
    {
        balance += amount;
    }

    public static Account Load(int accountID)
    {
        // TODO: Implement this method
        throw new NotImplementedException();
    }

    public void Save()
    {
        // TODO: Implement this method
        throw new NotImplementedException();
    }
}

This technique combines the responsibilities of persistence and business logic in the same class, so there is some mixing of concerns.  There are several Active Record frameworks that allow you to remove much of this code from the entities and let the framework handle it, but the concepts remain the same.

Object Relational Mapper

Object Relational Mapper is a pattern that puts a high value on Persistence Ignorance in the Domain Model.  The Domain Model should know nothing about how, or even if, it is persisted to the database.  An Object Relational Mapper is used to map between the Domain Model and the relational database.  Unlike Active Record, the two models can be quite different and take advantage of the strengths of each paradigm.  In order to isolate the Domain Model from persistence knowledge, it is usually necessary to use a Service Facade layer to coordinate the usage of the Object Relational Mapper.

Let’s have a look at the added infrastructure required to use the Object Relational Mapper pattern.


public class FundsTransferFacade
{
    private readonly IAccountRepository accountRepository;
    private readonly IFundsTransferService fundsTransferService;

    public FundsTransferFacade(IAccountRepository accountRepository, IFundsTransferService fundsTransferService)
    {
        this.accountRepository = accountRepository;
        this.fundsTransferService = fundsTransferService;
    }

    public void TransferFunds(int fromAccountID, int toAccountID, decimal amount)
    {
        Account fromAccount = accountRepository.Get(fromAccountID);
        Account toAccount = accountRepository.Get(toAccountID);

        fundsTransferService.TransferFunds(fromAccount, toAccount, amount);
    }
}

Here we have added a Service Facade layer that handles the translation into the Domain Model "language".  I am also assuming that an Object Relational Mapper is being used, and that Aspect Oriented Programming wraps calls to the Facade layer to initialize and clean up the ORM.  That is write-once code and is not worth showing here.  This is the added infrastructure that is required to get going with a Domain Model, but once it is in place, we can focus on the business logic.
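Purely as a rough sketch of the shape of that write-once plumbing (IUnitOfWork and UnitOfWorkRunner are illustrative abstractions of my own, not a specific ORM's API), the wrapping could look something like this:

```csharp
using System;

// Hypothetical write-once plumbing: every facade call runs inside an ORM
// "unit of work" that is committed on success and disposed otherwise.
public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public static class UnitOfWorkRunner
{
    public static void Run(Func<IUnitOfWork> unitOfWorkFactory, Action facadeCall)
    {
        using (IUnitOfWork unitOfWork = unitOfWorkFactory())
        {
            facadeCall();
            unitOfWork.Commit(); // Dispose without Commit acts as a rollback
        }
    }
}
```

An AOP framework or a simple decorator would apply Run around every public facade method, so the Domain Model itself never sees any of this.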

Summary

We had a look at three different patterns for representing business logic.  We examined some code samples to illustrate how each pattern handles increased complexity.  Finally, we looked at some of the infrastructure options that we need to implement when using the Domain Model pattern.  I hope this was a valuable exercise, and if not, please leave a comment so that I can improve it.

Tuesday, September 15, 2009

TechDays 2009 Retrospective

I’ve just arrived home after the end of TechDays Vancouver and I feel compelled to write a retrospective on my experience.  I have to say that TechDays was nothing like I expected.  I want to write this post with complete honesty to best tell the full story.  I hope that this transparency is appreciated.

About 6 weeks ago there was a flurry of activity in the blog-o-sphere about the lack of fundamentals covered in TechDays sessions.  I replied to this with my own comments, as I felt at the time that, in conferences like TechDays, Microsoft was so intent on marketing that they would not allow any fundamentals sessions.  This blog post got me in trouble, because no more than a day later John Bristowe (Developer Evangelist for western Canada) contacted me to ask if I was interested in presenting at TechDays in Vancouver.  I was a little hesitant at first, especially given my thinking at the time, but I decided to explore the opportunity and see where it went.  John sent me a list of possible sessions and one session jumped out at me right away.  It was entitled “Test-Driven Development Techniques” but I didn’t really have a lot more to go on.  This was a session that was originally presented at TechEd, so I was given the slides, demo and a video of the original presentation.  I thought the session was decent enough: it delivered content that would be valuable to the community and did not focus on marketing a Microsoft product.  I have since heard second-, third- and fourth-hand accounts of how John and team had to fight to keep this and another session in TechDays because they did not market a Microsoft product enough.

A little while later Justice Gray contacted me with a very cryptic message, but hinting at the fact that there might be opportunity to present some content that was more centered around development foundations.  I responded to him with my ideas, and sure enough, it was soon confirmed that Microsoft had agreed to add an additional Developer Foundations track to TechDays in Vancouver.  I was quite excited about this opportunity because I had a pretty good idea that Justice and I were of similar mind and that this session would be quite aligned with my ideals about software development.  Once the abstracts had been made available to me, I selected the session about the S.O.L.I.D. design principles, which I was quite excited about.  I did feel quite under the gun given that TechDays was not far away and I had already agreed to present another session that I needed to prepare for.  Given that I was already feeling swamped, not to mention I was in the middle of a home renovation, I asked Justice if I could give my input into the vision of the presentation, but ultimately I did not feel that I had the time to fully write it.

Over the next few weeks I was preparing madly, rewriting the demos and changing up the first couple of slides for the TDD talk.  I was collaborating with Justice on the content for the SOLID talk and still trying to find some time to drywall in the evenings.  Once the planning for the Developer Foundations track started coming together, Justice and Peter Ritchie decided on the order of the sessions, which meant that my TDD and SOLID talks were back-to-back on the first day.  I wasn’t comfortable with that because I felt I would not be able to give either talk my full effort if they were so close together.  Given this conflict, it was decided that I would switch talks in the Developer Foundations track on the first day and present the Layers session instead.  It wasn’t supposed to be a big deal, given that Adam Dymitruk was responsible for the content of the session.  So now I had somehow gone from zero to three sessions in the span of about three weeks and was wondering how the hell I had got myself into such a mess!

At this point my home renovation got put on hold.  I told my wife that I was not allowed to complete the next step until after TechDays was over.  I spent a lot of time preparing for all of the sessions and have since decided that it is at least as much work to prepare for a session that someone else wrote as it is to prepare your own.  The only difference is that you tend to receive the content later when you don’t write it yourself.

I arrived at the first day of TechDays with a little trepidation; this was, after all, my first time speaking at a conference of any size.  I watched Adam present S.O.L.I.D. since I was going to present it the next day.  My first session was my TDD talk in the Core Fundamentals and Best Practices track.  Looking back on it now, I think I was so focused on the presentation that I barely noticed that there were over 200 people in the room watching me.  I do feel that it went quite well and that people got the point of what I was trying to convey.  I had an attendee come up to me afterwards and comment on how expressive my test method names were.  I used English-readable sentences, and he told me that he usually liked to try to keep his method names under 6 characters, but thought it was pretty cool to see some expressive method names.  Now this was not a focus of the presentation at all, but if this is all that he takes home from the session then I think we have to count that as a win.

After lunch I presented on Layers and I think it went alright.  Of the three presentations that I was presenting, I felt the least comfortable with its content.  Reception to the talk was quite positive, so I’m fairly pleased with how it turned out.  We weren’t quite sure what to expect in the Developer Foundations track given its late addition to the conference, but we were relatively pleased with the 30 or 40 people that were in attendance.  We had roughly the same level of attendance for all four sessions in the track that day.

I arrived early on day two since I was presenting S.O.L.I.D. in the first time slot of the day.  Justice and I were talking as we were setting up and wondering if we could realistically expect anyone to show up.  All of the sessions in this track had been presented already the day before, so we figured that attendance would have to be less than the previous day.  Needless to say, we were both absolutely shocked when there were not only more people in attendance than the previous day, but the room was packed!  Seriously!  There were people standing in the back!  I was quite excited about this talk since it was my favourite of the three I was doing, and I was really happy with how it went.  After I finished, an attendee came up to me and told me that he had flown in from Calgary just to attend the Developer Foundations track.  Think about that for a minute.  TechDays is happening in Calgary in November, but he felt that the content of the Developer Foundations track was important enough to spend the extra money to fly to Vancouver just to attend this track.  This absolutely floored me and is probably why I have felt the need to ramble on here for so long.

The rest of the day I spent relaxing and taking in the other speakers in the Developer Foundations track and the room was just as packed for each one of them.  Something happened between the first and second day of TechDays.  Something compelled a lot more people to attend this track the second day and I’m still at a loss to explain it.  I’m hoping that when we get to read the evaluations that it will shed some light on this.

The whole experience was much more than I imagined!  I will freely admit that there was about a 0% chance that I would have attended TechDays if I hadn’t had the opportunity to speak, so I am so glad that John and Justice took a chance on me.  I do mean that they both took a chance on me because I knew both of them only by reputation and by their blogs, and I’m pretty sure they knew even less about me.  I understand the risk that they both took putting their faith in me and I only hope that I met their expectations.

I want to publicly thank John for putting together the Core Fundamentals and Best Practices track, which I have a sneaking suspicion had something to do with the low attendance in the Developer Foundations track on the first day.  Whatever battles you had to fight to put this together were worth it.

I also want to publicly thank Justice and Peter for putting together the Developer Foundations track on such short notice.  I know that it far exceeded my expectations and I’m certain that it exceeded the expectations of everyone involved.  The biggest thanks goes out to all of the attendees who chose to come check out my sessions and the other sessions in the Developer Foundations track!  Microsoft has said that it was an experimental track and that, if the response was positive, it would be continued in the future.  All I can say is that I’m looking forward to this track next year!

Sunday, September 6, 2009

Why I Love NUnit

I have to admit that I have very little experience with other unit testing frameworks.  I’ve never felt the urge to switch because NUnit has served me so well over the years.  NUnit is simple, effective and does not create additional friction.  NUnit does take some flak for not having changed much over the years, but I think that is a strength, because it hasn’t had to.

Recently I took the plunge into MSTest.  I hadn’t heard many good reviews, but from the outside it looked “the same”.  I mean, all I need to do is replace [TestFixture] with [TestClass] and [Test] with [TestMethod] and it would all work, right?  Wrong.  I was about to post “Why I Hate MSTest” but in the interest of staying positive, I’m posting “Why I Love NUnit” instead.

NUnit stays out of the way.  It does not impose any constraints on the project structure I choose.  I can choose to put test classes in the same assembly as the classes they test, or I can put them in a separate assembly.

Asserts just work.  This is especially evident when comparing collections.  I can compare collections of different types; it doesn’t matter whether it is a Collection&lt;T&gt;, an IEnumerable&lt;T&gt; or an IList, NUnit is smart enough to compare the items in each collection.  I had not even thought of this until I switched frameworks and it didn’t work.
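A small sketch of what this looks like in practice, using NUnit’s classic assert syntax (the test names and values here are hypothetical):

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;
using NUnit.Framework;

[TestFixture]
public class CollectionAssertTests
{
    [Test]
    public void Collections_of_different_types_with_equal_items_are_equal()
    {
        IList<int> expected = new List<int> { 1, 2, 3 };
        Collection<int> actual = new Collection<int> { 1, 2, 3 };

        // NUnit compares the items in each collection,
        // not the concrete collection types.
        Assert.AreEqual(expected, actual);
    }
}
```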

When a test fails, NUnit tells me why the test failed with a very descriptive message.  This is extremely important because I want to maintain the short cycle rhythm of TDD.  Some frameworks rely more on providing links to the code that failed rather than providing a helpful message, but when running tests from the command line or with TestDriven.NET I can only rely on the error message.

TDD is a practice that strives to create a constant rhythm for the developer in order to maintain constant progress.  NUnit is a framework that just stays out of the way and allows the developer to maintain that rhythm without fuss or interruption.  Simple, concise, effective.  What else would I want?