Sunday, August 19, 2012

Dependency Injection: How Do You Find the Balance?

As a developer, I am constantly trying to find the right balance -- to figure out the right level of abstraction for the current project.  If we add too much (unneeded) abstraction, we end up with code that may be more difficult to debug and maintain.  If we add too little (needed) abstraction, we end up with code that is difficult to extend and maintain.  Somewhere in between, we have a good balance that leads to the optimum level of maintainability for the current environment.

Technique 1: Add It When You Need It
I'm a big fan of not adding abstraction until you need it.  This is a technique that Robert C. Martin recommends in Agile Principles, Patterns, and Practices in C# (Amazon Link).  This is my initial reaction to abstractions in code -- primarily because I've been burned by some badly implemented abstractions in the past.  After dealing with abstractions that didn't add benefit to the application (and only complicated maintenance), my knee-jerk reaction is to avoid adding abstractions until they're really necessary.

This is not to say that abstractions are bad.  We just need to make sure that they are relevant to the code that we are building.  The bad implementations that I've run across have generally been the result of what I call "white paper architecture".  This happens when the application designer reads a white paper on how to architect an application and decides to implement it without considering the implications for the specific environment.  I'll give two examples.

Example 1: I ended up as primary support on an application that made use of base classes for the forms.  In itself, this isn't a bad thing.  The problem was in the implementation.  If you did not use the base class, then the form would not work at all.  This led to much gnashing of teeth.  In a more useful scenario, a base class would add specific functionality.  But if the base class was not used, the form would still work (just without the extra features).
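To illustrate what I mean (with made-up names), here's a sketch of the more useful approach: the base class adds a feature, but it is purely additive, so a form that skips the base class still works.

    using System;
    using System.Diagnostics;
    using System.Windows.Forms;

    // The base class adds behavior (logging form activity, say),
    // but nothing about the form depends on it being there.
    public class LoggingFormBase : Form
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            Trace.WriteLine("Form loaded: " + Name);
        }
    }

    // Opts in to the extra logging:
    public class CustomerForm : LoggingFormBase { }

    // Still a perfectly working form -- just without the logging:
    public class ReportForm : Form { }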

Example 2: I helped someone out on another project (fortunately, I didn't end up supporting this application myself).  This application was abstracted out too far.  In order to add a new field (meaning, from the data store to the screen), it was necessary to modify 17 files (from data storage, through ORM, objects on the server side, DTOs on the server side, through the service, DTOs on the client side, objects on the client side, to the presentation layer).  And unfortunately, if you missed a file it did not result in a compile-time error; it would show up as a run-time error.
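Here's a simplified sketch (hypothetical types, and only two of the seventeen layers) of why a missed file shows up at run time instead of compile time: the hand-written mapping between layers compiles just fine even when a field is left out.

    public class OrderEntity
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }  // newly added field
    }

    public class OrderDto
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }  // must be added here too...
    }

    public static class OrderMapper
    {
        public static OrderDto ToDto(OrderEntity entity)
        {
            return new OrderDto
            {
                Id = entity.Id
                // ...and here -- if this mapping is missed, everything
                // still compiles, but CustomerName is silently lost
                // somewhere between the data store and the screen.
            };
        }
    }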

After coming across several applications like these, I've adopted the YAGNI principle (You Aren't Gonna Need It).  If you do need it later, then you can add it.

Technique 2: Know Your Environment
Unfortunately, Technique 1 isn't always feasible.  It is often time-consuming to go back into an application and add abstractions as you need them.  When we're asked as developers to maintain a specific delivery velocity, we're rarely given time to go back and refactor things later.  So, a more practical option comes with experience: know the environment that you're working in.

As an example, for many years I worked in an environment that used Microsoft SQL Server.  That was our database platform, and every application that we built used SQL Server.  Because of this, I didn't spend time doing a full abstraction of the data layer.  This doesn't mean that I had database calls sprinkled through the code.  What it means is that I had a logical separation of the database calls (meaning that DB calls were only made in specific parts of the library) but didn't have a physical separation (for example, with a repository interface that stood in front of the database).
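Here's a rough sketch of what I mean (the table and class names are made up): all of the SQL Server calls live in one data access class, so they aren't sprinkled through the code, but there is no repository interface standing in front of the database.

    using System.Data.SqlClient;

    // Logical separation: every database call for orders lives here.
    // Physical separation (an interface in front of this class) is
    // deliberately absent.
    public class OrderData
    {
        private readonly string _connectionString;

        public OrderData(string connectionString)
        {
            _connectionString = connectionString;
        }

        public int GetOrderCount()
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(
                "SELECT COUNT(*) FROM Orders", connection))
            {
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }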

Was this bad design?  I know several folks who would immediately say "Yes, that's terrible design."  But I would argue that it was good design for the application environment.

Out of the 20 or so applications that I built while at that company, a grand total of one application needed to support a different database (an application that pulled data from a vendor product that used an Oracle backend).  For that one application, I added a database abstraction layer (this was actually a conversion -- the vendor product was originally using SQL Server and was changed to Oracle during an upgrade).  So what makes more sense?  To add an unused abstraction to 20 applications?  Or to add the necessary abstraction to the one application that actually needed it?

Now, if I was building an application for a different environment that needed to support different data stores (such as software that would be delivered to different customer sites), I would design things much differently.  You can see a simple example of how I would design this here: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces.
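As a rough sketch of that approach (hypothetical types, in the spirit of the interfaces article linked above): the application works against an interface, and each data store gets its own implementation.

    using System;

    public class Order
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
    }

    public interface IOrderRepository
    {
        Order GetOrder(int id);
        void SaveOrder(Order order);
    }

    // One implementation per data store; the rest of the application
    // only ever sees IOrderRepository.
    public class SqlOrderRepository : IOrderRepository
    {
        public Order GetOrder(int id)
        {
            throw new NotImplementedException("SQL Server calls go here");
        }

        public void SaveOrder(Order order)
        {
            throw new NotImplementedException("SQL Server calls go here");
        }
    }

    public class OracleOrderRepository : IOrderRepository
    {
        public Order GetOrder(int id)
        {
            throw new NotImplementedException("Oracle calls go here");
        }

        public void SaveOrder(Order order)
        {
            throw new NotImplementedException("Oracle calls go here");
        }
    }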

Unfortunately, this type of decision can only be made if you know your environment well.  It usually takes years of experience in that environment to know which things are likely to change and which things are likely to stay the same.  When you walk into a new environment, it can be very difficult to figure out how to make these distinctions.

Dependency Injection: Getting Things "Just Right"
My current project is a WPF project using Prism.  Prism is a collection of libraries and guidance around building XAML applications, and a big part of that guidance is around Dependency Injection (DI).  I've been doing quite a bit of programming (and thinking) around dependency injection over the last couple months, and I'm still trying to find the balance -- the Goldilocks "Just Right" between "Too Loosely Coupled" and "Too Tightly Coupled".

Did I just say "Too Loosely Coupled"?  Is that even possible? We're taught that loose coupling is a good thing -- something we should always be striving for.  And I would venture to guess that there are many developers out there who would say that there's no such thing as "too loosely coupled."

But the reason that loose coupling is promoted so highly is that our problem is usually the opposite -- the default state of application code is tight coupling.  Loose coupling is encouraged because it isn't our instinctive reaction.

I'm currently reading Mark Seemann's Dependency Injection in .NET (Amazon Link).  This is an excellent book (disclaimer: I've only read half of it so far, but I don't expect that my evaluation will change).  Seemann describes many of the patterns and anti-patterns in Dependency Injection along with the benefits and costs (which helps us decide when/where to use specific patterns).

An important note: Seemann specifically says that the sample application that he shows will be more complicated than most DI samples he's seen.  He does this because DI doesn't make sense in a "simple" application; the value really shines in complex applications that have many functions that should be broken out.  With the functions broken out into separate classes, it makes sense to make sure that the classes are loosely coupled so that we can add/remove/change/decorate implementations without needing to modify all of our code.  This means that not all applications benefit from DI; the benefits come once we hit a certain level of complexity.
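As a quick sketch of what that loose coupling looks like (the names here are hypothetical), constructor injection declares the dependency right in the constructor, so we can swap in a different implementation -- or a decorator -- without touching the class itself.

    public interface IPricingService
    {
        decimal GetPrice(int productId);
    }

    public class InvoiceBuilder
    {
        private readonly IPricingService _pricing;

        // The dependency is visible to the compiler and to anyone
        // reading the class; any IPricingService will do.
        public InvoiceBuilder(IPricingService pricing)
        {
            _pricing = pricing;
        }

        public decimal LineTotal(int productId, int quantity)
        {
            return _pricing.GetPrice(productId) * quantity;
        }
    }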

So, now we have to decide how much dependency injection is "Just Right".  As an example, Seemann describes the Service Locator as an anti-pattern.  But Prism has a built-in Service Locator.  So, should we use the Prism Service Locator or not?  And that's where we come back to the balance of "it depends."
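For reference, here's roughly what the Service Locator looks like in use.  This is a hypothetical example (the view model and service are made up), using the Common Service Locator API that Prism's locator is built around.

    using Microsoft.Practices.ServiceLocation;

    // A made-up dependency that the view model needs:
    public interface IOrderService
    {
        int GetOpenOrderCount();
    }

    public class OrderViewModel
    {
        private readonly IOrderService _orderService;

        public OrderViewModel()
        {
            // The dependency is resolved here, inside the class, so it
            // never appears in the constructor signature -- convenient,
            // but hidden.
            _orderService = ServiceLocator.Current.GetInstance<IOrderService>();
        }

        public int OpenOrderCount
        {
            get { return _orderService.GetOpenOrderCount(); }
        }
    }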

In the application I'm working on, we are using the Service Locator pattern, and it seems to be working well in the parts of the application that use it.  I have run into a few interesting issues (specifically when writing unit tests for these classes), and it turns out that Seemann points out exactly the issues that I've been thinking about.

I don't have room to go into all of the details here, but here's one example: when using the Service Locator, it is difficult to see a class's specific dependencies.  As we've been modifying modules during our build, the unit tests sometimes break because someone added a new dependency (which the Service Locator resolves at run time).  The new dependency doesn't stop the code from compiling, but the tests fail until we update them to add and mock it.
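Continuing the hypothetical sketch from above (with Unity standing in for whatever container the application uses), a unit test has to configure the locator with every dependency the class resolves.  If someone adds another GetInstance call inside the class, this test still compiles -- it just fails at run time until the new dependency is registered.

    using Microsoft.Practices.ServiceLocation;
    using Microsoft.Practices.Unity;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderViewModelTests
    {
        private class MockOrderService : IOrderService
        {
            public int GetOpenOrderCount() { return 5; }
        }

        [TestMethod]
        public void OpenOrderCount_ReturnsCountFromService()
        {
            // The test must register every dependency that the class
            // resolves through the locator -- including ones added
            // after this test was written.
            var container = new UnityContainer();
            container.RegisterInstance<IOrderService>(new MockOrderService());
            ServiceLocator.SetLocatorProvider(
                () => new UnityServiceLocator(container));

            var viewModel = new OrderViewModel();

            Assert.AreEqual(5, viewModel.OpenOrderCount);
        }
    }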

[Editor's Note: I've published an article talking more about the pros and cons of the Service Locator pattern: Dependency Injection: The Service Locator Pattern.]

As with everything, there are pros and cons.  For the time being, I'm content with using the Service Locator for our application.  There are some "gotchas" that I need to look out for (but that's true with whatever patterns I'm using).  Seemann also notes that he was once a proponent of Service Locator and moved away from it after he discovered better approaches that would eliminate the disadvantages that he was running across.  It may be that I come to that same conclusion after working with Service Locator for a while.  Time will tell.

How Do You Do Dependency Injection?
Now it's time to start a conversation.  How do you use Dependency Injection?  What has worked well for you in different types of applications and environments?  Do you have any favorite DI references / articles that have pointed you in a direction that works well for you?

As an aside, Mark Seemann's book has tons of reference articles -- most pages have some sort of footnote referring to a book or article on the topic.  It is evident that Seemann has researched the topic thoroughly.  I'm going to try to read through as many of these references as I can find time for.

Drop your experiences in the comments, and we can all learn from each other.

Happy Coding!

2 comments:

  1. How are you unit testing your code if you aren't injecting a repository into your classes? It seems that having the ability to mock a repository is one of the primary benefits of using the repository pattern.

    1. Dependency Injection and the Repository pattern definitely facilitate unit testing. Since this article was written last year, I put together a presentation on Dependency Injection that shows unit testing with a mock repository (http://www.jeremybytes.com/Demos.aspx#DI).

      But there are a number of options if we don't use a repository. For example, if we have entity classes (just properties with no behavior), there really isn't much to unit test, so we may not need an additional layer. Another option is to have a loader/saver object that is outside of the class itself; in this case the class is passed to the loader/saver instead of the class having a reference to its own repository. This way the class can be easily tested since it is isolated from the load/save behavior.

      I recently wrote an article about whether we need a repository. Like most questions, the answer is "It depends": http://jeremybytes.blogspot.com/2013/08/do-i-really-need-repository.html
