Saturday, October 20, 2012

Abstraction: The Goldilocks Principle

Abstractions are an important part of object-oriented programming.  One of the primary principles is to program to an abstraction rather than a concrete type.  But that leads to a question: What is the right level of abstraction for our applications?  If we have too much abstraction, then our applications can become difficult to understand and maintain.  If we have too little abstraction, then our applications can become difficult to understand and maintain.  This means that there is some "sweet spot" right in the middle.  We'll call this the Goldilocks level of abstraction: "Just Right."

So, what is the right level?  Unfortunately, there are no hard-and-fast rules that are appropriate for all development scenarios.  Like so many other decisions that we have to make in software development, the correct answer is "it depends."  No one really likes this answer (especially when you're just getting started and are looking for guidance), but it's the reality of our field.

Here are a few steps that we can use to get started.

Step 1: Know Your Tools
The first step to figuring out what abstractions to use is to understand what types of abstractions are available.  This is what I focus on in my technical presentations.  I speak about delegates, design patterns, interfaces, generics, and dependency injection (among other things).  You can get information on these from my website: http://www.jeremybytes.com/Demos.aspx.  The goal of these presentations is to provide an overview of the technologies and how they are used.  This includes examples such as how to use delegates to implement the Strategy pattern or how to use interfaces with the Repository and Decorator patterns.

We need to understand our tools before we can use them.  I try to show these technologies in such a way that if we run into them in someone else's code, they don't just look like magic incantations.  If we can start looking at other people's code, we can get a better feel for how these tools are used in the real world.

A while back, I wrote an article about this: Design Patterns: Understand Your Tools.  Although that article pertains specifically to design patterns, the principle extends to all of the tools we use.  We need to know the benefits and limitations of everything in our toolbox.  Only then can we make an informed choice.

I remember seeing a woodworking show on television where the host used only a router (a carpentry router, not a network router).  He showed interesting and unintended uses for the router, using it as a cutting tool, a shaping tool, and a sanding tool.  He pushed the tool to its limits.  But at the same time, he was limiting himself.  He could use it as a cutting tool, but not as effectively as a band saw or a jigsaw or a table saw.  I understand why this may be appealing: power tools are expensive.  But in the software world, many of the "tools" that we use are simply ideas (such as design patterns or delegates or interfaces).  There isn't a monetary investment required, but we do need to make a time investment to learn them.

Step 2: Know Your Environment
The next step is to understand your environment.  Whether we are working for a company writing business applications, doing custom builds for customers, or writing shrink-wrap software, we need to understand our users and the system requirements.  We need to know which things are likely to change and which are not.

As an example, I worked for many years building line-of-business applications for a company that used Microsoft SQL Server.  At that company, we always used SQL Server.  For all of the 20 or so applications that I worked on, the data store was SQL Server.  Because of this, we did not spend time abstracting the data access code so that it could easily swap in a different data store.  Note: we did have proper layering and isolation of the data access code (meaning, all of our database calls were isolated to specific data access methods in a specific layer of the application).
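To show what I mean by that kind of isolation (a hypothetical sketch, not code from those applications): the SQL Server calls are confined to a single data access class, even though there's no provider-neutral interface on top of it.

using System.Collections.Generic;
using System.Data.SqlClient;

// All SQL Server access for customers lives here; the rest of the
// application calls these methods and never touches a connection directly.
public class CustomerData
{
    private readonly string _connectionString;

    public CustomerData(string connectionString)
    {
        _connectionString = connectionString;
    }

    public List<string> GetCustomerNames()
    {
        var names = new List<string>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT Name FROM Customer", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));
                }
            }
        }
        return names;
    }
}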

On the other hand, I worked on several applications that used business rules for processing data.  These rules were volatile and would change frequently.  Because of this, we created rule interfaces that made it very easy to plug in new rule types.
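Here's a rough sketch of that shape (hypothetical names; the real rules were more involved): each rule implements a small interface, and the processor only knows about the interface, so adding a rule type means adding a class rather than editing the processing code.

using System.Collections.Generic;

public class Order
{
    public decimal Total { get; set; }
    public bool IsExpedited { get; set; }
}

// Each volatile business rule gets its own implementation of this interface.
public interface IProcessingRule
{
    bool AppliesTo(Order order);
    void Apply(Order order);
}

public class ExpediteLargeOrderRule : IProcessingRule
{
    public bool AppliesTo(Order order)
    {
        return order.Total > 1000m;
    }

    public void Apply(Order order)
    {
        order.IsExpedited = true;
    }
}

// The processor works against the interface, so new rule types plug in
// without any changes here.
public class OrderProcessor
{
    private readonly IEnumerable<IProcessingRule> _rules;

    public OrderProcessor(IEnumerable<IProcessingRule> rules)
    {
        _rules = rules;
    }

    public void Process(Order order)
    {
        foreach (var rule in _rules)
        {
            if (rule.AppliesTo(order))
            {
                rule.Apply(order);
            }
        }
    }
}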

I should mention that these applications could have benefited from abstraction of the database layer to facilitate unit testing.  We were not doing unit testing.  In my next article, I will talk more about unit testing (and my particular history with it), and why it really is something we should all be doing.

Step 3: Learn the Smells
A common term that we hear in the developer community is "code smells".  Recognizing them basically comes with experience.  As a developer, you look at a bit of code and something doesn't "smell" right -- it just feels off.  Sometimes you can't put your finger on anything specific; there's just something that makes you uncomfortable.

There are a couple of ways to learn code smells.  The preferred way is through mentorship.  Find a developer with more experience than you and learn from him/her.  As a young developer, I had access to some really smart people on my development team.  And by listening to them, I saved myself a lot of pain over the years.  If you can learn from someone else, then be sure to take advantage of it.

The less preferred (but very effective) way of learning code smells is through trial and error.  I had plenty of this as a young developer as well.  I took approaches to applications that I later regretted.  And in that environment, I got to live with that regret -- whenever we released a piece of software, we also became primary support for that software.  This is a great way to encourage developers to produce highly stable code that is really "done" before release.  While these applications were fully functional from a user standpoint, they were more difficult to maintain and extend with new features than I would have liked.  But that's another reality of software development: constant learning.  If we don't look at code that we wrote six months ago and say "What was I thinking?", then we probably haven't learned anything in the meantime.

Step 4: Abstract as You Need It
I've been burned by poorly designed applications in the past -- abstractions that added complexity to the application without much (if any) benefit.  As a result, my inclination is to lean toward low abstraction as a starting point.  I was happy to come across the advice to "add abstraction as you need it".  This is an extremely good approach if you don't (yet) know the environment or what things are likely to change.

As an example, let's go back to database abstractions.  It turns out that while working at the company that used SQL Server, I had an application that needed to convert from SQL Server to Oracle.  The Oracle database was part of a third-party vendor product.  For this application, I added a fully-abstracted repository that was able to talk to either SQL Server or the Oracle database.  But I did this for just the one application, when I needed it.
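In rough form (hypothetical names, not the actual vendor code), the shape looked something like this: the application depends only on the repository interface, and each data store gets its own implementation that is selected at startup.

using System.Collections.Generic;

public class Invoice
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

// The rest of the application depends only on this interface.
public interface IInvoiceRepository
{
    IEnumerable<Invoice> GetInvoices(int customerId);
    void SaveInvoice(Invoice invoice);
}

public class SqlServerInvoiceRepository : IInvoiceRepository
{
    public IEnumerable<Invoice> GetInvoices(int customerId)
    {
        // System.Data.SqlClient calls against SQL Server would go here.
        return new List<Invoice>();
    }

    public void SaveInvoice(Invoice invoice)
    {
        // INSERT/UPDATE against SQL Server would go here.
    }
}

public class OracleInvoiceRepository : IInvoiceRepository
{
    public IEnumerable<Invoice> GetInvoices(int customerId)
    {
        // Oracle data provider calls against the vendor database would go here.
        return new List<Invoice>();
    }

    public void SaveInvoice(Invoice invoice)
    {
        // INSERT/UPDATE against Oracle would go here.
    }
}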

Too Much Abstraction
As mentioned above, too much abstraction can make an application difficult to understand and maintain.  I encountered an application that had a very high level of abstraction.  There were new objects at each layer (even if an object had exactly the same properties as one in another layer).  The result was that if someone wanted to add a new field to the UI (and have it stored in the database), the developer needed to modify 17 different code files.  In addition, much of the application was wired up at runtime (rather than at compile time), meaning that if you missed a change to a file, you didn't find out until you got a runtime error.  And since the files were so decoupled, it was very difficult to hook up the debugger to the appropriate assemblies.

Too Little Abstraction
At the other end of the spectrum, too little abstraction can make an application difficult to understand and maintain.  Another application that I encountered had 2,600 lines of code in a single method.  It was almost impossible to follow the logic (lots of nested if/else conditions in big blocks of code), and finding the proper place to make a change was just as hard.

"Just Right" Abstraction
My biggest concern as a developer is finding the right balance -- the Goldilocks Principle: not too much, not too little, but "just right".  I've been programming professionally for 12 years now, so I've had the benefit of seeing some really good code and some really bad code (as well as writing some really good code and some really bad code).

Depending on what kind of development work we do, we can end up spending a lot of time supporting and maintaining someone else's code.  When I'm writing code, I try to think of the person who will be coming after me.  And I ask myself a few key questions.  Will this abstraction make sense to someone else?  Will this abstraction make the code easier or harder to maintain?  How does this fit in with the approach used in the rest of this application?  If you are working in a team environment, don't be afraid to grab another developer and talk through a couple of different options.  Having another perspective can make the decision a lot easier.

The best piece of advice I've heard that helps me write maintainable code: Always assume that the person who has to maintain your code is a homicidal maniac who knows where you live.

One other area that will impact how much abstraction you add to your code is unit testing.  Abstraction often helps us isolate code so that we can make more practical tests.  I'll be putting down my experiences and thoughts regarding unit testing in the next article.  Until then...
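As a small preview (a hypothetical sketch, not from the upcoming article): when a calculation depends on an interface instead of a concrete database class, a test can hand it an in-memory fake and verify the logic without a database.

using System;
using System.Collections.Generic;
using System.Linq;

public interface IPaymentReader
{
    IEnumerable<decimal> GetPayments(int accountId);
}

// The calculation depends on the abstraction, not on a database.
public class BalanceCalculator
{
    private readonly IPaymentReader _reader;

    public BalanceCalculator(IPaymentReader reader)
    {
        _reader = reader;
    }

    public decimal GetBalance(int accountId, decimal startingBalance)
    {
        return startingBalance - _reader.GetPayments(accountId).Sum();
    }
}

// In a test, a fake stands in for the real data access code.
public class FakePaymentReader : IPaymentReader
{
    public IEnumerable<decimal> GetPayments(int accountId)
    {
        return new List<decimal> { 100m, 50m };
    }
}

public static class BalanceCalculatorTest
{
    public static void Main()
    {
        var calculator = new BalanceCalculator(new FakePaymentReader());
        decimal balance = calculator.GetBalance(1, 500m);
        Console.WriteLine(balance == 350m ? "Pass" : "Fail");
    }
}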

Happy Coding!
