Tuesday, April 30, 2013

Explained: Properties in .NET

Properties in .NET seem a bit curious when you're new to the framework.  They look like data, but that's not quite how they behave.  Things can get even more confusing when using the automatic property syntax (which is awesome if you know what it's actually doing).  So, let's take a bit of a deeper dive to see exactly what's going on in a .NET property.

Note: A video version of this article is also available on YouTube: JeremyBytes - C# Properties.

Property with a Backing Field
We'll create a simple "Person" class to explore properties.  Here's a typical property that has a backing field:
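Something like this:

    private string firstName;

    public string FirstName
    {
        get { return firstName; }
        set { firstName = value; }
    }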


First, we have a private backing field ("firstName").  Then we have our public property ("FirstName") that has both a getter and a setter to access the backing field.  The getter returns the value of the backing field, and the setter sets the value of the backing field.

Note: You can easily create a property with a backing field using the "propfull" snippet in Visual Studio.  To do this, go to a blank line and type "propfull".  IntelliSense will show this as "Code snippet for property and backing field".  When you press Tab, it stubs out the code for you.  Then you get 3 highlighted items.  The first ("int" by default) is the type.  Just update this with the type you want ("string" in our case), and press Tab.  This will take you to the name of the backing field.  Update the name and press Tab again to go to the name of the property.  When you're done, just hit Enter.  Snippets are awesome for this kind of stuff.

This syntax looks a bit strange at first, but it's simply a shorthand way of encapsulating a field.  This will become a bit clearer when we compare it to another language.

Get / Set Methods
Java has a different paradigm for this type of encapsulation.  The data is still hidden in a private field, but a pair of methods (get / set) are used to access this data.  Here's a phone number "property" that uses this type of syntax:
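It looks something like this:

    private string phoneNumber;

    public string get_PhoneNumber() {
        return phoneNumber;
    }

    public void set_PhoneNumber(string value) {
        phoneNumber = value;
    }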


(Note: this is still C# code, but it's written in the Java property style -- I've also used the "curly braces on the same line" format, but we already know that this doesn't really matter).  This code makes things a bit more clear.  We have encapsulated our data with a pair of accessor methods.

By encapsulation, I mean that we have hidden our data ("phoneNumber") and provided specific methods ("get_PhoneNumber" and "set_PhoneNumber") that we can use to interact with the data from the outside world.  We could easily create a read-only property by simply omitting the "set" method.  Our data is safely hidden behind a public interface, and we can be confident that it cannot be modified without our class knowing about it.

.NET Properties are Methods
It turns out that properties in .NET also use get and set methods to control access to the data.  The difference is that these methods are hidden from our view.  Let's look at the IL that is generated by the code we've seen so far.

We'll use ildasm.exe to look inside our assembly.  If you want to see how to add IL DASM to your Visual Studio tools, take a look at this article: Book Review: Professional .NET Framework 2.0 (yes, the instructions are part of a book review I did).

Here's the overview of our class in IL DASM:
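IL DASM shows the assembly as a tree.  For our Person class, it looks roughly like this (the exact formatting varies a bit by version):

    Person
       firstName : private string
       phoneNumber : private string
       .ctor : void()
       get_FirstName : string()
       get_PhoneNumber : string()
       set_FirstName : void(string)
       set_PhoneNumber : void(string)
       FirstName : instance string()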


There are a couple of interesting things here.  First, notice our backing fields (with the teal diamonds).  Both "firstName" and "phoneNumber" are private string fields.  This is exactly what we would expect.

In our methods (the purple squares), we see the "get_PhoneNumber" and "set_PhoneNumber" methods that we created ourselves.  But we also see "get_FirstName" and "set_FirstName".  These methods were generated by the compiler from our property.

The last item (the red triangle) is our "FirstName" property.  Here's what it does:
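Here's the IL (namespaces trimmed for readability):

    .property instance string FirstName()
    {
      .get instance string Person::get_FirstName()
      .set instance void Person::set_FirstName(string)
    } // end of property Person::FirstName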


This says that when someone tries to "get" the property, the method "get_FirstName" should be called.  When someone tries to "set" the property, then the method "set_FirstName" should be called.  This is how our property gets wired up to those compiler-generated methods.

If we compare the getters for our two properties, we see that they look almost exactly the same:
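Here they are side by side (namespaces trimmed):

    .method public hidebysig specialname instance string
            get_FirstName() cil managed
    {
      // Code size       7 (0x7)
      .maxstack  8
      IL_0000:  ldarg.0
      IL_0001:  ldfld      string Person::firstName
      IL_0006:  ret
    } // end of method Person::get_FirstName

    .method public hidebysig instance string
            get_PhoneNumber() cil managed
    {
      // Code size       7 (0x7)
      .maxstack  8
      IL_0000:  ldarg.0
      IL_0001:  ldfld      string Person::phoneNumber
      IL_0006:  ret
    } // end of method Person::get_PhoneNumber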


When we look at the method body, we see "ldarg.0" (load argument) -- this is at the beginning of every instance method.  It pushes "this" (which is argument 0) onto the stack so that it can be used in the method.  Next, we have "ldfld" (load field) -- this pushes the value of the specified field onto the stack.  We can see that each method pushes "firstName" or "phoneNumber" as appropriate.

Finally, we have the "ret" (return) -- this returns the value on the top of the stack, which happens to be the field that we just pushed.

Note: This sample is built as "Release".  If you build as "Debug", then you may see some "nop" (no operation) instructions.  These are placeholders so that you can set breakpoints on the curly braces in the source code.

If we look at the method declaration (at the top), we see that these two methods are *almost* identical.  The difference is that "get_FirstName" has the "specialname" attribute specified.  This means that "get_FirstName" is a special method that cannot be called by user code.  If you try to put "person.get_FirstName()" in your code, you'll just get a compiler error.  We aren't allowed to access this method directly.

Other than that, we can see that there's no real difference between our manual getter (for the "phoneNumber" field) and the compiler-generated getter (for the "FirstName" property).

Automatic Properties
In C# 3.0 (.NET 3.5), we got some new syntax for creating properties: automatic properties.  Here's a sample:
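One line is all it takes:

    public string LastName { get; set; }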


The automatic property syntax is simply a shortcut.  If we don't need to do anything special in the getter or setter (more on this below), then this shorthand saves us from having to type out the standard implementation of the getter and setter (it also saves us from having to create a separate backing field).

Note: Automatic properties can be created with the "prop" snippet.  This only has 2 editable items: the type and the property name.  This snippet doesn't save nearly as much typing as "propfull", but it can still be useful.

So, let's compare our full property ("FirstName") to the automatic property ("LastName") to see what the compiler builds for us.  First, an overview:


We now have a new field: "<LastName>k__BackingField".  So, even though we don't create a backing field in our code, the compiler generates one for us.  It uses the strange name so that there won't be any potential naming conflicts with non-generated code.

Here's the IL for the "firstName" field and the "<LastName>k__BackingField":
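Here's the gist (namespaces trimmed):

    .field private string firstName

    .field private string '<LastName>k__BackingField'
    .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 )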


There are no surprises for "firstName".  It just shows up as a private string field.  Notice that the first line of the LastName backing field looks similar: it's also a private string field.

The next line is an attribute noting that this field was generated by the compiler (ildasm actually emits a bit more output than is shown here).

Finally, let's compare the getters for these two properties:
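Here's the compiler-generated getter for "LastName" (namespaces trimmed):

    .method public hidebysig specialname instance string
            get_LastName() cil managed
    {
      .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 )
      // Code size       7 (0x7)
      .maxstack  8
      IL_0000:  ldarg.0
      IL_0001:  ldfld      string Person::'<LastName>k__BackingField'
      IL_0006:  ret
    } // end of method Person::get_LastName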


Here we see that the getters are exactly the same (except for the CompilerGeneratedAttribute).  This shows us that if we use automatic properties, we get (almost) exactly the same code as if we use a property with a backing field.

One thing to note about automatic property syntax is that we must have both the "get;" and the "set;".  If you want to make a read-only property, this is easy to get around: just make the setter private.  For example:
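Using "LastName" again:

    public string LastName { get; private set; }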


This has the effect of making the property read-only to the outside world.  It can only be set by the class internally.

Why Don't We Always Use Automatic Properties?
So, since we see that the property with the backing field and the automatic property generate the same IL, why would we ever want to use a property with a manual backing field?

There are a couple of answers to this.  First, sometimes we want to do some more work in the getter (other than just retrieving the value of the backing field).  For example, we may want the getter to supply a default value for a property if the backing field is null.  We may also want to use Property Injection (see: Dependency Injection: A Practical Introduction).
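For example, here's a getter with a fallback value (the names are hypothetical):

    private string displayName;

    public string DisplayName
    {
        get { return displayName ?? "<not set>"; }
        set { displayName = value; }
    }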

More frequently, we want to do more work in the setter.  We can put validation code in the setter that would reject an invalid value.  Or we can put security code in the setter that would only update the backing field if you have proper authorization.  Or we may want to fire an event to notify other objects that the value has changed.
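For example, here's a setter with a simple validation rule (again, hypothetical):

    private int age;

    public int Age
    {
        get { return age; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value", "Age cannot be negative");
            age = value;
        }
    }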

This last scenario is very common in the XAML and WinForms databinding world.  "INotifyPropertyChanged" is an interface that specifies an event that can be fired when data is changed programmatically.  This notifies the UI that something is different and it needs to update the values that are displayed in the controls.  Here's an example of this type of setter:
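Something like this (the "RaisePropertyChanged" helper is assumed to raise the PropertyChanged event with the given property name):

    private string lastName;

    public string LastName
    {
        get { return lastName; }
        set
        {
            if (lastName == value)
                return;

            lastName = value;
            RaisePropertyChanged("LastName");
        }
    }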


Here we see that the setter has a bit more code in it.  First, we check to see if the incoming value already matches what's in the backing field.  If the data is unchanged, we "return" -- short-circuiting the rest of the setter.  Otherwise, we set the backing field and then raise the notification event.

Optimization Note: the "if" statement is not required; it just saves the firing of the notification event.  But conditionals also have a little bit of overhead, so you may want to re-evaluate whether the conditional is advisable based on your particular application.

So, we see that there are a few reasons why we may want to supply our own backing field and get/set implementation.

Wrap Up
The .NET property syntax is a bit odd when you first see it, especially if you are coming from a language that doesn't have this type of property implementation.  As we've seen, properties are just sets of methods that help us encapsulate our data.

Whether we create our own backing fields or use automatic properties depends on how we're using them.  For example, a DTO (Data Transfer Object) is just used to move data around.  For these, we can use automatic properties.  In contrast, if we have a View Model with properties that need to be databound, we'll probably want to have "full" properties so that we can fire notification events when the values are changed in the setters.

Properties are everywhere, and they are incredibly useful.  Hopefully, this has given you a better insight into what's going on "under the covers."

Happy Coding!

Thursday, April 25, 2013

Where Do Curly Braces Belong?

A question came up in one of my sessions this past weekend (Clean Code: Homicidal Maniacs Read Code, Too):

Do you put opening curly braces on the same line or a separate line?

(Let the religious war begin...)  So, I gave the answer, "It usually doesn't matter as long as you are consistent."  There is an exception for one particular language (which I'll mention below).

The Options
So, let's start by looking at the problem.  With any "curly brace" language, we need to decide whether we put the opening curly brace on the same line as the method or conditional, or if we put the curly brace on a separate line.  "Curly brace" languages are basically anything that uses C-style syntax, such as C, C++, Java, C#, and JavaScript (there are others as well).

Option one is to put the curly braces on the same line.  Here's a sample:
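Using made-up names:

    public void SaveTimeCards(List<TimeCard> timeCards) {
        foreach (var timeCard in timeCards) {
            if (timeCard.IsDirty) {
                repository.Save(timeCard);
            }
        }
    }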


Option two is to put the curly braces on a separate line.  Here's a sample:
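And the same code:

    public void SaveTimeCards(List<TimeCard> timeCards)
    {
        foreach (var timeCard in timeCards)
        {
            if (timeCard.IsDirty)
            {
                repository.Save(timeCard);
            }
        }
    }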


Each of these styles has its pros and cons.  So, let's take a look at the advantages, and I'll let you know my personal preference.

Camp One: Opening Curly Braces on the Same Line
The biggest advantage to having the opening curly brace on the same line is to save vertical space.  Vertical density is an important topic when talking about easy-to-read code, so this is definitely something that we should consider.

By having the curly brace on the same line, we reduce the (already short) code sample above by 3 lines.  This means that we have more space on the screen to see additional code.

This is also welcome in books and blogs that include code samples since more code can fit in less vertical space.

Camp Two: Opening Curly Braces on a Separate Line
The biggest advantage to having the opening curly brace on a separate line is that the curly braces will always line up visually (assuming that we are also using good horizontal spacing in our code).  This makes it easy to visually spot the beginning of the code block -- we just need to find the end brace and then scan up until we see an opening brace in the same column.

Camp Three: K&R
Now, there is a third camp that I should mention just because you may run across it.  This is the K&R method (from the book The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie - Amazon link).  I recently ran across a curly brace discussion, and they decided that no matter what, K&R is wrong.

I'll have to admit that I didn't know what the K&R method was.  When I'm reading books, I generally don't pay attention to curly brace layout.  As mentioned above, books often use the "same line" version to save space.  So, I went back and took a look through my K&R and was a bit surprised at what I saw.

Here's a sample (based on the same code block from above):
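Using our made-up method again:

    public void SaveTimeCards(List<TimeCard> timeCards)
    {
        foreach (var timeCard in timeCards) {
            if (timeCard.IsDirty) {
                repository.Save(timeCard);
            }
        }
    }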


For methods, the opening curly brace is on a separate line.  But for other blocks (such as the "if" statement), the curly brace is on the same line.

I think that we can all agree that consistency should be high up on the list.  So, we'll take K&R off the list of possible choices.

Pick One & Be Consistent
Ultimately, which style we choose doesn't really matter.  What's important is that we pick one and stay consistent.

As with everything else in the development world, we need to find the balance.  That's why I don't spend time in my session talking about this particular topic.  People on both sides are extremely passionate that their way is "the one true way".  But really, there are more important things that we should be discussing.

One Exception: JavaScript
There is one exception to the "it doesn't matter" rule: JavaScript.  JavaScript has a quirk in the language called semicolon insertion.  This means that if the JavaScript parser thinks that you forgot to put a semicolon at the end of a line, it will put one in for you.  This can cause problems.
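The classic example is a "return" followed by an object literal on the next line (a made-up function):

    function getStatus() {
        return
        {
            ok: true
        };
    }

The parser inserts a semicolon right after "return", so this function quietly returns undefined instead of the object.  Moving the opening brace up to the "return" line gives you the behavior you expect.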

For more information on this, just do some searches for "JavaScript semicolon insertion", and you'll find tons of results.  This is also mentioned in JavaScript: The Good Parts by Douglas Crockford (Amazon Link + Jeremy's Review).  Of course, he puts this into a section called "The Awful Parts."

So, for JavaScript, we should put our opening curly braces on the same line.  Otherwise, we may get unexpected behavior from our code.

Update 2020: Another Exception: Go
Go uses a C-style syntax with braces for blocks. The language enforces style #1: opening brace on the same line. It also enforces that braces are required for "if" blocks and similar structures, even if there is only one line inside the block.
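For example (a trivial program):

    package main

    import "fmt"

    func main() {
        count := 1
        if count > 0 { // moving this brace to the next line is a compile error
            fmt.Println("positive")
        }
    }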

Personal Choice
Okay, so you've read through all of this, and I still haven't told you my preference.  I like to have the curly braces on a separate line.  There are 2 reasons for this.

First, I like that the curly braces line up.  This helps me see where the blocks are.  (The same was true when I programmed Delphi (Object Pascal) -- I like to have "Begin" and "End" statements line up.)

Second, I'm a lazy programmer.  I constantly let Visual Studio reformat my code for me (Ctrl+K, Ctrl+D).  The defaults for Visual Studio are to put the opening curly brace on a separate line (unless both curly braces are on the same line -- but that's an entirely different discussion).  As a side note, if you get the "Productivity Power Tools" for VS2012, there is an option to have it reformat your code automatically when you save a file; I don't have this turned on.

You can change the way that Visual Studio formats your code; there's a whole slew of options available.  This is why I put this into the "lazy programmer" category.  For the most part, I run with the Visual Studio defaults (unless there is something that really bugs me).  I have a history of moving around to a variety of machines, so I usually have as little customization as possible so that they will all behave the same.  (I know, there are ways to export profile settings now, but I'm still stuck in my old ways.)

Wrap Up
I always find it interesting to have these types of discussions with other developers.  I like to see who takes it seriously and who sees it as an arbitrary choice.  And there are plenty of other people in between.

Pick your battles.  In a team environment, we need to make sure that we all work in a consistent manner.  Ultimately, where the curly braces go is pretty unimportant compared to other things -- like unit testing strategies.

So, keep your curly braces consistent.  And make sure you're using "the one true way." <smirk/>

Happy Coding!

Wednesday, April 24, 2013

Cool Stuff: Building a DSL

Code Camp is a great place to expand your thinking.  Whenever I attend, I like to go to sessions on topics that I wouldn't normally seek out on my own.  I've already dedicated the day to attending, so I might as well try out some new things.  (I also go to meet other developers -- find out what works, what doesn't work, and what's really cool.)

This past weekend during Desert Code Camp, you may have seen "MIND BLOWN" come across my Twitter feed (and if not, then you can follow me: @jeremybytes). This came from the session "Building a DSL with an OData Source".

I didn't attend this session because I had a particular interest in the topic -- I've never worked with an OData feed, and I've always thought DSLs (Domain Specific Languages) were complicated (more on this in a bit).  I attended because it was presented by a friend of mine from the dev community: Barry Stahl (blog: Cognitive Inheritance; twitter: @bsstahl).  I was amazed at what I saw.

The Scenario
Barry showed how to query the OData feed from StackOverflow.  StackOverflow is a good choice since pretty much every developer is familiar with the site.  To query the data in its "native" state (meaning, after Visual Studio has a chance to make a service proxy based on the Atom feed), the original query looks like this:
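Something along these lines (the identifiers here are assumptions based on the description below):

    var questions = from post in context.Posts
                    where post.Parent == null
                       && post.AcceptedAnswerId != null
                       && post.CreationDate > DateTime.UtcNow.AddDays(-30)
                    select post;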


This query gets questions that were asked in the last 30 days that have at least 1 accepted answer.  There are a lot of details that you need to know to write this query.  For example, "Parent == null" means that this is a root element (meaning, a question).  And, "AcceptedAnswerId != null" means that there is at least one accepted answer.

The goal was to create a DSL that would save the developer from these details and make this query readable and discoverable.  Like this:
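Barry's actual DSL differs in its details, so treat this as a made-up approximation of the flavor:

    var stack = new FluentStack();

    var questions = stack.Questions
                         .WithAcceptedAnswers()
                         .AskedInLast(TimeSpan.FromDays(30));

(The "Questions" property comes from the session; "WithAcceptedAnswers" and "AskedInLast" are invented stand-ins.)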


This version is much easier to read and hides the details of the original API.  You can get the slides and code download for this session from Barry's site: DCC 2013.1.

The Mind Blowing Realization
What was amazing about this presentation is how easy this is to implement.  I had always thought of a DSL as something complex to build.  This put it into the "you probably don't need to do this" category in my mind -- best left for the big brains who work on languages and compilers and that sort of thing.

But Barry showed that building a DSL can be exactly the opposite -- something easy and obvious.  He used a combination of helper classes and properties along with extension methods to shield the user of the DSL from the underlying details.

For example, the "Questions" property (from the FluentStack object above) wraps the "Posts where ParentID == null" part of the original query.  The other elements are similar wrappers that make the API easy to use and discoverable (through IntelliSense).

Very simple.  Extremely approachable.  Amazingly cool.

Wrap Up
I never would have thought of this approach on my own.  I still don't know if I'll end up ever working with an OData feed.  But I do know that I'll be using some of these techniques to make complex APIs easier to use.

Barry is really smart and a great presenter (plus, he's a good guy to sit around and chat with).  He travels to Code Camps around the country, so be sure to look for him at an event near you.

Code Camp is a great way to expand your horizons, learn new things, and maybe even be surprised from time to time.

Happy Coding!

Book Review: Building Windows 8 Apps with C# and XAML

I recently finished reading Building Windows 8 Apps with C# and XAML by Jeremy Likness (Amazon link).  Unfortunately, I am unable to recommend this book.

Publisher Gambling
I'm going to chalk the primary issues up to publisher gambling.  The technology industry moves quickly (as we all know).  Technical books that are based on a new/updated technology have a very short shelf life -- especially when we have new versions of tools coming out every 9 months.  This means that technical books are written with pre-release versions, and sometimes things change in the final release.

Building Windows 8 Apps with C# and XAML was released in October 2012, which means that it was written with pre-release versions of Windows 8 as well as Visual Studio 2012.  Since I haven't worked with the pre-release tools, I'm not sure how much has changed, but knowing that Jeremy Likness is a good speaker, author, and consultant, I have to give him the benefit of the doubt.

This was a gamble that the publisher lost.

What Turned Me Off
The last thing I want to do is rip a book apart.  There is a lot of work that goes into working with the technology, writing it down in a way that makes sense, going through the editing process, and releasing it into the wild.  So, I'll just give one thing I had difficulty with.

The first difficulty was with the example in Chapter 2.  This was a "follow along at home" sample.  The scenario was to create a Windows 8 Store Application that did 3 things:
  1. Use the webcam to take photos
  2. Save photos to the Pictures Library (or SkyDrive, or wherever else)
  3. Act as a Share Target for pictures
If you've been following my blog, you know that one of the things I'm most excited about in Windows 8 is the Sharing system (see Steal My Windows 8 Idea: Share with Grandma from last December).  So, I was very glad to get "straight to the point" with a sample I was interested in.

I followed along step by step.  The application would build and deploy successfully.  I could take photos with the webcam.  But I couldn't save, and the app wasn't showing up as a Share Target.  I did what you're supposed to do in this situation: I double-checked the steps to see if I missed anything.  I took a second look at the configuration screens to see if there was anything different or obviously wrong.

I finally figured out what was wrong with the Share Target: In the application manifest on the "Declarations" page, I needed to indicate the "Supported file types" (which is empty by default).  I immediately went back to the book to see if I had missed something.  The section that talked about adding the Share Target on the Declarations page did not mention needing to change supported file types.  There was a screenshot of the Declarations page, but unfortunately, this value was not shown (it was hidden under the Output window).

I checked the "Supports any file type" box, and the Share Target functionality started to work.

For the Save functionality, things were a bit trickier.  I was getting a buffer overflow exception at runtime.  Again, I checked the code against the code in the book and everything matched.  I knew that this was not something I would be able to debug myself (since I'm not familiar with the WinRT libraries), so I went to the code download.

On the download site, there was a note that the ImageHelper application (the one I was working with) had a buffer overflow problem that had been updated.  So, I downloaded the code and things worked from there.

First Impressions are Important
I'll have to admit that this experience put me off for the rest of the book.  As I read through the other examples, I had doubts about whether they were accurate.  (I know logically that they were okay; only 1 other project was mentioned on the code download site as needing updates -- but this doesn't change the "gut feeling" based on experience.)

I have a few other concerns with the book regarding how deeply (or not) specific topics were covered, but I'd rather not focus on any more negatives.

Wrap Up
Unfortunately, I'm not able to recommend Building Windows 8 Apps with C# and XAML.  I'm still looking for a good reference on the topic.  If you have any book recommendations, feel free to leave them in the comments.  Now, I'm off to the next book in the stack.

Happy Coding!

Tuesday, April 16, 2013

Dependency Injection: The Service Locator Pattern

The Service Locator pattern (or anti-pattern) is one of the many Dependency Injection patterns that allow us to create loosely-coupled code.  I mentioned this pattern just a bit in my review of Mark Seemann's excellent book Dependency Injection in .NET (Jeremy's Book Review).

Seemann refers to the Service Locator as an anti-pattern -- meaning that we probably don't want to use it in most cases because there are better patterns out there.  But he also notes that he used the pattern for many years before coming to grips with its shortcomings.

As mentioned in the book review, I have also used the Service Locator, and I was a little surprised when it was referred to as an anti-pattern.  But in reading Seemann's description of the shortcomings, I had no choice but to agree.  I had actually come across these shortcomings in my own code, but in that environment, it made sense to continue to use the pattern.

So let's compare the Service Locator to one of the other DI patterns (specifically, the Constructor Injection pattern).  This will point out the problems that I ran across -- which also happen to be the problems that Seemann mentions in his book.

Constructor Injection
We'll start by taking a look at the Constructor Injection pattern.  For a detailed description of Constructor Injection, refer to my presentation "Dependency Injection: A Practical Introduction".

The basic idea of Constructor Injection is that we pass the dependencies that a class needs as parameters in the constructor.  Here's an example:
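Something like this (this is the CatalogViewModel class that shows up in the unit tests below):

    public class CatalogViewModel
    {
        private readonly IPersonService _personService;
        private readonly CatalogOrder _currentOrder;

        public CatalogViewModel(IPersonService personService, CatalogOrder currentOrder)
        {
            _personService = personService;
            _currentOrder = currentOrder;
        }
    }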


In this constructor, we are injecting an IPersonService and a CatalogOrder.  The constructor then assigns these to local fields that can be used by the class.  Whatever creates this object is responsible for supplying these dependencies.  Generally, this is done in the composition root and often involves a Dependency Injection container.

Now, let's compare this to the Service Locator.

Service Locator
The Service Locator varies a bit from Constructor Injection.  As mentioned above, with Constructor Injection, something else is responsible for resolving the dependencies that are needed by the class.

In contrast, when using the Service Locator, the class itself is responsible for resolving its dependencies by asking for them from the Service Locator.  This can be a DI container or some other object or method that can return dependencies.  In our sample, we'll use a DI container (Unity).

Here's a sample constructor:
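Something like this (the fields are the same as before; the difference is how they get populated):

    public CatalogViewModel(IUnityContainer container)
    {
        _personService = container.Resolve<IPersonService>();
        _currentOrder = container.Resolve<CatalogOrder>();
    }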


Notice that the constructor is passed the entire DI container (although there are other ways to get the service locator into the object).  Then the object uses the service locator to pick out its own dependencies.  This takes the control away from the composition root and gives it to the object itself.

Note that just because you are using a DI container doesn't mean that you are using the Service Locator pattern.  The Service Locator is defined by how the container is used (i.e., whether the object is resolving its own dependencies).

On the surface the Service Locator appears to hit all of the points of Dependency Injection: it allows for extensibility; it allows for the swapping out of test doubles; it allows for late binding.  This is why it is considered by many to be a valid DI pattern.

But it has a major shortcoming: It hides the dependencies that are being used.

This might not seem like a big problem, but let's take a look at some unit tests to show the shortcoming in action.

Initial Unit Tests
We'll start by unit testing the constructor that uses Constructor Injection.  Here's our test:
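Something like this (MSTest syntax; the test name is just for illustration):

    [TestMethod]
    public void CatalogViewModel_OnCreation_IsNotNull()
    {
        var viewModel = new CatalogViewModel(_personService, _currentOrder);
        Assert.IsNotNull(viewModel);
    }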


Notice here that when we instantiate our CatalogViewModel, we must provide both the IPersonService (_personService) and the CatalogOrder (_currentOrder).  If we didn't provide both of these parameters, the test code would not build.  As a side note, both of these dependencies (_personService and _currentOrder) are mock objects that are created in the test setup (not shown).

This test builds and passes with our current code.

Now let's look at the same test for the constructor that uses the Service Locator:
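Again, the test name is just for illustration:

    [TestMethod]
    public void CatalogViewModel_OnCreationWithLocator_IsNotNull()
    {
        var viewModel = new CatalogViewModel(_locator);
        Assert.IsNotNull(viewModel);
    }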


In this test, we must pass in a populated Unity container (called "_locator" in this case).  As noted in the comments, our "_locator" has both an IPersonService and a CurrentOrder registered in its catalog (these are added in the test setup).

This test builds and passes with our current code.

The Problem with Hiding Dependencies
With Constructor Injection, we can easily see that we have 2 dependencies just by looking at the constructor.  However, with the Service Locator, we don't have visibility to the dependencies from the outside (meaning, looking at the public API signatures of the class).

Let's see what happens when we add a 3rd dependency.  Our Constructor Injection implementation now looks like this:
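Along these lines (I'm using "User" as a stand-in type for the new dependency):

    public CatalogViewModel(IPersonService personService,
        CatalogOrder currentOrder, User currentUser)
    {
        _personService = personService;
        _currentOrder = currentOrder;
        _currentUser = currentUser;
    }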


The first thing that happens is that we get a build failure of our unit test.  Note the error on the constructor call:
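Something like this:

    // compiler error: no constructor takes 2 arguments
    var viewModel = new CatalogViewModel(_personService, _currentOrder);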


Build failures are good.  They give you immediate feedback that something is wrong.  In this case, we can see that we have a missing dependency since the constructor wants a 3rd parameter.  We would see this same error throughout our code base wherever we try to construct a CatalogViewModel.  And we know exactly what to fix.

But what about the Service Locator?  Let's add another dependency:
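Again with "User" as a stand-in type:

    public CatalogViewModel(IUnityContainer container)
    {
        _personService = container.Resolve<IPersonService>();
        _currentOrder = container.Resolve<CatalogOrder>();
        _currentUser = container.Resolve<User>();
    }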


If we rebuild our application, everything builds fine -- including the existing unit test for the constructor that uses the Service Locator.  The problem is when we run the tests.

Since our tests do not know about the 3rd dependency, the test setup does not load it into the Unity container.  So, this test will throw an exception (when it tries to resolve the "CurrentUser" from the container).  In addition, all other tests we have that are creating this object will fail with the same exception.

This is the equivalent of a runtime error.  Whenever we have a choice between a compile-time error and a runtime error, we should pick the compile-time error.

Wrap Up
As we've seen, the Service Locator pattern is a valid DI pattern in that it addresses the issues that we are trying to resolve with Dependency Injection.  However, it does have a flaw: it keeps the actual dependencies hidden.

When we have a choice between the Service Locator and Constructor Injection, we should favor Constructor Injection.  Constructor Injection makes the dependencies completely obvious to whoever is using the class.  And if the dependencies need to change, the result is build failures -- we don't have errors accidentally slipping into compiled code.

I have used the Service Locator pattern in production code.  And it did work.  But I did run into the issue described here.  When we added another dependency to an object, the unit tests would build successfully but then all fail (it's a bit disconcerting when you have 30 unit tests fail at once).  The test failures were not a problem with the code; they were a problem with the test setup.  Since the dependency was hidden by the Service Locator, it wasn't until we received these failures that we realized we needed to go in and add the dependencies in the setup.

So, if you're willing to live with the shortcoming, you can consider the Service Locator a DI pattern.  For everyone else, consider it an anti-pattern.  As with everything, we need to understand the pros and cons of each tool that we use so that we can make informed choices.  This is how we build maintainable, extensible code.

Happy Coding!

Wednesday, April 10, 2013

The Key to My Success: User Interaction

A couple months ago, I wrote about the importance of development teams and the business areas working together (Development and the Business - A Partnership).  This is a topic that I've been thinking about quite a bit recently as I review the applications I've been involved with.

The applications in my career that I consider to be most successful are the ones where I was constantly involved with my users.  Other projects with limited user communication have also resulted in successful applications, but the process was extremely painful.  A couple of applications fall into the failure category, and these happen to be the applications where I had zero user contact.

This lines up with two of the four points in the Agile Manifesto:

o Individuals and interactions over processes and tools
o Working software over comprehensive documentation
o Customer collaboration over contract negotiation
o Responding to change over following a plan

These are elaborated in the 12 principles behind Agile.

So, let's take a look at some of the successful and not-so-successful projects I've worked on and why I consider user interaction to be a key component in what has worked well.

Just a quick note about my terminology: I would normally refer to people by name (since they are people).  But to keep things (semi-)anonymous, I will refer to someone as the "user" or "super user".  This is simply because I can't refer to them as "Frank" or "Janice" here.

My First Project - The One that Wouldn't Leave
I actually got into development as a profession almost by accident (I'll say almost because I wanted to be a developer and was looking for ways to break in, but this opportunity came up by chance).  I was on the user side of a multi-year project to build a system that would consolidate information from different departments around the company.  During the pilot phase (that is, figuring out what we wanted the development team to build for us), I got to know all of the people in the various departments that would eventually be using the system.  We were collecting the information manually and figuring out what we needed for incoming fields, approval processes, and outgoing reporting.

During this process, the manager who was working on the developer side of the project saw that I had some qualities that could make me a good developer.  So, about a year into the project, I was offered a junior developer position.  And that's how I got started.

Now, originally, the project had a team of about 6 people, and I was doing the types of things that you would give to a junior developer at the time: slinging around a bit of HTML and learning how to write Crystal Reports.

Fast forward a bit: the project is released, the senior developers go on to other stuff, and I end up as primary support for the application.  This made sense since a lot of the issues that came up were odd behavior related to reporting or some of the SQL behind it, and I could always ask questions of the original developers.

But it turned out to make sense for a much bigger reason: I knew the users.  Since I had spent so much time working with them on what the system needed (i.e., gathering requirements), I had built a relationship with them.  And I ended up supporting this application for close to 10 years (until I left that company).

A Couple Key Wins
The application had 2 primary pieces: a Windows application used for data entry / administration and a web application that was used for reporting.  I was given the task of re-writing the data entry application at one point, and I took that as a chance to add some things to help out my users.

The Tedious Process
The data entry application did not have any reporting at all (you had to go to the web app for that).  One of the business areas handled approvals for the items that were entered into the system.

I visited their office one day to talk about some things in the application. While they were working, something caught my eye. And it was something I hadn't anticipated.

In watching their process, I saw that they printed out a copy of an item after it was approved; this copy was then put into their files. And the current system was a bit tedious:
  1. Open the data entry application to get the list of items to be approved.
  2. Approve an individual item.
  3. Open the web application.
  4. Navigate to the Search screen.
  5. Use the search fields to locate the item they just entered (and based on similar names, this was sometimes tricky).  This was generally the most frustrating part of the process.
  6. Print out the item from the web application.
  7. Go back to the data entry application to approve the next item.
This is what these users did for 3-4 hours per day.  I like to make jobs easier.  So when I saw this, I offered to add a Print button to the approval screen. This would take me about 5 minutes to do (I already had the code), and it would make their process much easier:
  1. Open the data entry application to get the list of items to be approved.
  2. Approve an individual item.
  3. Print out the item from the current screen.
  4. Go back to the list to approve the next item.
This completely eliminated the most frustrating part of the process and made their work much easier.

The reason I found this problem is because I actually watched people using the system.

Eliminating Data Entry
For some departments, the new system eliminated the need for them to maintain their own data.  For other departments, the new system would not be a replacement.  As an example, a department had a financial system that they used for resource planning and scheduling.  All the new system cared about was one small piece of their existing data.  This resulted in a bit of duplicate data-entry (at least to start with).

Once the new system was running and stable, we started to look at where we could eliminate the duplicate data entry.  Again, because I knew the users and spoke with them regularly, I could prioritize based on which of the business areas would get the most benefit.  If a department had dozens of items a day to enter, we could prioritize that automation over those that had only a few items a week.

Also, we could determine how to automate based on the timeliness of the information.  For items that were scheduled several months out, we could do a once-a-day import from the systems.  Conversely, for items that would result in immediate change to operations planning, we could tie more directly into the source systems for close-to-real-time data.

The Key to Success
My ability to make these choices -- the choices that ultimately affect the business -- was due to my relationship with the users in the various departments.  I continued those relationships for many years.  And when someone left a position, I went to go see the replacement in person as soon as I could (this was really easy when they were in the same building; a little more difficult if they were on the other side of the 100-acre property).  My goal was to establish very quickly that I was committed to their success.  Fortunately for me, most of the people who left positions told the new person that I was really good to work with.

The Project That Could Have Been a Disaster
One project sticks out where user interaction really saved the day.  This was a rewrite of an Access application that had been built by someone in the business area.  The application was having issues with scaling and was crashing fairly regularly.  They came to the development team to build a stable app.

I was the development lead, and my first request on the rewrite was for me to go out to the business area, do some job shadowing, and find out what they really need.  Unfortunately, that was not approved, and I was given the directive that the new application should do exactly what the old application did.

This is never what you want to hear.  In this case, there were 2 reasons I didn't like it.  First, I didn't know whether the current application actually met the users' needs.  Second, the current application consisted of about 20,000 lines of VBA code (it still gives me shivers to think about it).

I built a very good relationship with the super user.  He knew the functions of the current application, and he also fielded questions and issues from the users.  He became my source for real knowledge.

So, I dug through each module of the Access application.  More often than not, I found out that the application code was not actually doing what the users thought it was.  For example, there was a part of the application that looked at the city of a customer record and gave back the current local time for the customer.  The actual code had a hard-coded list of cities for each state along with the time zone.  The problem is that when a city wasn't found in the list, it would just pick the first city in the list.  This meant that it wasn't always accurate (particularly for states that had multiple time zones and/or multiple rules about Daylight Saving Time).

If I were to simply "do what the current application did", I would end up reproducing the errors.  If I wanted to make the function accurate, I would need to figure out a better way of getting the current time information (maybe a call to a web service?).  But it turned out to be much simpler than that: I talked to the super user.

In talking to the super user, I found out the intent of this function.  There were situations where the department would call a customer, and they wanted to make sure that they were calling within a particular window (for example, not after 8:00 p.m.).  It turned out that they didn't need the precision of a particular city, just a general idea.  So, instead of a function that would tell the time for a city, I provided a Time Zone map that showed the current time in each time zone.  This was a fairly simple solution that met the users' needs.

I have many other examples from that particular application.  Time after time, communication with the users saved me from over-building and allowed me to provide a system that more closely met their actual needs.

Disaster averted.

Disaster Not Averted
Unfortunately, not all projects can be successful.  I can think of two projects in particular that were extremely difficult (and ultimately never made it to release).  In both of these scenarios, I was "insulated" from contact with the users through a project manager.

In one scenario, all of the requirements came through the project manager.  I was only given what the application should do (i.e., it should show data this way, it should print that way); I was never given visibility to the actual business issues that we were trying to solve.  I just had a solution.  And I was doubtful whether that solution was really what the business area needed.

Ultimately, that version of the application was never released (which was a frustration to me even though I only worked on it for a few weeks).  The good news is that a year later, an updated version of that project came up, and I got to work on it.  In that iteration, we were working closely with the user group and created a very successful solution.

In another scenario, the requirements were coming through a (different) project manager.  For whatever reason, she didn't want us (the two developers on the project) talking directly with the user.  I'm still not quite sure why; it might have been for political reasons.  The problem is that whenever we came up with a prototype, we would give it to the project manager who would test it before giving it to the user.  When the changes would come back, we would never get the "why" of the change, just the "what".  And whenever we had a question about a particular function, it would go through the project manager and take about a week to get the answer back to us.

That project was put on the back burner several times, and it still wasn't finished when I left the company two years later.  Kind of sad because it was something that could have been very useful.  Sometimes we don't have as much control over the situation as we would like.

Wrap Up
I have many other examples from my career.  Fortunately, most of them have been successes: like the time that the project manager was someone who used to work in the business area.  Again, she had the answers to most of the questions, and if she didn't, she had all of the right contacts.  In addition, she didn't insulate us from the users; instead, we had conference calls and were encouraged to talk to them directly.

I've worked on a number of different projects in different environments (at one company, I had primary responsibility for 20 applications when I left).  The projects that I consider to be most successful have one thing in common: constant user interaction.  Meet face-to-face when you can; we need to realize that we are all people, not just voices on a phone.

Building real partnerships with the business area is how I've been successful.  And I'm not sure how you can build a successful product if you don't truly understand who will be using it and what their needs are.

Happy Coding!

Thursday, April 4, 2013

April 2013 Speaking Engagements

I have two events lined up for April 2013.  Come on out if you can.

Thursday, April 18, 2013
SoCal .NET Architecture
http://www.socaldotnetarchitecture.org/
Santa Ana, CA
Topic: Dependency Injection: A Practical Introduction

Dependency Injection (DI) seems to be a popular topic.  As with all of my sessions, we'll take a complex topic and break it down into easy-to-understand pieces.  At its core, DI is just a set of design patterns, and basic implementation of the patterns isn't that hard.  Once we have those core concepts down, we're ready to let something else (like a DI container) do a lot of the work for us.  And since we understand what that work is, there's no mystery involved.

Saturday, April 20, 2013
Desert Code Camp
http://apr2013.desertcodecamp.com/
Chandler, AZ
3 Sessions to choose from (or come to all 3)
o Clean Code: Homicidal Maniacs Read Code, Too
o Learn the Lingo: Design Patterns
o Dependency Injection: A Practical Introduction

Desert Code Camp is always a lot of fun -- this will be my 5th time.  There's a wide variety of topics on all sorts of technologies, lots of new people to meet, and they feed you throughout the day.  Plus, if you're in So Cal, it's a good excuse for a road trip.

Happy Coding!