Friday, January 31, 2014

Finding Elegance in Functional Programming

There is a certain elegance that I've found in my explorations of functional programming and Haskell. My plan is to take these learnings and apply them to the .NET world -- most likely with F#, but also by taking better advantage of the functional parts of C#.

I'm all about using the right tool for the job. This is why I get discouraged when I see zealots who say that we should always use a certain programming paradigm or a particular language. (For a rather humorous take on this, see: Why [Programming Language X] Is Unambiguously Better than [Programming Language Y].)

Euler Problems
I've talked about Euler problems previously. These are mathematical problems that really lend themselves to functional solutions. It's really easy to build up a solution by breaking the problem down into discrete pieces and adding the functionality step by step.

So, let's take a look at Euler problem #2. This is not very complicated, but it does have quite a few steps.
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
The first step of this problem is to generate a Fibonacci sequence. Lucky for us, we have a good function for that. We have the Fibonacci sequence itself:

fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

And we have a separate function that uses a list comprehension to create a limited sequence:

testFibs = [x | x <- takeWhile (<= 21) fibs]

Let's combine these into a single function:

testFibs = [x | x <- takeWhile (<= 21) fibs]
  where
    fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

We've rolled the "fibs" definition into our "testFibs" function. The overall result is that we have a list comprehension that provides us with Fibonacci numbers as long as the values are less than or equal to 21. And here's the output:

[1,1,2,3,5,8,13,21]

Now, one issue with the Euler problem is that it defines the Fibonacci sequence a little bit differently from what we have here. Most Fibonacci sequence implementations start with 1 and 1 (resulting in 1, 1, 2, 3, 5, 8...) or with 0 and 1 (resulting in 0, 1, 1, 2, 3, 5, 8...). This example specifies starting with 1 and 2.

We technically don't need to account for this discrepancy because the problem also specifies that we want "even-valued terms", so the leading "1" would be excluded. But we can also update our "fibs" function to account for this:

testFibs = [x | x <- takeWhile (<= 21) fibs]
  where
    fibs = 1 : 2 : zipWith (+) fibs (tail fibs)

Which gives us this updated sequence: [1,2,3,5,8,13,21]

An Elegant Solution
Just to show how elegant a functional solution can be, let's walk through the process of building up our function. We'll break it down into the following parts:

1. Fibonacci sequence whose values do not exceed 4 million
2. Even-valued terms
3. Sum

We already have a function that gives us terms that do not exceed 21, so changing this to not exceed four million is pretty easy:

testFibs = [x | x <- takeWhile (<= 4000000) fibs]
  where
    fibs = 1 : 2 : zipWith (+) fibs (tail fibs)

And the result:

[1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597,2584,4181,6765,10946,17711,28657,46368,75025,121393,196418,317811,514229,832040,1346269,2178309,3524578]

Now, we just need to add an additional filter to only pick out the even values. I say "additional filter" because the "takeWhile" function is a filter that limits the "fibs" sequence to a finite set of values. Lucky for us, Haskell has an "even" function built in.

testFibs = [x | x <- takeWhile (<= 4000000) fibs, even x]
  where
    fibs = 1 : 2 : zipWith (+) fibs (tail fibs)

And the result:

[2,8,34,144,610,2584,10946,46368,196418,832040,3524578]

This cuts the sequence down quite a bit.

Our last step is to simply sum up the values. Let's rename the function as well:

euler2 = sum [x | x <- takeWhile (<= 4000000) fibs, even x]
  where
    fibs = 1 : 2 : zipWith (+) fibs (tail fibs)

And this gives us the answer:

4613732

And we know this is correct by asking the internet for the solution to Euler Problem #2.

Wrap Up
There's a certain amount of elegance to this solution. We have built up a solution using several smaller functions, including "fibs", "takeWhile", "even", and "sum". Each function is responsible for doing a single thing. And we combine these small units to create more complex, useful functions.

So, as you can tell, I'm becoming a fan of functional programming. But we want to make sure that we're using the right tool for the job. Is functional programming the right solution for every problem? Absolutely not. But there are some problems (such as the one we looked at here) that really lend themselves to a functional solution. The same code written imperatively would be much more complex.

But we can take these functional solutions and apply them to a more general-purpose language like C#. At its heart, C# is an imperative language that has its roots in object-oriented programming. But it has also had many functional features added on over the years. We can start to "think functionally" and use those features more effectively. And if those features fall short, we also have the option of using F#, a .NET language built from the beginning as a functional language.
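As a small taste of that, here is one way the "euler2" solution above might look in C# -- a sketch (not code from the original post) that generates the bounded sequence imperatively and then leans on LINQ for the filter and sum:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int Euler2()
{
    // Generate Fibonacci terms (starting with 1 and 2) up to four million
    var fibs = new List<int>();
    int a = 1, b = 2;
    while (a <= 4000000)
    {
        fibs.Add(a);
        (a, b) = (b, a + b);
    }

    // The functional part: filter the even terms and sum them
    return fibs.Where(x => x % 2 == 0).Sum();
}

Console.WriteLine(Euler2());  // 4613732
```

The `Where` and `Sum` calls play the same roles as "even" and "sum" in the Haskell version.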

I always want to use the right tool for the job. And that means learning about as many tools as possible and figuring out where they fit in the toolbox.

Happy Coding!

Learning from Haskell (Preview)

I really want to talk about the cool take-aways I got from exploring Haskell. But I want to put this off for just a little bit. The reason is that I went from reading Learn You a Haskell for Great Good! (review) straight into Parallel Programming with Microsoft .NET, a book from the Microsoft Patterns and Practices team (available to read online).

I've heard people talk about how functional programming is a good match for parallel programming, and I'm really seeing that as I read through the recommendations and patterns in the Parallel Programming book. These are things like immutability, avoiding updates to shared objects, repeatability, and the ability to break things up into practical chunks. And I've seen these features in my exploration of Haskell.

So, I'm going to hold off talking about cool things like "Maybe" and higher-order functions until after I finish the latest book. That way, I'll be able to show exactly how these types of things help with parallel programming.

An interesting note about Parallel Programming with Microsoft .NET: the technologies include the Task Parallel Library (TPL) and PLINQ. And LINQ/PLINQ are very functional technologies that have been included in .NET. I'm a huge fan of LINQ, and I've been exploring this for a while.

So, stay tuned, and I'll have some good stuff in a couple of weeks.

Happy Coding!

Thursday, January 30, 2014

February 2014 Speaking Engagements

I have quite a few speaking engagements scheduled over the next few months. If you'd like me to come out and speak at your developer event or company, just drop me a note. I specialize in making intermediate topics accessible to developers of all skill levels.

I have 2 events currently scheduled for February:

Tuesday, February 4, 2014
LA C#
Pasadena, CA
http://lacsharp.org/
Topic: Learn the Lingo: Design Patterns

Wednesday, February 19, 2014
Pasadena .NET Developers Group
Pasadena, CA
http://www.meetup.com/sgvnet/events/163057812/
Topic: Shields Up! Defensive Coding in C#

I've got several other events scheduled in the coming months. Here's a preview:

Mar 5, 2014: SoCal .NET Developers Group
Buena Park, CA

Mar 15, 2014: Utah Code Camp
Salt Lake City, UT

Mar 18, 2014: Corporate Event
Burbank, CA

Mar 27, 2014: dotNetGroup.org
Las Vegas, NV

Apr 5, 2014: Desert Code Camp
Chandler, AZ

Apr 8, 2014: Inland Empire .NET User Group
San Bernardino, CA

I'm looking forward to all of these events. I get to catch up with developers that I haven't seen for a while, and I get to meet a bunch of new people. Hope to see you at an event sometime this year.

Happy Coding!

Wednesday, January 29, 2014

Dependency Injection: The Property Injection Pattern

There are a variety of Dependency Injection patterns. Previously, we looked at the Service Locator pattern (or anti-pattern, depending on your point of view). We contrasted this with Constructor Injection to see the pros and cons of each approach.

In my presentation on Dependency Injection (and also my Pluralsight course), we cover Constructor Injection and Property Injection, and we see how Property Injection can be a good choice if we have a default value that we want to use at runtime, but we still want to be able to inject a different object for testing. But there are other ways to implement the Property Injection pattern.

Let's take a look at these approaches and see how they compare to Constructor Injection.

Property Injection with a Good Default
In the sample code, we look at using Property Injection as a way to facilitate unit testing. In this case, we're using it with a service. Here's that property code:


Our property is of type IPersonService -- this is the interface that specifies the service. When we run our application, we want to use a real WCF service (which is called "PersonServiceClient" -- technically, this is a proxy that points to the production service, but the proxy pattern lets us treat this as if it were the service itself). Because of the way we have this property set up, our repository will use our production WCF service by default (i.e., if we do nothing).
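Based on that description, the property looks something like this (a sketch -- the interface and proxy types here are minimal stand-ins for the real WCF-generated types):

```csharp
using System.Collections.Generic;

// Stand-ins for the real service types (assumptions for this sketch)
public interface IPersonService { List<string> GetPeople(); }
public class PersonServiceClient : IPersonService
{
    public List<string> GetPeople() => new List<string>();
}

public class PersonServiceRepository
{
    private IPersonService _serviceProxy;

    public IPersonService ServiceProxy
    {
        get
        {
            // First use with no injected value: default to the production proxy
            if (_serviceProxy == null)
                _serviceProxy = new PersonServiceClient();
            return _serviceProxy;
        }
        set { _serviceProxy = value; }
    }
}
```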

Runtime Behavior
So, if we call a method such as "ServiceProxy.GetPeople()", it will run the getter on the ServiceProxy property. The first time through, the backing field ("_serviceProxy") will be null. If it is null, then we new up an instance of our production WCF service ("PersonServiceClient") and assign it to the backing field. Then we return that value.

If we call "ServiceProxy.GetPeople()" a second time, the backing field will not be null (it will be already populated with our production service), and it will simply use the service proxy we already created.

And this is great at runtime. When we run our application, we want to use the production WCF service 100% of the time. But we are also leaving ourselves open to injecting that service through the property.

Testing Behavior
The issue is when we get to unit testing. We want to be able to test our repository class without needing to rely on a production WCF service. If the production service is down for some reason, then our tests will fail. But we don't want to test the service here; we want to test the repository class.

Because of the way we implemented our property, we can use Property Injection to inject a mock service into our tests. Here's what one of our tests looks like:

Immediately after creating the repository, we assign a new value to the ServiceProxy property -- before we call any methods on that service. This will set the ServiceProxy backing field to a mock or fake service that we can use for testing. And that's exactly what we do. Here's the configuration for the "_service" field that we use:


This creates a mock object (using Moq) that will behave the way we want it to for testing purposes.
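Put together, the test setup might look something like this (a sketch assuming the Moq package; the members of "IPersonService" and the "Person" type are assumptions):

```csharp
// Arrange: Moq stands in for the production WCF service
var _service = new Mock<IPersonService>();
_service.Setup(s => s.GetPeople())
        .Returns(new List<Person> { new Person { FirstName = "John" } });

var repository = new PersonServiceRepository();

// Property Injection: swap in the mock before calling any methods
repository.ServiceProxy = _service.Object;
```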

Using Property Injection this way works really well when we have a default value that we use at runtime, but we want to swap out that value for testing purposes.

Property Injection with No Default
There is another way to implement the Property Injection pattern: by using an automatic property. Here's what that property would look like in our repository:


This property still has a setter, which means that we can swap out the value. But this time, we do not have a default value for the property. This means that we need to set the property somehow. We can do this by configuring our Dependency Injection container.
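In sketch form (with a stand-in for the service interface):

```csharp
public interface IPersonService { }  // stand-in for the real service interface

public class PersonServiceRepository
{
    // Still injectable through the setter, but no default value:
    // if nothing sets this property, it stays null
    public IPersonService ServiceProxy { get; set; }
}
```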

Injection with Unity
One of our samples from the session uses the Unity container (from the Microsoft Patterns & Practices team). Here's how we would need to expand the container configuration to inject this property (this is in the composition root of our application where we configure our container):


The syntax for Unity is a little cryptic. What we're saying here is that we want to add some configuration when the container resolves a "PersonServiceRepository" (which is our repository type). When Unity resolves this type, we would also like it to inject a value for the "ServiceProxy" property.

For this, Unity will go through its standard process. It will look at the ServiceProxy property, determine its type ("IPersonService"), and figure out how to resolve an object of that type. Now, we will need to give Unity a little more information for this. Here's one way of doing that:


In this code, we create an instance of the "PersonServiceClient" (that points to our production WCF service) and then register that instance with the Unity container. So, when we ask Unity to resolve an "IPersonService", it will provide us with the "serviceProxy" instance that we created.

And to take this a step further, when Unity is asked to inject the "ServiceProxy" property (which is of type "IPersonService"), it will inject the "serviceProxy" instance.
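Putting those two registrations together, the composition root might look like this (a sketch assuming the Unity container package):

```csharp
// Composition root (sketch)
var container = new UnityContainer();

// Give Unity a concrete instance to hand out for IPersonService
var serviceProxy = new PersonServiceClient();
container.RegisterInstance<IPersonService>(serviceProxy);

// When resolving the repository, also inject the ServiceProxy property
container.RegisterType<PersonServiceRepository>(
    new InjectionProperty("ServiceProxy"));

var repository = container.Resolve<PersonServiceRepository>();
```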

Our unit tests could remain unchanged -- we can manually set the "ServiceProxy" property in our tests (as we did above). Or, if we want to use a Unity container in our tests, we could configure the Unity container to use the mock service rather than the production service.

Alternative Injection
Other DI containers (such as MEF) may use different mechanisms to inject the property. I've worked with Prism (an application framework that also comes from the Microsoft Patterns & Practices team), and it provides an abstraction that allows us to configure property injection by using an attribute.

In that case, we just mark our ServiceProxy property with a "[Dependency]" attribute, and the framework ensures that the container injects that property.

A Big Drawback
There is a big drawback to implementing Property Injection this way. I managed to find this out the hard way when I was new to dependency injection. But it does make sense when you think about it.

When we do not supply a default value for the property, what happens if we try to include the following code?


This is the constructor for our repository. In the constructor, we use the ServiceProxy property to populate some data. We may want to do this if we want to initialize some data when our object gets created.
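Here is a self-contained sketch of that situation (the "People" property and "GetPeople" method are assumptions for illustration):

```csharp
using System.Collections.Generic;

public interface IPersonService { List<string> GetPeople(); }

public class PersonServiceRepository
{
    public IPersonService ServiceProxy { get; set; }  // no default value
    public List<string> People { get; private set; }

    public PersonServiceRepository()
    {
        // Compiles fine, but ServiceProxy is still null here --
        // the container injects the property *after* construction
        People = ServiceProxy.GetPeople();  // NullReferenceException
    }
}
```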

The problem is that this will result in a runtime error. (But it compiles just fine.)

Let's walk through what happens when our repository gets created (we'll assume it's being created by the Unity container).

1. The Unity container creates an instance of the PersonServiceRepository. (This executes the constructor.)
2. The Unity container injects the ServiceProxy property based on the "RegisterType" configuration that we saw above.

Do you see the problem now? The constructor executes *before* the property is injected. (This is also the case when using the "[Dependency]" attribute with Prism.) This results in a null-reference exception when we try to access the ServiceProxy property in the constructor.

Mitigating the Risks
We've seen the potential risks with a "no default" implementation of Property Injection. How do we get around this?

Well, we'll go back to our recommendations from the Dependency Injection session: We should favor using Constructor Injection for items that our object requires but do not have good default values. So, instead of using Property Injection in this case, we can move the dependency into the constructor as a parameter.

So, our constructor would look like this:


This lets us safely use the "ServiceProxy" property in our constructor.
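A sketch of that Constructor Injection version (the "People" property and "GetPeople" method are assumptions, as before):

```csharp
using System.Collections.Generic;

public interface IPersonService { List<string> GetPeople(); }

public class PersonServiceRepository
{
    public IPersonService ServiceProxy { get; set; }
    public List<string> People { get; private set; }

    // Constructor Injection: the dependency arrives before we use it
    public PersonServiceRepository(IPersonService serviceProxy)
    {
        ServiceProxy = serviceProxy;
        People = ServiceProxy.GetPeople();  // safe now
    }
}
```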

Another option is to move this initialization code out of the constructor. We generally want our constructors to run quickly, and this means avoiding network or database calls during construction. We can look at lazy loading our data when we actually use it. Then we can safely use Property Injection in our class.

Wrap Up
After looking at this, we may think that Property Injection should be avoided. But Property Injection is still a good dependency injection pattern. We just need to be aware of the consequences of the pattern before we implement it in our code (this is true of all design patterns). And this leads us to the recommendations as we approach dependency injection.

Use Property Injection when we have a default value that we want to swap out for testing. But favor Constructor Injection as much as possible. Constructor Injection has the benefit of keeping dependencies obvious (as we saw when we looked at the Service Locator).

Dependency Injection helps us create loosely coupled code that facilitates maintainability, extensibility, and testability. The better we understand the pros and cons of each of the dependency injection patterns, the better the results will be in our applications.

Happy Coding!

Tuesday, January 28, 2014

A Functional Fibonacci Sequence (as Learned by an Imperative Programmer)

As mentioned previously, I've been interested in learning different programming languages and paradigms, not necessarily so that I can program in those languages, but so that I can improve my everyday programming in my main language.

One of my favorite things to noodle around with is the Fibonacci Sequence. This is just complicated enough to be interesting but not too complicated to be overwhelming. The sequence is easy enough:
1, 1, 2, 3, 5, 8, 13, 21, ...
Each item in the sequence is created by adding the 2 previous numbers. The sequence can start with either 0 or 1 (I use the version that starts with 1). So, the sequence goes like this:
 1
 1 = 1 + 0
 2 = 1 + 1
 3 = 1 + 2
 5 = 2 + 3
 8 = 3 + 5
13 = 5 + 8
21 = you get the idea
This is a classic example of a sequence that would lend itself to a recursive function. In my example code, I calculate the sequence using some global variables, but that's usually because the focus of the example is not on the Fibonacci sequence itself but on the code that uses it.

A Recursive Implementation
When I started diving into Haskell, I thought that implementing a Fibonacci sequence would be a good way for me to get familiar with the language. My first implementation included 2 functions.
fibonacci :: Int -> Int
fibonacci 1 = 1
fibonacci 2 = 1
fibonacci n = fibonacci(n-1) + fibonacci(n-2)
This method takes a parameter for the position of the number in the sequence. So, for example, if we call "fibonacci 4", it will give us the 4th number in the Fibonacci sequence (which is 3). It does this with a recursive call. Let's walk through the implementation above.

"fibonacci 4" would use the last case ("fibonacci n") and would result in the following:
fibonacci (4-1) + fibonacci (4-2)
Reduced to
fibonacci 3 + fibonacci 2
"fibonacci 2" is equal to "1". Let's put that in:
fibonacci 3 + 1
"fibonacci 3" will use the "fibonacci n" case. Let's expand that:
fibonacci (3-1) + fibonacci (3-2) + 1
or
fibonacci 2 + fibonacci 1 + 1
And since "fibonacci 2" equals 1 and "fibonacci 1" also equals 1, we're left with
1 + 1 + 1
So, the 4th item in the Fibonacci sequence is 3.
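For an imperative programmer, it may help to see the same naive recursion in C# (a sketch):

```csharp
using System;

int Fibonacci(int n)
{
    // Base cases, matching the Haskell version
    if (n == 1 || n == 2) return 1;
    // Recursive case: sum of the two previous terms
    return Fibonacci(n - 1) + Fibonacci(n - 2);
}

Console.WriteLine(Fibonacci(4));  // 3
```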

To create a sequence of these numbers, I created another recursive function:
fibonacciList :: Int -> [Int]
fibonacciList 1 = [1]
fibonacciList n = fibonacciList (n-1) ++ [fibonacci n]
To use this method, we just pass in the number of items in our Fibonacci sequence, and it returns a list with the specified number of items.
fibonacciList 8
[1,1,2,3,5,8,13,21]
This function basically builds a list based on the "fibonacci" function. So, it creates the elements of the list by running "fibonacci 1", then "fibonacci 2", "fibonacci 3", ... "fibonacci n". We won't go through the recursion step-by-step.

A Big Problem
I was pretty proud of myself when I came up with these methods. I created a Fibonacci sequence, and I did it "functionally". But these methods have a big problem.

Since each call to "fibonacci n" recursively recalculates all of the previous numbers (and "fibonacciList" makes one of these calls for every element), the amount of work explodes as the list grows. What this means is that it works fine for small lists. But once we get to slightly larger lists, we find that the calculation slows down significantly. In fact, "fibonacci 35" takes over 20 seconds to calculate on my laptop. That seems pretty ridiculous.

I was actually working on Euler problem #2 when I ran into this performance issue. And it really bothered me. So, I started to look for other solutions to calculate the Fibonacci sequence.

Note: I talked about Euler problems when I worked through Euler problem #1 on my blog.

A Truly Functional Fibonacci
I ran across this implementation that is very simple. But since my brain isn't quite used to thinking functionally, this completely escaped me. And when I saw the solution, it took a while before I really understood what it was doing.
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
The ":" (cons) operator adds an element to the front of a list. So, "1 : 1 : ..." builds a list that starts with "[1,1,...]". This function will actually create an infinite list of Fibonacci numbers (which is really interesting to run). We can wrap this in a list comprehension to return a list that stops once we get to 21.
testFibs = [x | x <- takeWhile (<= 21) fibs]
[1,1,2,3,5,8,13,21]
This works because Haskell is lazy evaluated. So even though the "fibs" list is infinite, Haskell will stop processing once it hits the takeWhile limitation that we set (<= 21).

Let's think about how "fibs" works. The "zipWith" function applies a function (in this case "+") to two lists. The lists are "fibs" (which is the current list) and "tail fibs" which includes everything but the first item. Again, this takes a bit to wrap your head around since these are recursive calls that are lazy evaluated.

It ends up creating a sort of ongoing offset list. Here "fibs" is [1,1] and "tail fibs" is [1]. When we add these element by element (ignoring any "leftover" elements), we get [2], which is appended to the list:

   fibs:      [1, 1]
   tail fibs: [1]
   result:    [1, 1, 2]

To get the 4th item, "fibs" is [1,1,2] and "tail fibs" is [1,2]. When we add these, we get [2, 3]:

   fibs:      [1, 1, 2]
   tail fibs: [1, 2]
   result:    [1, 1, 2, 3]

And again:

   fibs:      [1, 1, 2, 3]
   tail fibs: [1, 2, 3]
   result:    [1, 1, 2, 3, 5]

As this continues, we can see from the columns that fibs + tail fibs = the rest of the Fibonacci sequence:

   fibs:      [1, 1, 2, 3, 5]
   tail fibs: [1, 2, 3, 5]
   result:    [1, 1, 2, 3, 5, 8]
Okay, so I don't know if the diagrams make it more clear or less clear. The combination of lazy evaluation with the recursiveness makes this quite a brain twister. But once it "clicks", it's extremely cool.

Wrap Up
So, what I've learned from this exercise is that I'll have to do a lot more work with functional programming for this to become more natural. I know that my first pass won't always be correct. But, I've found that solving a complex problem by building the solution up from smaller steps is a pretty amazing way to work.

I'll keep exploring functional programming (heading into F# soon), and I'm looking forward to making my brain think in new ways.

Happy Coding!

Monday, January 27, 2014

Book Review: Learn You a Haskell for Great Good!

I recently finished reading Learn You a Haskell for Great Good! by Miran Lipovača (Amazon Link). And, yes, it did take me a long time to get through this book -- mainly due to other distractions. I started reading this way back in October, got a good start, and then let it sit around for a while.

I'll split this into 2 parts -- the basics and the more advanced stuff.

The Basics
Learn You a Haskell gives a fairly gentle introduction to functional programming and the Haskell language. Lipovača understands that functional programming is an unfamiliar concept to many programmers who are starting out with Haskell. As such, he combines functional programming concepts with the description of how the various elements of the language work.

Haskell is a purely functional language, meaning that functions are first-class citizens (actually, the only citizens). And it is built around the concepts of immutability, "no side effects", and repeatability. Some other functional languages have compromises that let them fit in with more imperative programming.

I found this very helpful as a first step into functional programming. I really had to start thinking about immutability and repeatability in my own Haskell code. There is no other way of doing it.

The book takes small steps in introducing new concepts. This is very additive in nature (which is nice). And to describe some of the basics, the examples build simple functions based on prior learnings. Often, these simple functions end up as standard functions that are supplied with the standard modules. But it's nice to see how these functions actually work.

For an example, take a look at the zipWith' function (which I mentioned earlier in a different context):


This is from Chapter 5 (Higher-Order Functions) and shows several different concepts that are covered earlier. The concepts include function declaration (the first line), lists (the square brackets), pattern matching (the last 3 lines, which work like a "case" statement), working with head/tail on lists (the "x:xs" and "y:ys"), and recursion (the "zipWith'" call in the last line).

This particular example is to add on the "higher-order function" part. A higher-order function is a function that takes another function as a parameter. In this case, the zipWith' function takes a function and 2 lists as parameters. Then it applies the function to each element of the lists and returns a list.

For example:
zipWith' (+) [1,2,3] [4,5,6]
Takes the "+" function (yes, that's really a function in Haskell) and uses it with the first element of each list, then the second element of each list, and so on. So, it calls "1 + 4", "2 + 5", and "3 + 6".  The result is a list: [5, 7, 9].
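LINQ's "Zip" method gives us the same behavior in C# -- a quick sketch:

```csharp
using System;
using System.Linq;

// Zip pairs up elements from two sequences and applies a function to each pair
var result = new[] { 1, 2, 3 }
    .Zip(new[] { 4, 5, 6 }, (x, y) => x + y)
    .ToList();

Console.WriteLine(string.Join(", ", result));  // 5, 7, 9
```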

And of course, this leads to the Foo Bar example that I actually like.

The Advanced Stuff
The skill level ramps up. Lots of concepts are covered including Functors, Applicative Functors, Monoids, and Monads.

Things were a little fuzzy to me in the middle of the book, since Lipovača uses the "building on previous knowledge" approach. I figured that I would be totally lost for the rest of the book, but I kept pressing forward.

What I found was that I was able to pick things back up again. For example, I got a bit lost in the description of applicative functors. But then, moving on to monoids, Lipovača describes how monoids build on top of applicative functors. And for some reason, reviewing the old material in conjunction with the new material made the old concept "click" with me.

Where I Explored Further
To help me get acquainted with Haskell (and functional programming in general), I took a look at the Euler problems. These are math-related problems that really lend themselves to being solved functionally. I used these to get me to start thinking functionally. I wrote about my encounter with the first Euler problem a few months back.

And I spent quite a bit of time on the second Euler problem. This deals with the Fibonacci sequence. I have a bit of fondness for the Fibonacci sequence. I have used a non-recursive implementation in C# in some of my previous examples. Since the Fibonacci sequence is often solved with a recursive function, I started there.

What I found is that creating a Fibonacci sequence with a classic recursion method is extremely slow. So, I spent some more time trying different approaches. And I found some really interesting Haskell examples online with completely different approaches. I'll talk about these in a later article.

Wrap Up
I found Learn You a Haskell for Great Good! to be a really good resource for me. It gave me a good overview of functional programming concepts. Now, I have had some exposure to functional concepts; I've been exploring them casually for the last year or so. Someone who is brand new to functional programming may need to spend a bit more effort on the early chapters. But that effort will pay off.

So, I put Learn You a Haskell for Great Good! into the "recommended" category. Check it out if you want to get started with programming in a purely functional language.

Happy Coding!

Monday, January 6, 2014

Improving Reflection Performance with Delegates

Reflection is an extremely powerful tool. But one of the drawbacks is performance. In my presentations on Reflection, we look at an application that shows the speed differences between calling methods directly and calling them with Reflection (live presentation on my website; video presentation on Pluralsight).

Now, the point of this application is to show that dynamically invoking a method through reflection is 30 times slower than making a direct method call. This is to encourage us to make sure we only use reflection when we actually need it. But if we do need it, there are ways to improve the performance that I don't talk about in the presentation. Instead of doing dynamic invocation directly, we can use a delegate to improve performance.

The sample code is available here: http://www.jeremybytes.com/Downloads/ReflectionWithDelegates.zip.

Baseline Speed
The sample application shows 4 different ways to call a method (as opposed to the 2 methods shown in the original presentation sample).

Here's the code for the direct method call:


Most of this code is boiler-plate to get the metrics for the UI -- and I use the word "metrics" here very loosely. This code performs the loop 10,000,000 times (which is a lot). That's how many times we need to do this so that we can get some human-noticeable times. And this just gives us a general idea. When we run this code, the computer is doing other stuff (background operations, UI updates, network polling, etc.), so the exact numbers will vary.

The important bits of the above method are the first line (where we create a new List object) and the line inside the "for" loop (where we add the loop index to the list).
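Stripped of the UI plumbing, the core of that method looks something like this (a sketch):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

var list = new List<int>();

var watch = Stopwatch.StartNew();
for (int i = 0; i < 10000000; i++)
{
    list.Add(i);   // direct method call
}
watch.Stop();

Console.WriteLine($"Direct call: {watch.ElapsedMilliseconds} ms");
```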

When running on my machine (a dual-core i7), we get the following result:


Dynamic Invocation with Reflection
When we try the same functionality using reflection, we get a much different result. Here's the code:


This code is a little bit different. We still create the new List variable. But then, we use reflection to execute the "Add" method. To do this, we get a Type object based on List<int>. Then we call "GetMethod" which gives us back a MethodInfo object.

Then inside the loop, we call Invoke on the MethodInfo object. The parameters look a bit strange for this method. The first parameter is the instance we want to call the method on -- in this case, it is the "list" variable that we created at the top. The second parameter is an object array for the method parameters. Since we need to pass in a single parameter (an integer), we create an object array with a single value.
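Here is a sketch of that reflection-based version:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Reflection;

var list = new List<int>();

// Get a MethodInfo for List<int>.Add
Type listType = typeof(List<int>);
MethodInfo addMethod = listType.GetMethod("Add");

var watch = Stopwatch.StartNew();
for (int i = 0; i < 10000000; i++)
{
    // First argument: the instance; second: the method's parameters
    addMethod.Invoke(list, new object[] { i });
}
watch.Stop();

Console.WriteLine($"Reflection Invoke: {watch.ElapsedMilliseconds} ms");
```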

The result of this method call is the same as the method in the first example -- we add 10,000,000 items to a List object.

The performance is significantly different:


Instead of 127 milliseconds, we get 3.5 seconds! That's around 30 times longer.

Using an Interface
The recommendation in the Practical Reflection presentation is to use Reflection to load and instantiate an object (to give us the flexibility of run-time loading), but then cast the object to a known interface in order to call the method. This gives us the best of both worlds.

Here's that code:


At the top of this method, we get a Type object based on List<int>, and then we use the Activator class to create an instance of that type. Notice that our variable ("list") is an interface type (IList<int>) rather than a concrete type.

Because of this, even though we create the object dynamically, we can call the "Add" method just like we would on a normal object. (And we see this inside the "for" loop.)
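A sketch of that approach (again a console stand-in with timing omitted; names are illustrative):

```csharp
using System;
using System.Collections.Generic;

class InterfaceCallTimer
{
    static void Main()
    {
        // Use reflection (the Activator) only to create the instance...
        Type listType = typeof(List<int>);
        IList<int> list = (IList<int>)Activator.CreateInstance(listType);

        // ...then call the method through the known interface.
        for (int i = 0; i < 10000000; i++)
        {
            list.Add(i);   // ordinary compiled call through IList<int>
        }

        Console.WriteLine("Count = {0}", list.Count);
    }
}
```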

The result is that we do not get a performance hit. It runs at the same speed as a direct method call:


Reflection with a Delegate
But there is another option as well. If we absolutely need to use Reflection to dynamically call a method multiple times, we can use a delegate to improve the performance.

Here's the code for that:


This code is a bit more complicated. Notice at the very top (outside of our button click handler), we have a definition of a delegate ("ListAddDelegate"). Notice the signature for this delegate. The first parameter is "List<int>" -- this is the instance of the list that the "Add" method is called on. The second parameter is the parameter for the "Add" method -- in this case, an integer. The delegate returns void because List<T>.Add (the method we want to call) returns void.

The first 3 lines of the button click handler match the reflection method. We create an instance of a List<int>, get a Type variable, and then use GetMethod to get a MethodInfo object.

But then we create a delegate instance. We use "Delegate.CreateDelegate" to create a delegate object based on our MethodInfo object. The first parameter is the Type of the delegate we want (our custom ListAddDelegate), and the second parameter is the MethodInfo object (the "addMethod" that we got above). Then we cast this whole thing to a ListAddDelegate.

Inside our "for" loop, we simply invoke our custom delegate by calling "addDelegate" with the 2 parameters (the List<int> instance and the integer that we want to "Add").
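The steps above can be sketched like this (console stand-in, timing omitted; the delegate and variable names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Open instance delegate matching List<int>.Add: the first parameter is
// the instance to call "Add" on, the second is "Add"'s integer argument.
delegate void ListAddDelegate(List<int> list, int value);

class DelegateCallTimer
{
    static void Main()
    {
        // Same first 3 lines as the reflection version.
        var list = new List<int>();
        Type listType = typeof(List<int>);
        MethodInfo addMethod = listType.GetMethod("Add");

        // Create a delegate from the MethodInfo, then cast it.
        var addDelegate = (ListAddDelegate)Delegate.CreateDelegate(
            typeof(ListAddDelegate), addMethod);

        for (int i = 0; i < 10000000; i++)
        {
            addDelegate(list, i);   // delegate call, no per-call reflection
        }

        Console.WriteLine("Count = {0}", list.Count);
    }
}
```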

The result is much better performance:


This time is in line with the direct method call and the interface method call.

Here are all of the results together:


Wrap Up
The point of the original speed comparison is to show that using Reflection to call a method is around 30 times slower than making a direct call. But if we do find that we need reflection to dynamically call a method multiple times, we do have the option of creating a delegate to handle the method calls.

There are several ways to come up with similar answers. As developers, we should be used to this. What we need to do is weigh the pros and cons of each approach in the context of our own application -- keeping in mind that we want to balance flexibility and performance.

Happy Coding!

Thursday, January 2, 2014

January 2014 Speaking Engagement: San Diego .NET Developers Group

I'll be speaking in San Diego next week (on Tuesday):

Tuesday, January 7, 2014
San Diego .NET Developers Group
San Diego, CA
http://www.meetup.com/San-Diego-NET-Developers-Group/events/150380092/

The topic is Practical Reflection in .NET (based on my Pluralsight course of the same name). Reflection is one of the geekier topics in .NET. And a lot of times, we see it as something that's good for people building developer tools. But there are several features that are good for everyday programming.

Reflection is a very powerful tool. And with great power comes great responsibility (or as some people like to say, "It's your foot"). We'll focus on how we can use Reflection to add flexibility to our applications while still maintaining safety and performance.

I've spoken at the San Diego .NET Developers Group five times so far, and it's always a lot of fun. They will celebrate 20 years as a user group in April. There are lots of great people to talk to, so come out if you can.

Happy Coding!

New Pluralsight Course: Introduction to Localization and Globalization in .NET

My latest course is now available on Pluralsight: Introduction to Localization and Globalization in .NET.

Introduction to Localization and Globalization in .NET
We create the best experience for our users by communicating with them in a way that they understand. .NET provides robust localization and globalization features that allow us to create and deploy applications that adapt for different languages and cultures. We'll take a look at the basics and ready our applications to take on the world.
This course is all about how to prepare our applications to support different languages and cultures. We talk about the fundamentals of culture and the built-in .NET support including thread culture, localizable resource files, and satellite assemblies. A big part of the course is to take an existing application, prepare it for localization, and then apply different languages and cultures.

The world is getting smaller and smaller. Localization is not a complex topic, but it is a topic that many developers never think about. And there are a few "gotchas" out there the first time you localize an application. And if we think about localization while we're designing our application (rather than tacking it on when we're "done"), we'll have a much easier experience in getting localized resources and globalized formats into the app.

Thanks to My Translators
I want to give a big thanks to my translators. They were nice enough to provide me with translations of the strings that I needed for my application. And they haven't asked for anything in return (at least not yet).
Tomas Petricek did the Czech localizations. He is the author of Real-World Functional Programming, and he is a Microsoft MVP for Visual F#. Visit him at http://tomasp.net/ and on Twitter @tomaspetricek.

Mathias Brandewinder did the French localizations. He speaks around the world on functional programming and machine learning, and he is also a Microsoft MVP for Visual F#. Visit him at http://www.clear-lines.com/blog/ and on Twitter @brandewinder.

Filip Ekberg did the Swedish localizations. He is the author of C# Smorgasbord as well as a Pluralsight author and Microsoft MVP for Visual C#. Visit him at http://blog.filipekberg.se/ and on Twitter @fekberg.

Volkan Uzun did the Turkish localizations. He works in the application security world and does wonders with integrating custom auth providers into SharePoint.

More Courses to Come
I've still got a couple more courses in the hopper. Right now, I'm working on Defensive Programming in C#. Look for this course to come out later this month.

Until then, be sure to check out Introduction to Localization and Globalization in .NET, and get ready to take on the world!

Happy Coding!