Friday, December 22, 2017

Your Ideas Needed: Other Ways to Run Code in Parallel

My last article, How Does Task in C# Affect Performance?, drew quite a few suggestions on how to improve the performance. If you'd like to contribute your code, now's your chance!

As mentioned in the prior article, the technique of generating a bunch of tasks is a brute force approach. The reason that I like it is that it gets me the UI behavior that I really want. The goal is machine learning for recognizing hand-written digits, but the visualization is the whole point of this application.

Since the process of making a prediction takes a bit of time, I want the UI to update as each item is predicted. Here's an animation of the application running the current code (available on GitHub: https://github.com/jeremybytes/digit-display):


The point of this application is visualization (back to the first iteration): I want to see different algorithms side-by-side to understand what each one gets right and wrong. If they vary in accuracy, are they making the same errors or completely different ones? The speed is also part of that.

Here are the things I like about this particular implementation:
  1. It's easy to tell by the animation above that the Manhattan Classifier runs faster than the Euclidean Classifier.
  2. We don't have to wait for complete results to start analyzing the data.
  3. It gives me an idea of the progress and how much longer the process will take.
These are things that the brute-force method accomplishes. You can look at the previous article to see the code that runs this.

A Different Attempt
Before I did the manual Tasks, I tried to use a Parallel.ForEach. It was a while back, and I remember that I couldn't get it to update the UI the way that I wanted.

I thought I would take another stab at it. Unfortunately, I ended up with an application that went into a "Not Responding" state and updated the UI in a block:


Instead of showing two different algorithms, this shows two different methods of running the tasks in parallel.

On the left, the "Parallel Manhattan Classifier" runs in the "ParallelRecognizerControl". This is a user control that uses a Parallel.ForEach. On the right, the "Manhattan Classifier" runs in the "RecognizerControl". This is a user control that uses the brute-force approach described previously.

A couple things to note:
  1. The application goes into a "Not Responding" state. This means that we have locked our UI thread.
  2. The results all appear at once. This means that we are not putting things into the UI until after all the processes have completed.
This code is available in the "Parallel" branch of the GitHub project; specifically, we can look in the ParallelRecognizerControl.xaml.cs file.


This uses the "Parallel.ForEach" to loop through our data. Then it calls the long-running process: "Recognizer.predict()".

After getting the prediction, we call "CreateUIElements" to put the results into our UI. The challenge is that we need to run this on the UI thread. If we try to run this directly, we run into threading issues. The "Task.Factory.StartNew()" allows us to specify a TaskScheduler that we can use to get back to the UI thread (more on this here: Task, Await, and Asynchronous Methods).
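The code from that file isn't reproduced here, but based on the description above, the heart of it looks something like this sketch. Only Parallel.ForEach, Recognizer.predict(), CreateUIElements(), and the TaskScheduler usage come from the text; the surrounding method, field, and parameter names (RunParallelRecognizer, rawData, classifier) are placeholders of mine.

    // Sketch only (requires using System.Threading; and using System.Threading.Tasks;)
    private void RunParallelRecognizer(string[] rawData)
    {
        // Capture the UI thread's scheduler so UI updates can be marshaled back to it.
        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

        Parallel.ForEach(rawData, imageString =>
        {
            // CPU-bound, long-running prediction for a single digit.
            var prediction = Recognizer.predict(imageString, classifier);

            // Schedule the UI work (bitmap + result) on the UI thread.
            Task.Factory.StartNew(
                () => CreateUIElements(prediction, imageString),
                CancellationToken.None,
                TaskCreationOptions.None,
                uiScheduler);
        });
    }

Since Parallel.ForEach blocks the calling thread until every iteration has finished, calling it from the UI thread would explain both symptoms: the window stops responding, and the queued UI updates only show up after the loop completes.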

But as we can see from the animation, this does not produce the desired results.

I tried a few approaches, including a concurrent queue (part of that is shown in the commented code). That got pretty complicated pretty quickly, so I didn't take it too far.

How You Can Help
If you're up for a challenge, here's what you can do.
The application is already configured to run the "ParallelRecognizerControl" in the left panel and the "RecognizerControl" in the right panel. So you should only have to modify the one file.

If you come up with something good (or simply fun, interesting, or elegant), submit a pull request, and we'll take a look at the different options in future articles.

If you don't want to hash out the code yourself, leave a comment describing your ideas and your approach.

Remember: We're looking for something that gives us a UI that updates as the items are processed. There are much faster ways that we can approach this without the visualization. But the visualization is why this application exists.

Happy Coding!

Sunday, December 17, 2017

How Does Task in C# Affect Performance?

When I talk about using Task (and await -- which is a wrapper around Task), often there's a question about performance. The concurrent nature of Task means that there is some overhead. We don't get these things for free.
The good news is that the cost is negligible for the types of things we generally deal with in user space.
To test this out, I did some experimentation with my Digit Recognizer application (GitHub project). In that project, I create a large number of Tasks in order to maximize the usage of the CPU cores on my machine.

There are a couple of caveats with regard to this particular project:
  1. The parallel tasks are CPU-bound, meaning the CPU speed is the constraint. This is different from tasks that are I/O-bound (whether waiting for a file system, network, or database).
  2. The number of tasks is a "reasonable" size. The definition of reasonable will vary based on the application, but we'll see this as we take a closer look at the code.
The Existing Application
This is a machine learning application that is designed to try out different algorithms for recognizing hand-written digits. Here's a typical output (from the master branch of the project):


This shows 2 algorithms side-by-side. The left side uses Manhattan distance, and the right side uses Euclidean distance. As we can see, the Euclidean distance takes a bit longer, but it is more accurate.

The way that I maximize CPU usage is to split each item into a separate task. That allows me to use all 8 cores on my machine.

Here's the code to run things in parallel (from RecognizerControl.xaml.cs in the DigitDisplay project):


This uses a bit of a brute-force approach to running things in parallel. The string array that comes into this method represents the items we're trying to recognize (each string is a separate digit - you can read more about it here: Displaying Images from Pixel Data).

The "foreach" loop grabs each individual string and creates a Task that will run it through the classifier. The "Recognizer.predict()" function is the one that does the heavy lifting.

After kicking off the task (Task.Run creates a Task and runs it as soon as possible), we add a continuation (in the "ContinueWith()" method). This runs the "CreateUIElements()" method that will create a bitmap, pair it with the prediction result, and then display it in the WrapPanel of our UI. This uses the "TaskScheduler.FromCurrentSynchronizationContext" to get back to the UI thread. You can learn more about this from my articles and videos about Task: Task, Await, and Asynchronous Methods.

The last line of the method ("Task.WhenAny()") is there to reset the timer appropriately. This is how we get the "Duration" on the screen.
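That code isn't shown here either, so here's a rough sketch of the pattern the last few paragraphs describe. As before, rawData, classifier, and the exact CreateUIElements parameters are placeholders, and the timer handling is only hinted at.

    // Sketch of the brute-force approach (not a copy of RecognizerControl.xaml.cs).
    private void RunRecognizer(string[] rawData)
    {
        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

        foreach (string imageString in rawData)
        {
            // Task.Run creates the Task and starts it as soon as possible.
            var predictionTask = Task.Run(() => Recognizer.predict(imageString, classifier));

            // The continuation pairs the prediction with a bitmap and adds it to
            // the WrapPanel -- on the UI thread, thanks to the captured scheduler.
            predictionTask.ContinueWith(
                t => CreateUIElements(t.Result, imageString),
                uiScheduler);
        }

        // The real method ends with the "Task.WhenAny()" call mentioned above to
        // reset the duration timer; that detail is left out of this sketch.
    }

Because each iteration only starts a Task and immediately moves on, the UI thread is never blocked, and each result shows up as soon as its own continuation runs.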

A Lot of Tasks
This generates *a lot* of Tasks. A Task is created for each item (242 for each classifier), and these are all told to "start now". It's then up to the Task Scheduler to figure out when and where to run each of these tasks.

So I'm creating 484 tasks, and saying "run these when you can".

That can't be good for performance, right?

It runs faster than when it was single-threaded (and that makes a whole lot of sense). But what is it really doing for performance?

Performance Test
I was talking to someone about Tasks and performance, and it made me curious as to how creating all of these Tasks impacts the performance of this application.

I figured that if I stripped away the CPU-intensive processing, what we would have left would be the overhead of using Tasks this way. So I started by creating a "Null Classifier" that didn't do any work.

Initial results were promising:


If you'd like to see this code, then you'll need to switch over to the "NullClassifier" branch of the GitHub project: https://github.com/jeremybytes/digit-display/tree/NullClassifier.

Here's the very complex code for the Null Classifier (from "FunRecognizer.fs" in the "FunctionalRecognizer" project):


The "nullClassifier" takes in an integer array, and always returns "0". This doesn't do any processing, but it's also not very accurate :)
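If F# isn't your thing, the same idea in C# is also a one-liner (my translation, not code from the project):

    // Ignore the pixel data entirely and always predict zero.
    int NullClassifier(int[] pixels) => 0;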

To get a clearer picture of what was going on, I also increased the number of records from 242 to 5,000 (which is actually 10,000 records since we have 2 classifiers). In the results, we can see that the "Manhattan Classifier" took 180 seconds to finish; the "Null Classifier" took 6 seconds to finish.

That implies that there is 6 seconds of overhead for using Task this way.

But that's also a bit misleading.

A More Accurate Test
The 6 seconds didn't seem like that much (relatively speaking), but it also seemed a bit high for what was going on. As it turned out, I wasn't eliminating all of the processing.

A big part of the processing was creating the bitmaps and putting them into the WrapPanel of the UI. So I went through and commented out the code that generated the UI elements and ran my test again. This time, the results were pretty surprising (even to me):


This processed the same number of records: 5000. The "Manhattan Classifier" took 164 seconds to complete, so it was a little bit faster. But the "Null Classifier" took no time at all (well, less than a second).

At first, I thought I may have taken out a bit too much code. Maybe the continuations weren't really running. But each record was processed, and we can see that the error count is the same as before: 4515. The error count is incremented in the continuation, so I knew that was still running.
So by eliminating the CPU-intensive operation and the UI-intensive code, I found that the Task overhead is negligible for this type of work.
That was good news to me. Performance is way better than single-threading (3 to 6 times better depending on the machine and the number of cores).

Caveats
It's very easy to mess up parallel programming. If we have interactions between the processes, things can get ugly very quickly. This code was designed to be parallelized from the ground up. This means that I made sure that each Task could be isolated and run without outside dependencies.

I also have a CPU-bound operation. This means that by using Task this way (and letting the Task Scheduler do its job of figuring out the best way to mete out the jobs), I can take advantage of multi-core machines. In fact, I just got a new laptop with 8 cores, and this code runs about twice as fast as on the 4 core machine I was using before.

I'm also running this on a reasonable number of records. In typical usage, this application would process 600 to 800 records -- the number that would fit on the screen. Even when I increased that 10-fold, I had good performance. If I were trying to process hundreds of thousands of records, I would expect this to break down pretty quickly.
For different scenarios, I'd definitely take a look at the advice in Parallel Programming with Microsoft .NET (my review). It has different patterns that we can follow depending on what our needs are.
Update: If you'd like to contribute your own ideas for improving this code, take a look at the next article, which describes the application goals: Your Ideas Needed: Other Ways to Run Code in Parallel. Feel free to submit a pull request.

Wrap Up
I really like using Task. Whenever I tried to do my own threading, bad things would generally happen. That's why I've been a big fan of the BackgroundWorker component in the past and of Task and await today. These abstractions give us a higher-level way of dealing with these concurrent processes. And we can leave it up to the compiler and .NET run-time designers to take care of the details. I'm happy to leave things to people who know way more than I do.

If you want to explore some more, be sure to take a look at the articles in the README.md of the "digit-display" project on GitHub, and also check out the articles and videos on getting started with Task and await.

Happy Coding!

Saturday, September 30, 2017

Using the Built-In Dependency Injection in ASP.NET Core 2.0

ASP.NET Core has a built-in Dependency Injection (DI) container that takes care of things for us with very little code. As an experiment, I added DI to the WebAPI service that I created a couple weeks ago. Let's walk through that process to see what we need to do.

The code is available in GitHub on the "di" branch of the person-api-core project.

Note: .NET Core 2.0 Articles are collected here: Getting Started with .NET Core 2.0.

The Goal
Our goal is to break some tight-coupling in our WebAPI controller. Right now, it relies on a static People class -- this is where the data comes from. Instead, I'd like the controller to rely on an abstraction (an interface in this case). This allows us to easily swap out the data source, and this is great for unit testing. (We'll talk about unit testing in upcoming articles.)

We'll break the tight-coupling by using constructor injection, just by adding a constructor parameter. Once we do that, we configure the ASP.NET Core DI container to resolve that parameter. That's it.

The Current State
The current state of our application isn't the greatest for breaking the coupling. There are a few things we'll need to do to get the code into better shape to support it.

The initial code is available on the "master" branch of the project: GitHub - jeremybytes/person-api-core.

Here's what our WebApi Controller looks like now (from Controllers/PeopleController.cs):


What I'd like to do is get rid of the reliance on the concrete "People" class (which is a static class at this point). What we really need is access to a "GetPeople()" method to retrieve our data. But where this particular method comes from isn't important to the controller itself. And this is one of the cool things about DI that we'll see in a bit.

Here's the current People class (from Models/People.cs):


This is a static class with a static method. And the data that's returned is just hard-coded. We'll have to make a few changes to this class. The main thing is that I want to implement an interface to give us an abstraction to work with. But static classes and static members don't work with interfaces. So some minor surgery will be necessary.
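The class itself isn't shown here; its shape, based on that description, is roughly this (the Person properties and values are placeholders, not the project's actual data):

    // Sketch: a static class with a static method returning hard-coded data.
    public static class People
    {
        public static List<Person> GetPeople()
        {
            return new List<Person>
            {
                // Placeholder entries -- the real class hard-codes its own values.
                new Person { Id = 1, GivenName = "John", FamilyName = "Koenig" },
                new Person { Id = 2, GivenName = "Dylan", FamilyName = "Hunt" },
            };
        }
    }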

Finally, we have the Startup class (in Startup.cs). Specifically, here's the ConfigureServices method:


This is the default that we get when we create a WebAPI project using the .NET Core 2.0 template. The services collection is where we can add our DI configuration. I won't go into the details of the ASP.NET Core pipeline (because there are lots of people more qualified than I am to talk about that). We'll see the changes we need to make to this method way at the end.
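For reference, the ConfigureServices method that the template generates contains a single registration:

    // Default ConfigureServices from the ASP.NET Core 2.0 WebAPI template.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }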

Removing the Static
As mentioned above, I'd like to use an interface to create an abstraction between the controller and the item that provides the data. Unfortunately, our data class "People" is currently a static class with a static method. So we'll have to change that first.

It's a pretty easy change to the "People" class. We just remove "static" from the class and the method:


But now our controller won't work. We'll need to make a few changes to our "PeopleController" class.


Since "People" is no longer static, we can't simply call "People.GetPeople()". I made a change that requires the fewest code updates. This is "new"ing up the "People" class and then immediately calling the "GetPeople" method on that instance.
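Each call site ends up looking something like this (the attribute and return type are my assumptions based on a typical WebAPI controller):

    [HttpGet]
    public IEnumerable<Person> Get()
    {
        // "new" up the class and immediately call the method on that instance.
        return new People().GetPeople();
    }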

I really don't like this syntax. I like to keep my "new" (creation) separate from method calls (usage). Since we don't have an intermediate variable that holds the item we created, we have no way to get to this instance of the "People" class again. It's not really a problem in this case, but I'd hate to get into the habit of it.

This is just a personal preference. There are samples on the Microsoft Docs site that use this syntax.

The "People" class is also used in the "Get(int id)" method. So I made a similar change here:


Once we get the interface in place, we'll make a few more changes to this class (and get rid of the syntax that's bothering me).

But the important thing right now is that we can build and run our WebAPI service. (For instructions on how to run/call the service, see the prior article about building a WebAPI service.) We don't have any looser coupling, but we did get rid of the static class and method that prevented us from moving forward.

Adding the Interface
Now that we have an instance class and method, we can create an interface for that. Here's the interface file that we created, Models/IPeopleProvider.cs (note that we've flipped over to the "di" branch in the GitHub project for this code):


I called this "IPeopleProvider". I'm not very happy with the naming. But as we all know, naming really sucks.

This interface only has one member "GetPeople()". This gives us the abstraction that we need to move forward.
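Reconstructed from that description, the interface is tiny (whether GetPeople returns List<Person> or IEnumerable<Person> is my guess):

    // Models/IPeopleProvider.cs -- the single-member abstraction.
    public interface IPeopleProvider
    {
        List<Person> GetPeople();
    }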

Now we just have to mark that the "People" class implements this interface:


We don't have to make any other changes since the People class already has the "GetPeople" method. But now, I want to change the name of the class.

Renaming a Bit
I want to change the name of the "People" class because it isn't descriptive enough now. With the changes that we're making, it's implied that we will have different implementations of the "IPeopleProvider" interface. So I want this particular implementation to be a bit more descriptive.

For this, we'll change the name of the class to "StaticPeopleProvider". So we'll rename the file and the class itself.



Again, I'm not completely happy with this name. Since the data coming from this class is hard-coded (as opposed to coming from a database, file, or service), I used "static" as part of the name. Hopefully, this isn't too confusing since we just removed the "static" attribute of the class. "HardCodedPeopleProvider" might be better, but that didn't sound right to me either.

Anyway, there's always room for improvement, particularly with naming.

Since we renamed that class, we'll also need to update the controller:


This ensures that our code still builds and runs. I really like to keep non-building code to a minimum. And if things still run, I have a lot more confidence that I haven't strayed too far.

This is another reason I like to have unit tests in place -- to make sure I haven't strayed too far. Unfortunately for this service and the changes that we're making to it, unit tests wouldn't help too much with that. We always have to find a good tool for the job.

Constructor Injection
Now that we have the pieces in place, we can write the code that will give us the loose coupling. For this, we'll use a pattern known as constructor injection.

We have a dependency on the "StaticPeopleProvider" class. In particular, we need to call the "GetPeople" method on that class in order for our code to work. Right now we're handling that dependency ourselves (by "new"ing up an instance of the class).

Rather than being responsible for that dependency, we'd like someone else to provide it for us. In this case, we'll ask someone else to provide the dependency as a constructor parameter.

Here are a couple updates to our "PeopleController" (in Controllers/PeopleController.cs):


We've done two things here. First, we've added a private field to hold our "IPeopleProvider". We don't care what the concrete type of this object is; all we care about is that it has a "GetPeople" method that we can call in our code. And that's exactly what the interface gives us.

Next, we've added a constructor to our class that takes an "IPeopleProvider" as a parameter. This means that whoever creates our controller class must also provide us with an instance of a class that implements that interface. We then assign that parameter to our private field.
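Put together, those two changes look something like this sketch (the readonly modifier and the field name are my choices):

    public class PeopleController : Controller
    {
        // Field to hold the abstraction; the controller never sees a concrete type.
        private readonly IPeopleProvider provider;

        // Whoever creates the controller must supply an IPeopleProvider.
        public PeopleController(IPeopleProvider provider)
        {
            this.provider = provider;
        }

        // ... Get methods shown in the next section ...
    }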

Using the Dependency
Now that we have the "provider" field in our class, we can update our methods to use it. (This will also get rid of that syntax I don't like.)

Here are the updated "Get" methods (also in Controllers/PeopleController.cs):


Instead of "new"ing up items, we use the "GetPeople" method on our private field. The result is that our controller class doesn't need to know anything about the "StaticPeopleProvider" class or any other concrete class. It only needs to know about the abstraction (the interface).
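A sketch of those updated methods (the body of "Get(int id)" assumes the Person class has an Id property, which is my assumption, and uses System.Linq):

    [HttpGet]
    public IEnumerable<Person> Get()
    {
        // No "new" here -- just use the injected provider.
        return provider.GetPeople();
    }

    [HttpGet("{id}")]
    public Person Get(int id)
    {
        // Assumes Person has an Id property; the real lookup may differ.
        return provider.GetPeople().FirstOrDefault(p => p.Id == id);
    }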

Injecting Dependencies
We haven't removed our dependencies: we still need an object with a "GetPeople" method on it. But instead of handling that dependency ourselves (in the controller), we're now injecting the dependency (hey, that's "Dependency Injection") through the constructor. And since we're injecting it through the constructor, we call this pattern "Constructor Injection".

Right now, our service will build, but it will not run. Our last step will be to configure the DI container.

Configuring the Container
Our controller is now all set up for dependency injection, but we need to get that dependency in there somehow. Since we're using the DI container that's built-in to ASP.NET Core, we just need to make a change to our Startup class (from Startup.cs):


In the "ConfigureServices" method, we just need to add one line of code. This code associates an abstraction (IPeopleProvider) with a concrete type (StaticPeopleProvider).
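Here's that one line in context (a sketch; the rest of the method is the template default shown earlier):

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // When something asks for an IPeopleProvider, hand it the single
        // shared StaticPeopleProvider instance.
        services.AddSingleton<IPeopleProvider, StaticPeopleProvider>();
    }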

The method "AddSingleton" means that we end up with one and only one instance of the "StaticPeopleProvider" class. That works in our case particularly since we have hard-coded data.

Other options include "AddTransient", which means that we get a new instance each time we ask for one, and "AddScoped", which means that we get a new instance for each HTTP Request. I won't go into those details because other folks have done a pretty good job with that.

Now that the container is configured, our code will build and run just fine. To run the service, we can just type "dotnet run" at a command prompt:


Then we can navigate to the service in the browser (in this case: http://localhost:9874/api/people):


It looks kind of magical, so let's take a step back to see how this works.

How It Works
It feels like some code is missing here, but the ASP.NET Core DI container takes care of the details for us. This is a big reason to use a container (and also why they can be confusing); they are a bit magical.

When we navigate to the service with the URL, the router figures out that we need an instance of the "PeopleController" class. When we look at the constructor, we find that we need a parameter:


The DI container says, "I need an IPeopleProvider. Do I know what that is?" It looks through the configuration and finds a mapping:


The DI container then says, "Okay, so if someone asks for an IPeopleProvider, I'm supposed to give them a StaticPeopleProvider."

Then it finds the StaticPeopleProvider:


This doesn't have an explicit constructor, so the DI container creates an instance of the StaticPeopleProvider using the default constructor. Then it passes that object to the constructor of the "PeopleController" class.

Once we have an instance of the "PeopleController" class, the appropriate "Get" method is called:


The "Get" method then returns that data based on the "StaticPeopleProvider" class that was injected through the constructor.

More Information
There are several good articles on ASP.NET Core dependency injection. The one on Microsoft Docs has quite a bit of information (though it might be a bit difficult if you're new to DI): Introduction to Dependency Injection in ASP.NET Core. This also shows how to swap out the built-in container for a third-party one (such as Autofac).

And Shawn Wildermuth has some great information (as usual): ASP.NET Core Dependency Injection.

And if you want to get a better grasp on the concepts behind Dependency Injection, I've got lots of articles on my website: DI Why? Getting a Grip on Dependency Injection. And I've also got a Pluralsight course on the subject: Dependency Injection On-Ramp.

Wrap Up
I'm glad to see that getting started with the built-in DI container in ASP.NET Core goes pretty quickly. I'm sure that there are quite a few nuances, and I'll be digging in a bit deeper. Constructor Injection works quite well. Property Injection is not currently supported, but other things such as Parameter Injection are. There's quite a bit more exploring to do.

Happy Coding!

Friday, September 8, 2017

Using Task with .NET Core 2.0 (Success, Error, Cancellation)

Tasks are a key part of asynchronous programming in the .NET world -- this includes .NET Core 2.0 which was released last month. Today we'll take a look at consuming an asynchronous method (one that returns a Task) and see how to perform additional work after it has completed (a continuation), how to deal with exceptions, and also how to handle cancellation.

Note: .NET Core 2.0 Articles are collected here: Getting Started with .NET Core 2.0.

Initial Setup
In the previous 2 articles, we've seen how to create a WebAPI service and also a console application that consumes that service. Both of these are built with .NET Core 2.0. If you'd like to follow along, you can download the projects on GitHub:

WebAPI Service: https://github.com/jeremybytes/person-api-core
Console Application: https://github.com/jeremybytes/task-app-core

These projects have the completed code.

I'm using the command-line interface (CLI) in a bash terminal along with Visual Studio Code on macOS. But this will also work on Windows 10 using PowerShell and Visual Studio Code. My environment includes (1) the .NET Core 2.0 SDK, (2) Visual Studio Code, and (3) the C# Extension for Visual Studio Code. You can check the Microsoft .NET Core 2.0 announcement for resources on getting the environment set up.

We'll be expanding on the console application that was described in the previous article. We won't focus on the .NET Core environment so much as on how to use Task.

The Asynchronous Method
The asynchronous method that we're using is in the task-app-core project (a console application with a couple of class files). We'll start by opening the project folder in Visual Studio Code. This is the "TaskApp" folder that we worked with last time.

Here's the method signature of the asynchronous method (from the PersonRepository.cs file):


The important bit here is that "GetAsync()" returns a Task. A Task represents a concurrent operation. It may or may not happen on a separate thread, but we don't normally need to worry about those details.

In this case, we can treat "Task<List<Person>>" as a promise: at some point in the future, we will have a list of Person objects (List<Person>) that we can do something with.

The parameter is a CancellationToken. This allows us to notify the process that we would like to cancel.

We won't look at the insides of this method here. The only thing that we need to know is that this method has an artificial 3 second delay built in. This is so that we can see the asynchronous nature of the method.

This method gets data from a WebAPI service (person-api-core). We'll make sure that service is running when we're ready to use it.
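Pulling those details together, the signature we're consuming looks like this (a reconstruction; the parameter name is my guess, and the body lives in PersonRepository.cs):

    public Task<List<Person>> GetAsync(CancellationToken cancellationToken)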

Running the Console Application
All of our work today will be done in the Program.cs file of the console application. The GitHub project has the completed code, but we'll be starting from scratch. If you want to follow along, you can get the GitHub files and then remove everything in the Program.cs file.

Here's how we'll start out:


This will print something to the console and then wait for us to press "Return" before it exits.
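A minimal sketch of that starting point (the namespace and the exact initial message are guesses; "One Moment Please..." is the text that shows up in the output later in the article):

    using System;

    namespace TaskApp
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("One Moment Please...");

                // Keep the console open until the user presses "Return".
                Console.ReadLine();
            }
        }
    }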

From the command line, just type

     dotnet run

to build and run the application:


If you see something different than this, it may be because the files are not saved. I'm used to Visual Studio automatically saving files whenever I build, so I've spent a bit of time trying to figure out why my code changes aren't working. It's usually because I forgot to save the files before running the application.

Starting the Service
Before going any further, let's fire up the service in another terminal window. To do this, we just need to navigate to the folder that contains the WebAPI project ("PersonAPI" from the prior article).

Then we just need to type

     dotnet run

to start the service:


Now we can find our data at http://localhost:9874/api/people.

Using a Task (Success Path)
Now we'll flip back to the console application (make sure you leave the service terminal window open). Let's consume the asynchronous method that we looked at above.

We need to create an instance of the "PersonRepository" class and then call the "GetAsync" method. Here's some initial code:


For now, I'll use "var result" to grab the return from the method. We'll give this a better name in just a bit. But before that, let's take a look at the parameter.

The "GetAsync" method needs a CancellationToken parameter, but we don't really care about cancellation at the moment. Rather than passing in a "null" or creating a token that we won't use, we can use the static property "CancellationToken.None". This is designed so that we can just ignore cancellation.

Adding Using Statements
But when we type this into our code file, we see that "CancellationToken" does not resolve. In the .NET Core world, we create new class files that are empty; they don't have any "using" statements at the top. So we get to add them ourselves. But the C# Extension in Visual Studio Code will help us out a bit.

If we put our cursor on "CancellationToken", we see a lightbulb offering us help. You can press "Cmd+." on macOS (or "Ctrl+." on Windows, the same as Visual Studio) to activate the popup. You can also click on the lightbulb to get the popup:


The option "using System.Threading" will add the appropriate "using" statement to the top of our file. Since we're mostly starting with empty files, I find myself using this feature quite a bit.

Fixing the Result
I don't like using "var" to capture the return in this scenario because the return type isn't clear by just looking at the code. We can always hover our mouse over the "var" keyword to see the actual type, but I like the code to be readable without needing the extra assistance. So we'll change this to be more explicit with "Task<List<Person>>" and also by renaming "result" to "peopleTask".


This gives us a better idea of what is going on just by looking at the code.
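So the call ends up looking something like this (the repository variable name is mine):

    var repository = new PersonRepository();
    Task<List<Person>> peopleTask = repository.GetAsync(CancellationToken.None);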

Adding a Continuation
We can run the application now, but we'll get the same result that we had earlier. We're calling the "GetAsync" method, but we aren't doing anything to get the results out of it.

Task has a "Result" property, and it's really tempting to use it directly. The problem with using it directly is that it may not be populated yet. If we try to access the "Result" property before it is populated, then the current thread will be blocked until it is populated.

How do we get the result without blocking the current thread? We set up a continuation.

A continuation is a delegate that will run after the Task is complete. We can set this up with the "ContinueWith" method on the task. And this is where things get interesting:


Notice that the parameter for "ContinueWith" is "Action<Task<List<Person>>>". Yikes! That's a bit of a mouthful. (As a side note, there are many overloads for this method; we'll see another in a bit.)

"Action" represents a delegate that returns "void". In this case, the delegate takes one parameter of type "Task<List<Person>>". (For more information on Delegates, check out Get Func<>-y: Delegates in .NET.)

To fulfill this delegate parameter, we just need to create a method that matches the method signature:


This method returns "void" and takes "Task<List<Person>>" as a parameter.

Inside this method, we can access the "Result" property safely. That's because this code will not run until *after* the Task has completed. So we know we won't inadvertently block the thread waiting for that. (We will still need to deal with exceptions, but we'll see that in a bit.)

"Result" is "List<Person>" in this case. I'm putting it into an explicit variable of that type to help with readability. After we have that, we just loop through the items and output them to the console. The "Person" class has a good "ToString()" method, so we don't have to do any custom formatting for basic output.

Then we just need to use "PrintNames" as the parameter for the "ContinueWith" call.
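Here's a sketch of that continuation method and the hookup, reconstructed from the description above:

    static void PrintNames(Task<List<Person>> task)
    {
        // Safe to read Result here: a continuation only runs after the task completes.
        List<Person> people = task.Result;

        foreach (var person in people)
        {
            // Person has a good ToString(), so no custom formatting is needed.
            Console.WriteLine(person);
        }
    }

    // ...and back in Main:
    peopleTask.ContinueWith(PrintNames);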

Here's our current "Program.cs":


This is enough to show our "success" path. To run the application, we'll go back to the command line (the one in the "TaskApp" folder).

Just type

     dotnet run

to see the result:


This console will display "One Moment Please..." and then after 3 seconds, it will list out the names that came from the service.

Just hit "Return" to exit the console application.

Because things are running asynchronously, the "ReadLine" method will run immediately. This means that if we press "Return" after "One Moment Please..." appears (but before the names appear), the application will exit and we will not see the names listed. (It will also exit if we press it *before* "One Moment Please..." appears because the console input gets cached until something uses it.)

Cleaning Things Up (and a Lambda Expression!)
I don't like the idea of needing to press "Return" to close the console application. If the data is printed successfully, then I'd like it to automatically exit. We can do this with the "Environment.Exit()" method:


The parameter is the exit code. "0" denotes no error, while other values are generally used to show that something went wrong.
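In this sketch, the call goes at the end of the success continuation (the "PrintNames" method):

    // Last line of PrintNames: exit with "0" (success) once the names have printed.
    Environment.Exit(0);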

Now when we run the application, we do not need to press "Return" after the names are printed:


Moving the Delegate to a Lambda Expression
The other thing I'd like to do is change the named delegate into a lambda expression. I've talked about lambda expressions many times, so I'll let you refer to those resources. In this case, the lambda expression is more-or-less an inline delegate.


Instead of using "PrintNames" as the parameter for "ContinueWith", we can use a lambda expression. The first part, "task", is the parameter for the delegate. This is the same parameter from "PrintNames", so it is a "Task<List<Person>>" (the compiler knows that so we don't need to type it explicitly). Then we have the lambda operator =>. And after that is the body of our lambda expression.
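Here's how that in-lined version might look:

    peopleTask.ContinueWith(task =>
    {
        foreach (var person in task.Result)
        {
            Console.WriteLine(person);
        }
        Environment.Exit(0);
    });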

In this case, we can use the separate delegate or the in-lined lambda expression and get the same results. There are a few advantages to lambdas that are helpful with Task (such as captured variables). The main reason I encourage people to learn how to use lambdas here is because that's what you'll see in the wild when looking at code.

Breaking the Application
What we have now works -- as long as nothing goes wrong. Once we have something unexpected happen, things go a bit wonky.

Let's shut down the service to see how our application handles it. Just go back to the console window where the service is running and press "Ctrl+C" to stop the service:


Now when we run the console application, it just hangs:


Since it's hard to tell what's going on, we're going to use Visual Studio Code for some debugging. We'll go to the "Debug" menu and choose "Start Debugging" (or press F5). This will stop on the actual error:


We can dig right into the middle and see that the error is "Couldn't connect to server". That makes sense since the server is no longer running.

But notice where we got this error. We actually get this error when we try to access the "Result" property on the Task. When an exception occurs inside Task code, the Task goes into a "faulted" state. When a Task is "faulted", then the "Result" property is invalid. So if we try to look at "Result" on a faulted Task, we'll get the exception instead.

Notice that the exception is an "AggregateException". This is a special exception that can hold other exceptions. If you'd like further details, you can take a look at this article: Task and Await: Basic Exception Handling.

Limiting Damage
We don't want to access the "Result" property on a faulted task, so we'll limit when our continuation runs. The "ContinueWith" method has many overloads, and a handy one takes "TaskContinuationOptions" as another parameter:
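A sketch of the call with that extra parameter:

    peopleTask.ContinueWith(task =>
        {
            foreach (var person in task.Result)
            {
                Console.WriteLine(person);
            }
            Environment.Exit(0);
        },
        TaskContinuationOptions.OnlyOnRanToCompletion);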


"TaskContinuationOptions" is an enum. The "OnlyOnRanToCompletion" value means that this continuation will only run if the Task completes successfully. That takes care of our initial problem.

But our application still behaves the same way. If we run the application, it just hangs:


And if we use the debugger in Visual Studio Code, we don't see the exception since we're not doing anything to look for it.

Exception Handling (Error)
We've limited the damage, but we still want our application to behave nicely if something goes wrong. For this we'll add another continuation:


Now we have a continuation that's marked "OnlyOnFaulted". This will only run if the Task is in a faulted state. We can check the "Exception" property of the Task for details, but we'll just print out a "something went wrong" message here. For details on getting into the exception, check the Basic Exception Handling article mentioned above.

Another thing to note is that we're using an exit code of "1". We haven't defined what the different exit codes mean, but as noted, anything other than "0" denotes that something went wrong.
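A sketch of that faulted-only continuation (the message text is mine):

    peopleTask.ContinueWith(task =>
        {
            // task.Exception holds the AggregateException if we want the details.
            Console.WriteLine("Something went wrong; please try again later.");
            Environment.Exit(1);   // non-zero exit code = something went wrong
        },
        TaskContinuationOptions.OnlyOnFaulted);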

When we run the application now, it lets us know that something went wrong:


And if we start the service back up, we can see that the happy path still works for our application:


Is This Really Asynchronous?
It's hard to tell whether this code is really asynchronous. We saw above that if we press "Return" before the data returns, the application will exit. That shows us that things are still processing. But let's add a bit of code to see things more clearly.

At the bottom of the application, we'll add a few more "ReadLine"/"WriteLine" calls:


Now when we run our application, we can press "Return" three times, and we'll see the messages printed out:


If we press "Return" four times, then the application will exit. Let's add one more message to the code:


Then if we press "Return" four times, the application will exit before it prints out the names:


This is a bit of a simple demonstration, but it shows that our application is still running and processing input while the asynchronous method is running.

Stopping the Process Early (Cancellation)
The last thing we want to look at today is how to deal with cancellation. Our asynchronous method is already set up to handle cancellation -- that's what the CancellationToken parameter is for. On the consumer side, we need to be able to request cancellation and also deal with the results.

It's really tempting to simply new up a "CancellationToken" and pass it as a parameter to our method. But that will not work the way we want it to. That's because once we create a CancellationToken manually, there is no way for us to change the state. That's not very useful.

CancellationTokenSource
But we can use a "CancellationTokenSource" instead. This is an object that manages a cancellation token.

Here's how we'll update our code:


At the top of our "Main" method, we create a new "CancellationTokenSource" object. Then when we call the "GetAsync" method, instead of passing in "CancellationToken.None", we'll pass in the "Token" property of our token source.

This gets the cancellation token to our asynchronous method. Now we need to set the token to a cancelled state.
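Those two updates look something like this (the tokenSource variable name is mine):

    var tokenSource = new CancellationTokenSource();
    // ...
    Task<List<Person>> peopleTask = repository.GetAsync(tokenSource.Token);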

Cancelling
For our console application, we'll give our user a chance to cancel the operation with the "X" key. We'll look for an "X" with the "ReadKey" method. This won't operate exactly how we would like, but we will clean that up in a little bit.

For now, we'll add a "ReadKey" just above our "ReadLine"/"WriteLine" methods:


If someone presses "X" (and it doesn't matter if it is upper case or lower case), then we call the "Cancel()" method on the CancellationTokenSource. This will put the cancellation token into a "cancellation requested" state. From there, it's up to the asynchronous method to figure out how it wants to handle that request.
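A sketch of that check -- Console.ReadKey reports "X" and "x" as the same ConsoleKey value, which gives us the case-insensitive behavior:

    // Wait for a key; if it's "X" (either case), request cancellation.
    if (Console.ReadKey().Key == ConsoleKey.X)
    {
        tokenSource.Cancel();
    }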

Handling a Canceled Task
When a Task is canceled, it is put into a "canceled" state. This is different from the "faulted" state and the "ran to completion" state. With our current code, this means that we need another continuation to deal with cancellation.

Here's that continuation:


This is marked to run "OnlyOnCanceled", and we'll print a message to the output.
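Something like this (the message text is mine):

    peopleTask.ContinueWith(task =>
        {
            Console.WriteLine("The operation was canceled before it completed.");
        },
        TaskContinuationOptions.OnlyOnCanceled);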

Testing Cancellation
With the pieces in place, we can run the application and try out cancellation. After "One Moment Please..." appears, press the "X" key. There will still be a 3 second pause because the asynchronous method doesn't check for cancellation until after the 3 second delay.

But then we should see our "canceled" message:


And if we don't press any keys, we'll see the success path:


This lets us see that cancellation is working, but we can't really get to our "Waiting..." logic anymore. We'll have to do some modification for that.

Better User Interaction
It's usually a challenge to get a good user experience with a console application, but we'll give it a shot. Here's the approach that I took for this code:


This replaces the previous "if (ReadKey..)" and "ReadLine"/"WriteLine" code. The endless loop lets us press "X" to cancel, "Q" to quit, and "Return" to get a "Waiting..." message. It's still a bit odd, but it handles the bulk of this application's needs.
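Here's a rough sketch of that loop; the version in the repository's Program.cs differs in some details:

    // Endless input loop: "X" cancels, "Q" quits, "Return" prints "Waiting..."
    while (true)
    {
        var key = Console.ReadKey(true).Key;

        if (key == ConsoleKey.X)
        {
            tokenSource.Cancel();
        }
        else if (key == ConsoleKey.Q)
        {
            Environment.Exit(0);
        }
        else if (key == ConsoleKey.Enter)
        {
            Console.WriteLine("Waiting...");
        }
    }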

The only other code update is to change the initial message:


Now we can test our interactivity. First, we'll press "X" for cancellation:


Then we'll try "Q" to quit:


And then we'll try "Return" a few times for the "Waiting..." message:


And if we do nothing, we'll still get the same results that we saw earlier.

Not a Ton of Code
We've implemented quite a bit of functionality here. We're consuming a method asynchronously; we're processing results when it completes; we're dealing with exceptions; and we can cancel the operation before it completes.

Here's the entire console application:


You can also look at this code on GitHub: Program.cs. The online code is a little different because I extracted out some separate methods for readability.

As an alternative, we can also create a single continuation and have code that branches based on the state of the Task (success, faulted, canceled). For more information, you can take a look at "Checking IsFaulted, IsCompleted, and TaskStatus".

Wrap Up
The good news is that using Task with .NET Core 2.0 is not much different from using it with the full .NET framework. The main challenges with porting over the full-framework code (available here: GitHub: jeremybytes/using-task) have to do with the UI demonstration. It's a bit easier to show some features when we have a desktop UI application, but we can still see the features in a console application.

Showing "await" with a console application is a bit more of a challenge. "Await" works just fine (with C# 7.1 which allows us to have an "async Main()" method), but since the code looks like blocking code, it makes it harder to show the asynchronous interactions. I'm still working on a demonstration for that.

If you want more information on Task and Await, be sure to check out my materials online: I'll Get Back to You: Task, Await, and Asynchronous Methods. This includes articles, code, videos, and much more. And also be on the lookout for my live presentations on the topic. I'm also available for workshops where we spend a full day on the topic of asynchronous programming.

I'm still exploring in the .NET Core 2.0 world. And I'm sure that there will be much more fun to come.

Happy Coding!