Wednesday, February 28, 2024

Continue Processing with Parallel.ForEachAsync (even when exceptions are thrown)

Parallel.ForEachAsync is a very useful tool for running code in parallel. Recently, we have been exploring what happens when an exception is thrown inside the loop:

  1. If we "await" ForEachAsync, then we get a single exception (even if exceptions are thrown in multiple iterations of the loop).
  2. The loop short-circuits -- meaning not all items are processed.

In this series of articles, we look at these issues and how to deal with them.

Code samples for all articles are available here: https://github.com/jeremybytes/foreachasync-exception.

In the last article, we saw how to deal with behavior #1 by getting all of the available exceptions. In this article, we will look at behavior #2 -- short-circuiting. We can eliminate the short-circuiting of the loop so that all of the items are processed, and we can collect the exceptions along the way.

Short Version:

Handle exceptions inside the body of ForEachAsync

For slides and code samples on Parallel.ForEachAsync (and other parallel approaches), you can take a look at the materials from my full-day workshop on asynchronous programming: https://github.com/jeremybytes/async-workshop-2022. (These materials use .NET 6.0. Updates for .NET 8.0 are coming in a few months.) For announcements on public workshops, check here: https://jeremybytes.blogspot.com/p/workshops.html.

Prevent Short-Circuiting

Parallel.ForEachAsync will stop processing if it encounters an unhandled exception. This is the behavior that we've seen in our other examples. The result is that we only process 17 of the 100 items in our loop.

Since an unhandled exception is the cause of the short-circuit, we can continue processing by eliminating that unhandled exception.

And an easy way to eliminate an unhandled exception is to handle it.

Try/Catch inside the ForEachAsync Body

Here is the updated code (from the "doesnt-stop/Program.cs" file):

    await Parallel.ForEachAsync(Enumerable.Range(1, 100),
        new ParallelOptions() { MaxDegreeOfParallelism = 10 },
        async (i, _) =>
        {
            try
            {
                Console.WriteLine($"Processing item: {i}");
                await Task.Delay(10); // simulate async task
                MightThrowException(i);
                Interlocked.Increment(ref TotalProcessed);
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Caught in Loop: {ex.Message}");
            }
        });

Here we have a try/catch block inside the body of ForEachAsync. If an exception is thrown, it is handled inside of the loop. From ForEachAsync's perspective, there are no unhandled exceptions, so it continues processing.

Output

In the output, all 100 items are processed -- or at least attempted. Here is the last part of the output:


Caught in Loop: Bad thing happened inside loop (87)
Processing item: 92
Processing item: 93
Processing item: 94
Processing item: 95
Caught in Loop: Bad thing happened inside loop (84)
Processing item: 96
Caught in Loop: Bad thing happened inside loop (81)
Processing item: 97
Processing item: 98
Processing item: 99
Processing item: 100
Caught in Loop: Bad thing happened inside loop (99)
Caught in Loop: Bad thing happened inside loop (93)
Caught in Loop: Bad thing happened inside loop (96)

Total Processed: 67
Total Exceptions: 33
Done (Doesn't Stop for Exceptions)

All 100 items were processed. 67 were successful and 33 of them failed -- this is what we expect based on our method that throws exceptions.

Observations

With this approach, we do not have to deal with AggregateException. Instead, we handle the individual exceptions as they occur. This could include logging or retrying the operation.

Because we have a standard try/catch block, we get the full exception (including stack trace and other information). We can log this information if we need to investigate further.

Since the loop does not stop, we do not need to worry about where the loop left off after a short-circuit. All of the items have a chance to be processed.

We do need to worry about concurrency. The body of the catch block could be running for multiple items at the same time. So we need to ensure that our logging methods and any other processing in the catch block are thread-safe.
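
As a minimal sketch (not code from the repository), one way to keep the catch block thread-safe is to collect the caught exceptions in a concurrent collection and deal with them after the loop completes. This assumes a ConcurrentQueue<Exception> and a "using System.Collections.Concurrent;" directive:

    ConcurrentQueue<Exception> errors = new();

    await Parallel.ForEachAsync(Enumerable.Range(1, 100),
        new ParallelOptions() { MaxDegreeOfParallelism = 10 },
        async (i, _) =>
        {
            try
            {
                await Task.Delay(10); // simulate async task
                MightThrowException(i);
                Interlocked.Increment(ref TotalProcessed);
            }
            catch (Exception ex)
            {
                errors.Enqueue(ex); // ConcurrentQueue is safe for parallel writers
            }
        });

    // After the loop, we can log or retry the failed items on a single thread.
    Console.WriteLine($"Collected Exceptions: {errors.Count}");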

Wrap Up

Ultimately, the approach we take depends on the specific needs of the process. But we do need to keep the default behavior of Parallel.ForEachAsync in mind:

  1. If we "await" ForEachAsync, then we get a single exception (even if exceptions are thrown in multiple iterations of the loop).
  2. The loop short-circuits -- meaning not all items are processed.

If getting an exception in the ForEachAsync loop is truly exceptional (meaning it really should never happen), then the default behavior may be fine (as we saw in the first article). An exception is a catastrophic error that stops the process and lets us know that (at least) one item failed.

It may be that short-circuiting is okay (because we need to restart the loop from the beginning in case of failure), but we still want more information about what happened. We can get all of the available exceptions by either using a continuation or by using ConfigureAwaitOptions (as we saw in the last article).

If we want to continue processing even if some of the items fail, then we can take the approach from this article -- put a try/catch block inside the body of ForEachAsync.

Whatever approach we take, it is always good to know that there are options. Each application has its own needs. Our job as programmers is to pick an option that works well for the application. So keep looking at options; you never know when you'll need a particular one.

Happy Coding!

Tuesday, February 27, 2024

Getting Multiple Exceptions from Parallel.ForEachAsync

Parallel.ForEachAsync is a very useful tool for running code in parallel. Last time, we looked at what happens when an exception is thrown inside of the loop:

  1. If we "await" ForEachAsync, then we get a single exception (even if exceptions are thrown in multiple iterations of the loop).
  2. The loop short-circuits -- meaning not all items are processed.

Depending on what we are doing in the parallel loop, these items may not be a concern. But there are situations where I would like to get all of the exceptions back; and there are times when I would like to capture the exceptions and continue processing.

In this series of articles, we look at these issues and how to deal with them.

Code samples for all articles are available here: https://github.com/jeremybytes/foreachasync-exception.

In this article, we will take on the first issue: how to get all available exceptions from the loop. The number of exceptions that we get depends on the second issue. Since the loop short-circuits, we end up with some exceptions, but not as many as if the loop were to complete normally. (The third article in the series shows how we can keep the loop from short-circuiting.)

2 Approaches

We will look at 2 approaches to getting the available exceptions.

  1. Using a Continuation
    With this approach, we look at the exceptions in a continuation instead of letting them bubble up through the ForEachAsync method/Task.

  2. Using ConfigureAwaitOptions.SuppressThrowing
    This uses a feature added in .NET 8: ConfigureAwaitOptions. With this approach, we suppress the exceptions in the loop and then use a "Wait" to show the aggregate exception. I came across this approach in Gérald Barré's article "Getting all exceptions thrown from Parallel.ForEachAsync".

Both approaches give access to the entire set of exceptions. The differences are subtle, and we'll look at those after we've seen both approaches.

A Reminder

As a reminder, here is how we left our code in the previous article (from the original-code/Program.cs file):

    try
    {
        await Parallel.ForEachAsync(
            Enumerable.Range(1, 100),
            new ParallelOptions() { MaxDegreeOfParallelism = 10 },
            async (i, _) =>
            {
                Console.WriteLine($"Processing item: {i}");
                await Task.Delay(10); // simulate async task
                MightThrowException(i);
                Interlocked.Increment(ref TotalProcessed);
            });
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception: {ex.Message}");
    }

Every 3rd iteration of the loop throws an exception. This causes the ForEachAsync loop to short-circuit (after 17 items, generally). Because of the "await" on ForEachAsync, only 1 of the exceptions is shown (out of 5, generally).

Using A Continuation

For the first approach, we will use a continuation. Instead of letting the exceptions bubble up through the ForEachAsync method call, we add a continuation that runs some code if there is an exception. From there, we can gracefully handle the exception(s).

Here is the code for that (from the "continuation/Program.cs" file):

    try
    {
        await Parallel.ForEachAsync(Enumerable.Range(1, 100),
            new ParallelOptions() { MaxDegreeOfParallelism = 10 },
            async (i, _) =>
            {
                Console.WriteLine($"Processing item: {i}");
                await Task.Delay(10); // simulate async task
                MightThrowException(i);
                Interlocked.Increment(ref TotalProcessed);
            })
            .ContinueWith(task =>
            {
                if (task.IsFaulted)
                    Console.WriteLine($"Exception: {task.Exception!.Message}");
            });
    }

After calling "Parallel.ForEachAsync", we add a continuation by calling ".ContinueWith". This specifies code that we want to run after a Task has completed.

The parameter of "ContinueWith" is a delegate that has the code we want to run when the Task is complete. The "task" parameter represents the ForEachAsync task itself. 

Because we have access to this task, we can check the "IsFaulted" property. A task is faulted if an exception was thrown in the task (which is exactly what we are expecting here). If an exception was thrown, then we output its message to the console.

Output

When we run the code, we get the following output:


Processing item: 9
Processing item: 4
Processing item: 1
Processing item: 6
Processing item: 5
Processing item: 8
Processing item: 3
Processing item: 10
Processing item: 2
Processing item: 7
Processing item: 11
Processing item: 16
Processing item: 12
Processing item: 13
Processing item: 14
Processing item: 15
Processing item: 17
Exception: One or more errors occurred. (Bad thing happened inside loop (6)) (Bad thing happened inside loop (3)) (Bad thing happened inside loop (9)) (Bad thing happened inside loop (15)) (Bad thing happened inside loop (12))

Total Processed: 12
Total Exceptions: 5
Done (AggregateException from Continuation)

This shows us the 17 items that were processed, and the Exception message "One or more errors occurred" along with additional information. This is a typical message from an AggregateException.

For more specifics on AggregateException, you can take a look at "Task and Await: Basic Exception Handling" and "'await Task.WhenAll' Shows One Exception - Here's How to See Them All".

An AggregateException contains all of the exceptions that were thrown in a Task. It has a tree structure of inner exceptions that let it handle various complexities of Task (including concurrent tasks and child tasks).

The Original Catch Block Does Nothing

The call to Parallel.ForEachAsync is inside a "try" block (just like in our original code). There is still a "catch" block, but it is not used in this scenario.

Here's the updated "catch" block:

    catch (Exception)
    {
        // You never get to this code. Exceptions are handled
        // inside the ForEachAsync loop.

        // But just in case, rethrow the exception
        Console.WriteLine("You shouldn't get here");
        throw;
    }

As the comment says, we should never get to this catch block. But just in case we do, we rethrow the exception. In this case, it would result in an unhandled exception and the application would crash.

But why won't we get to this "catch" block?
Our code has changed in a subtle way. In the original code, we used "await" on the ForEachAsync method/Task. When we "await" a faulted Task, the AggregateException is not thrown; instead, one of the inner exceptions is thrown. (This is what we saw in the previous article.)

The difference here is that we no longer "await" the ForEachAsync method. Instead, we "await" the continuation task returned from "ContinueWith". The code in our continuation is not likely to throw an exception, so we will not hit the "catch" block.
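
Here is a minimal sketch (separate from the repository code) that shows the difference. Awaiting a continuation of a faulted task does not throw; the exception only surfaces through the "task" parameter:

    // A deliberately faulted task (a stand-in for a failed ForEachAsync call)
    Task faulted = Task.FromException(new Exception("boom"));

    // Awaiting the continuation does not throw -- the continuation itself
    // completes successfully, so no catch block is needed here.
    await faulted.ContinueWith(task =>
    {
        if (task.IsFaulted)
            Console.WriteLine(task.Exception!.InnerExceptions.Count); // 1
    });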

Displaying the Inner Exceptions

So we now have access to the AggregateException. Let's display the inner exceptions. Here is a method to do that (from the "continuation/Program.cs" file):

    private static void ShowAggregateException(AggregateException ex)
    {
        StringBuilder builder = new();

        var innerExceptions = ex.Flatten().InnerExceptions;
        builder.AppendLine("======================");
        builder.AppendLine($"Aggregate Exception: Count {innerExceptions.Count}");

        foreach (var inner in innerExceptions)
            builder.AppendLine($"Continuation Exception: {inner!.Message}");
        builder.Append("======================");

        Console.WriteLine(builder.ToString());
    }

This flattens the inner exceptions (to get rid of the tree structure), loops through each exception, and builds a string to display on the console.

We just need to adjust our continuation to call this new method:

    .ContinueWith(task =>
    {
        if (task.IsFaulted)
            ShowAggregateException(task.Exception);
    });

Now we get the following output:

Processing item: 9
Processing item: 2
Processing item: 6
Processing item: 1
Processing item: 3
Processing item: 8
Processing item: 7
Processing item: 4
Processing item: 10
Processing item: 5
Processing item: 11
Processing item: 12
Processing item: 16
Processing item: 14
Processing item: 15
Processing item: 13
Processing item: 17
======================
Aggregate Exception: Count 5
Continuation Exception: Bad thing happened inside loop (6)
Continuation Exception: Bad thing happened inside loop (3)
Continuation Exception: Bad thing happened inside loop (9)
Continuation Exception: Bad thing happened inside loop (12)
Continuation Exception: Bad thing happened inside loop (15)
======================

Total Processed: 12
Total Exceptions: 5
Done (AggregateException from Continuation)

Now we can see all 5 of the exceptions that were thrown in our loop. In this case, we are simply putting the exception message on the console. But we do have access to each of the full exceptions, including the stack traces. So we have access to the same information as if we were to "catch" them.

This shows all of the exceptions that were thrown by using a continuation. Now let's look at another way to get the exceptions.

Using ConfigureAwaitOptions.SuppressThrowing

I came across this approach in Gérald Barré's article "Getting all exceptions thrown from Parallel.ForEachAsync". You can read his original article for more details.

This code is in the "configure-await-options" project in the code repository.

An Extension Method

This approach uses an extension method to suppress throwing exceptions on the awaited task (to keep the AggregateException from getting unwrapped). Then it waits on the Task to get the AggregateException directly.

Note: This uses a feature added in .NET 8: ConfigureAwaitOptions. So, this approach will not work for earlier versions of .NET.

Here is the extension method (from the "configure-await-options/Program.cs" file):

public static class Aggregate
{
    internal static async Task WithAggregateException(this Task task)
    {
        await task.ConfigureAwait(ConfigureAwaitOptions.SuppressThrowing);
        task.Wait();
    }
}

This extension method takes a Task as a parameter and returns a Task. So we can think of this as a decorator of sorts -- it modifies the functionality of a Task in a way that is fairly transparent to the caller.

At the heart of this is the "ConfigureAwait" call. "ConfigureAwait" got a new parameter in .NET 8: ConfigureAwaitOptions. If you want the details, I would recommend reading Stephen Cleary's article: ConfigureAwait in .NET 8.

In this code, "SuppressThrowing" will keep the Task from throwing an exception when it is awaited. If we "await" a task that has this option set, an exception will not be thrown at the "await".

The next line of the method calls "Wait" on the task. Generally, we want to avoid "Wait" because it is a blocking operation. But since we call "Wait" on a task that has already been awaited, we do not need to worry about this. The task is already complete.

But when we call "Wait" on a faulted task, the exception is thrown. And in this case, it is the full AggregateException (rather than the unwrapped inner exception that we normally get with "await").

Using the Extension Method

To use the extension method, we tack it onto the end of the Parallel.ForEachAsync call (also in the "configure-await-options/Program.cs" file):

    try
    {
        await Parallel.ForEachAsync(Enumerable.Range(1, 100),
            new ParallelOptions() { MaxDegreeOfParallelism = 10 },
            async (i, _) =>
            {
                Console.WriteLine($"Processing item: {i}");
                await Task.Delay(10); // simulate async task
                MightThrowException(i);
                Interlocked.Increment(ref TotalProcessed);
            }).WithAggregateException();
    }
    catch (AggregateException ex)
    {
        ShowAggregateException(ex);
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception: {ex.Message}");
    }

Notice that "WithAggregateException" is called on the "ForEachAsync" task. This means that the Task that is awaited at the top is the Task that has been modified by the extension method.

So instead of throwing an individual exception, this throws an AggregateException.

You can see that we have an additional "catch" block for AggregateException, and this uses the same "ShowAggregateException" method from the other example.

Output

The output is similar to the other example:

Processing item: 10
Processing item: 7
Processing item: 9
Processing item: 2
Processing item: 4
Processing item: 6
Processing item: 8
Processing item: 5
Processing item: 1
Processing item: 3
Processing item: 11
Processing item: 15
Processing item: 12
Processing item: 13
Processing item: 14
Processing item: 16
Processing item: 17
======================
Aggregate Exception: Count 5
Continuation Exception: Bad thing happened inside loop (6)
Continuation Exception: Bad thing happened inside loop (9)
Continuation Exception: Bad thing happened inside loop (3)
Continuation Exception: Bad thing happened inside loop (12)
Continuation Exception: Bad thing happened inside loop (15)
======================

Total Processed: 12
Total Exceptions: 5
Done (AggregateException from ConfigureAwaitOptions)

Now we can see all 5 of the exceptions that were thrown in our loop. As with the first example, we have access to each of the full exceptions, including the stack traces.

Differences in the Approaches

These approaches both accomplish the same thing. They give us access to all of the exceptions that were thrown inside the ForEachAsync loop. Here are some things to keep in mind.

Availability

Using a Continuation is available back to .NET 6 (technically, it goes back further, but ForEachAsync only goes back to .NET 6, so that's really what we care about).

ConfigureAwaitOptions is new in .NET 8. So this will work great going forward, but does not work with prior versions of .NET.

Which approach you prefer depends on what version of .NET you use. If your code is .NET 8, then either approach will work.

Throwing Exceptions

Using a Continuation does not throw the AggregateException. Instead, the AggregateException is cracked open and examined in the continuation. This is also why we do not hit the "catch" block that we have in the code.
Note: With this approach, the outer AggregateException is never thrown, so we do not get the stack trace and some other elements that are filled in when an exception is thrown. All of the inner exceptions (the ones we care about here) *have* been thrown, so they do have the stack trace and other information.

ConfigureAwaitOptions does throw the AggregateException. When "task.Wait()" is called, the AggregateException is thrown. This is caught in the appropriate "catch" block where we can examine it.

Which approach you prefer will depend on your philosophy when it comes to exceptions and exception handling.

Throwing an exception is fairly expensive computationally. So some developers try to avoid throwing exceptions if processing can be handled another way.

On the other hand, some developers prefer to throw exceptions so that they can follow a consistent try/catch pattern throughout their code. This gives consistency and takes advantage of the chain-of-responsibility set up by the exception handling mechanism.

A Missing Feature

Both of these approaches give us access to all of the exceptions that were thrown. However, they do not address the short-circuiting problem. Even though our loop is set up to process 100 items, it stops processing after 17. As noted in the previous article, this is due to the way that Parallel.ForEachAsync is designed.

This missing feature may or may not be an issue, depending on what your code is trying to do.

Bottom Line

Both of these approaches accomplish our mission of getting all of the exceptions thrown in Parallel.ForEachAsync.

Wrap Up

Once we start diving into exception handling, we find that there are different options and different approaches. This includes addressing the default behavior of Parallel.ForEachAsync:

  1. If we "await" ForEachAsync, then we get a single exception (even if exceptions are thrown in multiple iterations of the loop).
  2. The loop short-circuits -- meaning not all items are processed.

With both of the approaches shown in this article, we have solved for behavior #1: we get access to the AggregateException of the ForEachAsync task. With this information, we can find all of the errors that occurred in the loop.

If this is all that we are concerned about, then we are good to go. But if we are concerned about item #2 (the loop short-circuiting), we need to look at another approach. And that is what we will do in the next article.

Happy Coding!

    Monday, February 26, 2024

    Parallel.ForEachAsync and Exceptions

    Parallel.ForEachAsync is a very useful tool for running code in parallel. But what happens when an exception is thrown inside the loop?

    By default, the following things happen:

    1. If we "await" ForEachAsync, then we get a single exception (even if exceptions are thrown in multiple iterations of the loop).
    2. The loop short-circuits -- meaning not all items are processed.

    Depending on what we are doing in the parallel loop, these items may not be a concern. But there are situations where I would like to get all of the exceptions back; and there are times when I would like to capture the exceptions and continue processing.

    Over the next several articles, we will look at these issues and how to deal with them.

    Code samples for all articles are available here: https://github.com/jeremybytes/foreachasync-exception.

    This article shows the basics of using Parallel.ForEachAsync and what happens when exceptions are thrown.

    For slides and code samples on Parallel.ForEachAsync (and other parallel approaches), you can take a look at the materials from my full-day workshop on asynchronous programming: https://github.com/jeremybytes/async-workshop-2022. (These materials use .NET 6.0. Updates for .NET 8.0 are coming in a few months.) For announcements on public workshops, check here: https://jeremybytes.blogspot.com/p/workshops.html.

    Parallel.ForEachAsync Basics

    The code for today's article is in the "original-code" folder of the GitHub repository. The early code is a work in progress; the repository only contains the finished code from the end of the article.

    We'll start by looking at the basics of Parallel.ForEachAsync. Here's a bit of code that sets up a non-parallel loop:


        static async Task Main(string[] args)
        {
            Console.Clear();
    
            foreach (int i in Enumerable.Range(1, 100))
            {
                Console.WriteLine($"Processing item: {i}");
                await Task.Delay(10); // simulate async task
            }
    
            Console.WriteLine("Done (Original Code)");
        }

    This code uses a regular foreach loop to iterate from 1 to 100. Inside the loop, we output to the console and simulate some async work with "Task.Delay(10)". This will delay processing for 10 milliseconds. Since this code is running sequentially, it will take about 1 second for the entire loop to complete.

    Here is what the output looks like:

    An animation of console output showing "Processing Item: 1" "Processing Item: 2" all the way to "Processing Item: 100". It takes about 1 second to complete the list. At the end "Done (Original Code)" is output.

    Using Parallel.ForEachAsync

    The next step is to change this to a parallel loop:


        static async Task Main(string[] args)
        {
            Console.Clear();
    
            await Parallel.ForEachAsync(
                Enumerable.Range(1, 100),
                async (i, _) =>
                {
                    Console.WriteLine($"Processing item: {i}");
                    await Task.Delay(10); // simulate async task
                });
    
            Console.WriteLine("Done (Original Code)");
        }

    Here are a couple of notes on how this code works:

    First, notice that we "await" the Parallel.ForEachAsync method. The loop runs asynchronously, so if we do not "await" here, then the Main method would keep going. Because of the "await", the last line (writing "Done" to the console) will not run until after all iterations of the loop are complete.

    Next, let's look at the parameters for "ForEachAsync".

    The first parameter (Enumerable.Range(1, 100)) is the IEnumerable to iterate through. This is the same as the "in" part of the non-parallel foreach loop.

    The second parameter is a delegate that has the work we want to run in parallel.

    Delegate Parameter
    This delegate has 2 parameters (which we have as (i, _) here). The "i" parameter is the item in the current iteration of the loop. This is equivalent to the "i" in the foreach loop. We can use "i" inside the delegate body just like we can use "i" inside the body of the foreach loop. 

    The second parameter of the delegate is a CancellationToken. Since we are not dealing with cancellation here, we use a discard "_" to represent this parameter.

    The body of the delegate has the actual work. This is the same as the contents of the foreach loop above. We output a line to the console and then simulate some work with await Task.Delay(10).

    Because we have "await" in the body of the delegate, the delegate itself is also marked with the "async" modifier (before the parameters).

    Output
    Because our code is now running in parallel, it completes much faster. Here is what the output looks like (it is too fast to see well):


    The speed will depend on how many virtual cores are available to do the processing. Parallel.ForEachAsync normally figures out how many resources to use on its own. We'll add some hints to it later on so we can get more consistent results.

    One thing to note about the output is that "100" prints out before "98". This is one of the joys of parallel programming -- order is non-deterministic.

    Now let's move on to see what happens when one or more of these items throws an exception.

    Throwing an Exception

    Here's a method that sometimes throws an exception:


        private static void MightThrowException(int item)
        {
            if (item % 3 == 0)
            {
                Interlocked.Increment(ref TotalExceptions);
                throw new Exception($"Bad thing happened inside loop ({item})");
            }
        }

    This will throw an exception for every 3rd item. (We could hook this up to a random number generator, but this gives us some predictability while we look at results.)

    Interlocked.Increment
    There may be a line of code that does not look familiar here:

        Interlocked.Increment(ref TotalExceptions);

    In this case "TotalExceptions" is a static integer field at the top of our class. This lets us keep track of how many exceptions are thrown.

    "Interlocked.Increment" is a thread-safe way to increment a shared integer. Using the "++" operator is not thread-safe, and may result in incorrect values.

    Exceptions in the ForEachAsync Loop
    Now we'll update the code to call "MightThrowException" inside our loop. Since we do not want an unhandled exception here, we will wrap the whole thing in a try/catch block:


        static async Task Main(string[] args)
        {
            Console.Clear();
            try
            {
                await Parallel.ForEachAsync(
                    Enumerable.Range(1, 100),
                    async (i, _) =>
                    {
                        Console.WriteLine($"Processing item: {i}");
                        await Task.Delay(10); // simulate async task
                        MightThrowException(i);
                        Interlocked.Increment(ref TotalProcessed);
                    });
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Exception: {ex.Message}");
            }
    
            Console.WriteLine($"\nTotal Processed: {TotalProcessed}");
            Console.WriteLine($"Total Exceptions: {TotalExceptions}");
            Console.WriteLine("Done (Original Code)");
        }

    We've changed quite a few things.

    First, we have wrapped the entire "ForEachAsync" call in a try/catch block. This is to make sure we do not have an unhandled exception.

    Next, we have added the "MightThrowException" call inside of our loop. This will throw an exception for every 3rd item.

    Next, we added "Interlocked.Increment(ref TotalProcessed);". This is after the point an exception might be thrown. So if an exception is not thrown, we increment a "TotalProcessed" field (similar to the "TotalExceptions" field). This will give us a count of the items that were processed successfully.

    In the "catch" block, we output the exception message.

    Finally, we have console output for the total number of items processed successfully and the total number of exceptions.

    Output
    Here is the output for this code (note: this output is not animated):


    Processing item: 15
    Processing item: 16
    Processing item: 17
    Processing item: 26
    Processing item: 18
    Processing item: 19
    Processing item: 20
    Processing item: 21
    Processing item: 22
    Processing item: 23
    Processing item: 24
    Processing item: 25
    Processing item: 27
    Exception: Bad thing happened inside loop (6)
    
    Total Processed: 18
    Total Exceptions: 9
    Done (Original Code)

    This is just the last part of the output, but it tells us enough about what is happening.

    The Issues

    The output shows us the 2 issues from the start of this article. These may or may not concern us, depending on what we are doing. But we do need to be aware of them.

    Hidden Exceptions

    1. If we "await" ForEachAsync, then we get a single exception (even if exceptions are thrown in multiple iterations of the loop).

    When an exception is thrown in a Task, it gets wrapped in an AggregateException. This is because Tasks can be complex (with concurrent and child tasks). An AggregateException wraps up all of the exceptions that happen into a single exception.

    But when we "await" a Task, the AggregateException gets unwrapped for us. This can be good because we now have a "real" exception and do not have to deal with an AggregateException. But it can be bad because it hides the number of exceptions that actually occur.

    Since we "await" the ForEachAsync method, we only see one exception: "Exception: Bad thing happened inside loop (6)". So this is only showing the exception for item #6.

    But we can see in the "Total Exceptions" that 9 exceptions were thrown. The other exceptions are hidden from us here.
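
    Here is a minimal sketch (separate from the sample code) of the same behavior using Task.WhenAll: the faulted task holds every exception, but "await" surfaces only one of them:

        Task whenAll = Task.WhenAll(
            Task.FromException(new Exception("failure 1")),
            Task.FromException(new Exception("failure 2")));

        try
        {
            await whenAll; // throws a single unwrapped Exception ("failure 1")
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);                               // failure 1
            Console.WriteLine(whenAll.Exception!.InnerExceptions.Count); // 2
        }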

    Short-Circuited Loop

    2. The loop short-circuits -- meaning not all items are processed.

    The other thing to notice about the output is that the loop stops processing part way through. Only 27 of the 100 iterations of the loop ran. This is the nature of the ForEachAsync method. If a task throws an exception, the loop stops processing.

    Depending on our scenario, we may want the loop to continue even if one of the iterations throws an exception.

    We deal with both of these items in the next 2 articles.

    A Little Consistency

    Before leaving this code, let's add a little bit of consistency.

    One of the problems with parallel code is that the decision of how many items to run at the same time is left up to the parallel infrastructure. If we have a lot of resources available, then there will be more items run in parallel.

    But this also means that output will vary depending on what machine we are running on (and how many resources that machine has at the time). In this code, my desktop and laptop produce different results. The desktop generally stops after 27 items, the laptop will stop after 17 (sometimes fewer, depending on what else is going on).

    Parallel.ForEachAsync has an optional parameter where we can set the maximum degree of parallelism. This limits the number of items run concurrently, and if we set it to a value lower than our machine's resources, it also adds some consistency to the output.

    Here is our loop with the additional parameter. (This is the final state of our "original-code" project and can be found in the original-code/Program.cs file.)


        await Parallel.ForEachAsync(
            Enumerable.Range(1, 100),
            new ParallelOptions() { MaxDegreeOfParallelism = 10 },
            async (i, _) =>
            {
                Console.WriteLine($"Processing item: {i}");
                await Task.Delay(10); // simulate async task
                MightThrowException(i);
                Interlocked.Increment(ref TotalProcessed);
            });

    This second parameter is a ParallelOptions object that sets the MaxDegreeOfParallelism property to 10. This means that a maximum of 10 items run concurrently. (It may be fewer items if there are not enough resources available.)

    This gives me consistency between my mid-range machines.


    Processing item: 6
    Processing item: 9
    Processing item: 5
    Processing item: 1
    Processing item: 3
    Processing item: 8
    Processing item: 11
    Processing item: 16
    Processing item: 12
    Processing item: 13
    Processing item: 14
    Processing item: 15
    Processing item: 17
    Exception: Bad thing happened inside loop (9)
    
    Total Processed: 12
    Total Exceptions: 5
    Done (Original Code)

    Now I get a fairly consistent 17 items processed. I want the consistency here so that we can more readily compare results when we look at different ways of handling issues.

    Wrap Up

    So to recap, here are 2 things that we need to be aware of when we use Parallel.ForEachAsync:

    1. If we "await" ForEachAsync, then we get a single exception (even if exceptions are thrown in multiple iterations of the loop).
    2. The loop short-circuits -- meaning not all items are processed.

    This may be fine for the work that we are doing, but we may want to go beyond that. We will tackle the first item in the next article. We'll look at 2 different ways to get all of the exceptions from ForEachAsync, and why we might choose one rather than the other.

    In the 3rd article, we will tackle the issue of short-circuiting. So be sure to check back.

    Happy Coding!

    Friday, February 23, 2024

    Have you lived more than 5 days in the last 4 months?

    Last night (more correctly, very early this morning), I learned something new about myself:

    I haven't been living every day.

    I read 1970s science fiction. I have over 600 physical books in my collection, and I've read about half so far. My collection started about 7 years ago (I'm still not sure why I ended up with this category). Here's a recent picture of my collection:

    2 bookshelves full of mostly paperback books.

    Some of it is good; some is bad; some is amazing. They are rarely life-changing, but I was floored by a very timely passage from an otherwise mediocre book: The Earth Tripper by Leo P. Kelley.

    The Earth Tripper

    Book Cover: The Earth Tripper by Leo P. Kelley featuring a helicopter above a dome.
    "When I was a child, I never lived in the present. I was always waiting for--oh, for Saturday to come, because on Saturday I had been promised a new bicycle. So Tuesday and all the other days didn't matter at all while I waited for Saturday to come. It came. So did the new bicycle and I was happy. But then it was Sunday all of a sudden and the bicycle wasn't quite so new anymore. I heard that the circus was coming to our town. I watched the men put up the posters all over town, and I got my sister--she was three years older than I was--to read the date to me--you know, the date the circus would arrive. I went home and made a red circle around the day on our kitchen calendar. I waited and waited, and at last the circus came. For a time, the lions and tigers, the bareback riders, and all the handsome aerialists filled my world. But next day, they were gone.

    "One day I took the calendar down from the kitchen wall and I counted all the days that I had circled in red. There were five of them from January to May. Then I counted all the ordinary days and I realized that I had been cheating myself. My mother made me stay in my room alone that night while the family ate dinner without me. She couldn't understand why I had torn the calendar to shreds."

    "Didn't you explain to her what you had discovered?"

    "I didn't know how to explain. I was ashamed. How could I tell anyone--how could I dare tell anyone that I had really lived only five days in four months?"

    [pp 139-140, The Earth Tripper, Leo P. Kelley, Coronet Books, 1974, Emphasis mine]

    This was particularly impactful to me because I wasn't expecting it. It was an otherwise mediocre book. I only had about 20 pages left. And I was reading it at 1:30 in the morning (because I couldn't sleep). I was just trying to finish up this book.

    But when I got to this passage, I realized that I have been living for the red circles.

    And how could I dare tell anyone that I had really lived only five days in four months?

    Red Circles

    For me, the red circle days are the days when I get to help other developers, when I get to make a difficult topic understandable, when I get to show someone something they didn't know. Historically, I have done this in a number of ways: blog articles, videos, online courses, speaking engagements, workshops, and one-on-one mentoring.

    But I have been limiting my red circle days even further by only including the days I help someone actively rather than passively.

    Active vs. Passive

    Active help is when I get feedback while I am doing it. An example of this is giving a conference talk. While giving the talk, I can watch the lightbulbs go on -- I can see when someone is really understanding the topic and is excited to make use of what they have learned. It is also the conversations that happen after the talk: helping folks with their specific questions, clarifying a point, or going deeper into a specific area that the talk does not allow for.

    These are the times that I feel most useful. And these are the times that I know that my particular skillset allows me to make complex topics more approachable. And these are the times when I know that I am exactly where I need to be.

    I know that these active events do not reach everyone in the room. About 10% get enough out of it to leave a glowing rating or a comment. About 5% find it unhelpful enough that they leave a comment. And I assume everyone else gets something out of it (probably not life-changing, but maybe useful some day) -- at minimum, they didn't hate it. But I am able to help some people (and see it), and that is enough for me to keep going.

    Passive help is when I do not get to see the impact of my work. For example, on this blog. I have written over 500 articles (over 15 years). Some of these are more useful than others. I don't get a huge amount of traffic, but I do get about 20,000 - 30,000 views per month. If I make an assumption that 1% of those views are actually useful to someone, that means that I help 6 - 10 people a day.

    But I don't usually think about these 6 - 10 people because I don't get to actually see them.

    So my red circles show up on the active help days but not the passive help days.

    Too Few Circles

    I'm not sure when I started relying on red circle days. For most of my speaking career (career isn't the right word, but I'll use it here), I have averaged about 1 event every 3 weeks. And during my peak year, it was a lot more frequent than that.

    But things change. When I moved out of Southern California, I lost access to the local user groups, community events, and local-ish events (within a half-day drive). With COVID, most events did not happen (some went online). And in the past couple of years, several of my favorite events have gone away.

    This year, I am confirmed for 2 events so far. I have another 3 or 4 potentials. The hardest part is that there were several events that I was relying on that did not select me. I have been having quite a bit of trouble with those (for a variety of reasons).

    The circles are very far apart. Right now, I am in a spot with 6 months between circles.

    Finding Life on the In-Between Days

    Recently, I have really been feeling a big gap in my life -- like I am merely passing time until the next big thing. 

    The passage from the book made me realize that I have been living for the red circle days. And I have been getting away with it for a long time because the circles were fairly close together. Somewhere along the way, I forgot about all of the days in between.

    The point is not to figure out how to get more red circles. The point is to figure out how to find life on all of the other days.

    Honestly, I'm not quite sure how I am going to do this yet. But that's okay.

    Rough Days

    The last couple months have been kind of rough for me -- and not because of the red circle days. I have discovered some things about myself that have impacted how I look at the world, how I see myself, and how I interact with other people. These discoveries are difficult to work through but are ultimately good.

    I have been able to put names and ideas on things that I have recognized in myself and my life. It has been hard because a lot of things I thought I knew about myself had to be reframed (and a bit of that reframing is still happening).

    And now I am aware of the impact of red circle days. I knew that there was something wrong, but I didn't know what. Now that I have identified an issue, I can go about changing things.

    Are you living just for red circle days? If so, I challenge you to find the life on the days in between.

    I am going to find that life, and I am going to start living it.

    Happy Living!

    Thursday, February 22, 2024

    Minimal APIs vs Controller APIs: SerializerOptions.WriteIndented = true

    Another thing added in ASP.NET Core / .NET 7.0 (yeah, I know it's been a while since it was released), is the "ConfigureHttpJsonOptions" extension method that lets us add JSON options to the services collection / dependency injection container. The setting that caught my eye is "WriteIndented = true". This gives a nice pretty output for JSON coming back from our APIs.

    This is the difference between:

    {"id":3,"givenName":"Leela","familyName":"Turanga","startDate":"1999-03-28T00:00:00-08:00","rating":8,"formatString":"{1} {0}"}

    and
    {
      "id": 3,
      "givenName": "Leela",
      "familyName": "Turanga",
      "startDate": "1999-03-28T00:00:00-08:00",
      "rating": 8,
      "formatString": "{1} {0}"
    }
    You may not want this because of the extra whitespace characters that have to come down the wire. But for the things that I work with, I want the pretty printing!

    The good news is that in .NET 7.0, the new "ConfigureHttpJsonOptions" method lets us set this up (among quite a few other settings).

    To use it, we just add the options to the Services in the Program.cs file of our project.


        // Set JSON indentation
        builder.Services.ConfigureHttpJsonOptions(
            options => options.SerializerOptions.WriteIndented = true);
    

    You can check the documentation for JsonSerializerOptions Class to see what other options are available.
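
    As an example, here is a sketch of setting a few of those options together. The extra settings are only illustrations, not something the sample project uses; the types come from System.Text.Json (and System.Text.Json.Serialization for JsonIgnoreCondition):

        // Sketch: additional serializer options can be set in the same callback.
        builder.Services.ConfigureHttpJsonOptions(options =>
        {
            options.SerializerOptions.WriteIndented = true;
            options.SerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
            options.SerializerOptions.DefaultIgnoreCondition =
                JsonIgnoreCondition.WhenWritingNull;
        });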

    But there's a catch:
    ConfigureHttpJsonOptions does work for Minimal APIs.
    ConfigureHttpJsonOptions does not work for Controller APIs. 
    Let's take a look at that. For code samples, you can check out this repository: https://github.com/jeremybytes/controller-vs-minimal-apis.

    Sample Application

    The sample code contains a "MinimalApi" project and a "ControllerApi" project. These were both started as fresh projects using the "dotnet new webapi" template (and the appropriate flags for minimal/controller). Both projects get their data from a 3rd project ("People.Library") that provides some hard-coded data.

    Here is the minimal API that provides the data above:


        app.MapGet("/people/{id}",
            async (int id, IPeopleProvider provider) => await provider.GetPerson(id))
            .WithName("GetPerson")
            .WithOpenApi();

    And here's the controller API:


        [HttpGet("{id}", Name = "GetPerson")]
        public async Task<Person?> GetPerson(
            [FromServices] IPeopleProvider provider, int id)
        {
            // This may return null
            return await provider.GetPerson(id);
        }

    Both of these call the "GetPerson" method on the provider and pass in the appropriate ID.

    Testing the Output
    To test the output, I created a console application (the "ServiceTester" project). I did this to make sure I was not getting any automatic JSON formatting from a browser or other tool.

    I created an extension method on the Uri type to make calling easier. You can find the complete code in the RawResponseString.cs file.


        public static async Task<string> GetResponse(
            this Uri uri, string endpoint)

    The insides of this method make an HttpClient call and return the response.
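
    A sketch of that method might look something like the following (the real implementation is in the repository's RawResponseString.cs file and may differ):

        // Sketch only -- declared inside a static class so it works as an extension method.
        public static async Task<string> GetResponse(
            this Uri uri, string endpoint)
        {
            // A new HttpClient is created here for simplicity; the real code may reuse one.
            using HttpClient client = new() { BaseAddress = uri };
            HttpResponseMessage response = await client.GetAsync(endpoint);
            return await response.Content.ReadAsStringAsync();
        }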

    The calling code is in the Program.cs file of the ServiceTester project.


            Uri controllerUri = new("http://localhost:5062");
            Uri minimalUri = new("http://localhost:5194");
    
            // Minimal API Call
            Console.WriteLine("Minimal /people/3");
            var response = await minimalUri.GetResponse("/people/3");
            Console.WriteLine(response);
    
            Console.WriteLine("----------");
    
            // Controller API Call
            Console.WriteLine("Controller /people/3");
            response = await controllerUri.GetResponse("/people/3");
            Console.WriteLine(response);
    
            Console.WriteLine("----------");

    This creates 2 URIs (one for each API), calls "GetResponse" for each and then outputs the string to the console.

    This application was also meant to show some other differences between Minimal APIs and Controller APIs. These will show up in future articles.

    Without "WriteIndented = true"

    Here is the output of the ServiceTester application without making any changes to the settings of the API projects.

    Note that both the MinimalApi and ControllerApi services need to be running in order for the ServiceTester application to work. (I start the services in separate terminal tabs just so I can keep them running while I do my various tests.)


    Minimal /people/3
    {"id":3,"givenName":"Leela","familyName":"Turanga","startDate":"1999-03-28T00:00:00-08:00","rating":8,"formatString":"{1} {0}"}
    ----------
    Controller /people/3
    {"id":3,"givenName":"Leela","familyName":"Turanga","startDate":"1999-03-28T00:00:00-08:00","rating":8,"formatString":"{1} {0}"}
    ----------

    So let's change some settings.

    "WriteIndented = true"

    So let's add the options setting to both projects (MinimalApi/Program.cs and ControllerApi/Program.cs):


        // Set JSON indentation
        builder.Services.ConfigureHttpJsonOptions(
            options => options.SerializerOptions.WriteIndented = true);
    

    Then we just need to restart both services and rerun the ServiceTester application:


        Minimal /people/3
        {
          "id": 3,
          "givenName": "Leela",
          "familyName": "Turanga",
          "startDate": "1999-03-28T00:00:00-08:00",
          "rating": 8,
          "formatString": "{1} {0}"
        }
        ----------
        Controller /people/3
        {"id":3,"givenName":"Leela","familyName":"Turanga","startDate":
        "1999-03-28T00:00:00-08:00","rating":8,"formatString":"{1} {0}"}
        ----------
    This shows us that the pretty formatting is applied to the Minimal API output, but not to the Controller API output.

    So What's Going On?

    I found out about this new setting the same way as the one in the previous article -- by hearing about it in a conference session. I immediately gave it a try and saw that it did not work on the API I was experimenting with. After a few more tries, I figured out that the setting worked with the Minimal APIs that I had, but not the Controller APIs. (And I did try to get it to work with the Controller APIs in a whole bunch of different ways.)

    Eventually, I came across this snippet in the documentation for the "ConfigureHttpJsonOptions" method (bold is mine):
    Configures options used for reading and writing JSON when using Microsoft.AspNetCore.Http.HttpRequestJsonExtensions.ReadFromJsonAsync and Microsoft.AspNetCore.Http.HttpResponseJsonExtensions.WriteAsJsonAsync. JsonOptions uses default values from JsonSerializerDefaults.Web.
    This tells us that these options are only used in certain circumstances. In this case, specifically when WriteAsJsonAsync is called.

    I have not been able to find what uses "WriteAsJsonAsync" and what does not. Based on my observations, I assume that Minimal APIs do use WriteAsJsonAsync and Controller APIs do not use WriteAsJsonAsync.

    So I add another item to my list of differences between Minimal APIs and Controller APIs. I guess it's time to publish this list somewhere.

    Wrap Up

    So, back to the beginning:
    ConfigureHttpJsonOptions does work for Minimal APIs.
    ConfigureHttpJsonOptions does not work for Controller APIs. 
    It would be really nice if these capabilities matched or if there was a note somewhere that said "This is for minimal APIs".

    Hopefully this will save you a bit of frustration in your own code. If you want to use "WriteIndented = true", then you will need to use Minimal APIs (not Controller APIs).

    Happy Coding!

    Wednesday, February 21, 2024

    Method Injection in ASP.NET Core: API Controllers vs. MVC Controllers

    Method Injection in ASP.NET Core got a little bit easier in .NET 7. Before .NET 7, we had to use the [FromServices] attribute on a method parameter in order for the parameter to be injected from the services collection / dependency injection container. Starting with .NET 7, the [FromServices] attribute became optional, but only in some places.

    Short Version:
    [FromServices] is no longer required for API Controllers. 
    [FromServices] is still required for MVC Controllers.
    The reason I'm writing about this is that I had a bit of confusion when I first heard about it, and I suspect others may have as well. If you'd like more details, and some samples, keep reading.

    Confusion

    I first heard about [FromServices] being optional from a conference talk on ASP.NET Core. The speaker said that [FromServices] was no longer required for method injection and removed it from the sample code to show it still worked.

    Now ASP.NET Core is not one of my focus areas, but I do have some sample apps that use method injection. When I tried this out for myself on an MVC controller, it did not work. After a bit of digging (and conversation with the speaker), we found that the change only applied to API controllers.

    Let's look at some samples. This code shows a list of data from a single source -- either through an API or displayed in an MVC view. (Sample code is available here: https://github.com/jeremybytes/method-injection-aspnetcore)

    Data and Service Configuration

    Both projects use the same library code for the data. This code is in the "People.Library" folder/project.

    IPeopleProvider:
        public interface IPeopleProvider
        {
            Task<List<Person>> GetPeople();
            Task<Person?> GetPerson(int id);
        }

    The method we care about is the "GetPeople" method that returns a list of "Person" objects. There is also a "HardCodedPeopleProvider" class in the same project that implements the interface. This returns a hard-coded list of objects.
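
    As a rough sketch, HardCodedPeopleProvider might look something like the following. (The Person property names here are assumptions based on the JSON output shown later; the real class in the repository has more entries and may be shaped differently.)

        public class HardCodedPeopleProvider : IPeopleProvider
        {
            // Hypothetical data -- the real provider has a longer hard-coded list.
            private static readonly List<Person> people = new()
            {
                new() { Id = 1, GivenName = "John", FamilyName = "Koenig" },
                new() { Id = 2, GivenName = "Dylan", FamilyName = "Hunt" },
            };

            public Task<List<Person>> GetPeople() =>
                Task.FromResult(people);

            public Task<Person?> GetPerson(int id) =>
                Task.FromResult(people.FirstOrDefault(p => p.Id == id));
        }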

    Both the API project and the MVC project configure the "IPeopleProvider" the same way. (API code: ControllerAPI/Program.cs -- MVC code: PeopleViewer/Program.cs)


        // Add services to the container.
        builder.Services.AddScoped<IPeopleProvider, HardCodedPeopleProvider>();

    This registers the HardCodedPeopleProvider with the dependency injection container so that we can inject it into our controllers.

    API Controller

    Using [FromServices]
    Prior to .NET 7, we could inject something from our dependency injection container by using the [FromServices] attribute. Here's what that code looks like in the API controller (from the "PeopleController.cs" file):

        [HttpGet(Name = "GetPeople")]
        public async Task<IEnumerable<Person>> Get(
            [FromServices] IPeopleProvider provider)
        {
            return await provider.GetPeople();
        }

    When we call this method, the "HardCodedPeopleProvider" is automatically used for the "provider" parameter.

    The result is the following:


    [{"id":1,"givenName":"John","familyName":"Koenig","startDate":"1975-10-17T00:00:00-07:00","rating":6,"formatString":""},{"id":2,"givenName":"Dylan","familyName":"Hunt","startDate":"2000-10-02T00:00:00-07:00","rating":8,"formatString":""},...,{"id":14,"givenName":"Devon","familyName":"","startDate":"1973-09-23T00:00:00-07:00","rating":4,"formatString":"{0}"}]

    Note: I'm sorry about the JSON formatting. "SerializerOptions.WriteIndented" is a topic for another day.
    Update: Here's an article on that: Minimal APIs vs Controller APIs: SerializerOptions.WriteIndented = true.

    Removing [FromServices]
    As noted, the "FromServices" is now optional:


        [HttpGet(Name = "GetPeople")]
        public async Task<IEnumerable<Person>> Get(
            IPeopleProvider provider)
        {
            return await provider.GetPeople();
        }

    The output is the same as we saw above:


    [{"id":1,"givenName":"John","familyName":"Koenig","startDate":"1975-10-17T00:00:00-07:00","rating":6,"formatString":""},{"id":2,"givenName":"Dylan","familyName":"Hunt","startDate":"2000-10-02T00:00:00-07:00","rating":8,"formatString":""},...,{"id":14,"givenName":"Devon","familyName":"","startDate":"1973-09-23T00:00:00-07:00","rating":4,"formatString":"{0}"}]
    [FromServices] is no longer required for API Controllers.
    Now let's take a look at an MVC controller.

    MVC Controller

    Using [FromServices]
    The [FromServices] attribute lets us use method injection in MVC controllers as well. Here is an action method from the "PeopleViewer" project (from the "PeopleController.cs" file):


        public async Task<IActionResult> GetPeople(
            [FromServices] IPeopleProvider provider)
        {
            ViewData["Title"] = "Method Injection";
            ViewData["ReaderType"] = provider.GetType().ToString();
    
            var people = await provider.GetPeople();
            return View("Index", people);
        }

    As with the API controller, the "HardCodedPeopleProvider" is automatically passed for the method parameter. Here is the output of the view:

    Web browser showing output of People objects in a grid. Example item is "John Koenig 1975 6/10 Stars"


    This shows the same data as the API in a colorful grid format (as a side note, the background color of each item corresponds with the decade of the date.)

    Removing [FromServices]
    Because we can remove [FromServices] from the API controller, I would guess that we can remove it from the MVC controller as well.


        public async Task<IActionResult> GetPeople(
            IPeopleProvider provider)
        {
            ViewData["Title"] = "Method Injection";
            ViewData["ReaderType"] = provider.GetType().ToString();
    
            var people = await provider.GetPeople();
            return View("Index", people);
        }

    However, when we run this application, we get a runtime exception:


    InvalidOperationException: Could not create an instance of type 'People.Library.IPeopleProvider'. Model bound complex types must not be abstract or value types and must have a parameterless constructor. Record types must have a single primary constructor. Alternatively, give the 'provider' parameter a non-null default value.

    ASP.NET Core MVC does not automatically look in the dependency injection container for parameters that it cannot otherwise bind. So, without the [FromServices] attribute, this method fails.
    [FromServices] is still required for MVC Controllers.

    Messaging and Documentation Frustrations

    If you've read this far, then you're looking for a bit more than just the "Here's how things are", so I'll give a bit of my opinions and frustrations.

    I've had a problem with Microsoft's messaging for a while now. It seems like they are very good at saying "Look at this cool new thing" without mentioning how it differs from the old thing or what is not impacted by the new thing. (I had a similar experience with minimal APIs and behavior that is different from controller APIs but not really called out anywhere in the documentation: Returning HTTP 204 (No Content) from .NET Minimal API.)

    I see the same thing happening with [FromServices].

    Here is the documentation (from "What's New in ASP.NET Core 7.0")
    Parameter binding with DI in API controllers
    Parameter binding for API controller actions binds parameters through dependency injection when the type is configured as a service. This means it's no longer required to explicitly apply the [FromServices] attribute to a parameter.
    Unfortunately, the "hype" shortens this message:
    [FromServices] is no longer required.
    Please don't write to me and say "It's obvious from the documentation this only applies to API controllers. There is no reason to believe it would apply to anything else." This is easy to say when you already know this. But what about if you don't know?

    Let's look at it from an average developer's perspective. They have used API controllers (and may be using minimal APIs) and they have used MVC controllers. The methods and behaviors of these controllers are very similar: parameter mapping, routing, return types, and other bits. It is not a very far leap to assume that changes to how a controller works (whether API or MVC) would apply in both scenarios.

    As noted above, the speaker I originally heard this from did not realize the limitations at the time, and this speaker is an expert in ASP.NET Core (including writing books and teaching multi-day workshops). They had missed the distinction as well.

    And unfortunately, the documentation is lacking (at least as of the writing of this article in Feb 2024): 

    • The "FromServicesAttribute" documentation does not have any usage notes that would indicate that it is required in some places and optional in others. This is the type of note I expect to see (as an example  the UseShellExecute default value changed between .NET Framework and .NET Core, and it is noted in the language doc.)
    • The Overview of ASP.NET Core MVC documentation does have a "Dependency Injection" topic that mentions method injection. However, the corresponding Create web APIs with ASP.NET Core documentation does not have a "Dependency Injection" topic.

    Stop Complaining and Fix It, Jeremy

    I don't like to rant without having a solution. But this has been building up for a while now. The obvious answer is "Documentation is open source. Anyone can contribute. Update the articles, Jeremy."

    If you know me, then you know that I have a passion for helping people learn about interfaces and how to use them appropriately. If I have "expertise" in any part of the C# language, it is interfaces. When it came to a set of major changes (specifically C# 8), I looked into them and wrote about them (A Closer Look at C# 8 Interfaces), and I have also spoken quite a bit about those changes: Caching Up with C# Interfaces, What You Know may be Wrong.

    I have noted that the documentation is lacking in a lot of the areas regarding the updates that were made to interfaces. So why haven't I contributed?
    I do not have enough information to write the documentation.
    I can write about my observations. I can point to things that look like bugs to me. But I do not know what the feature is supposed to do. It may be working as intended (as I have discovered about "bugs" I have come across in the past). I am not part of the language team; I am not part of the documentation team; I do not have access to the resources needed to correctly document features.

    Wrap Up

    I will apologize for ranting without a solution. I was a corporate developer for many years. I understand the pressures of having to get code completed. I see how difficult it is to keep up with changing environments while releasing applications. I know what it is like to support multiple applications written over a number of years that may or may not take similar approaches.

    I have a love and concern for the developers working in those environments. There are times when I have difficulty keeping up with things (and keeping up with things is a big part of my job). If I have difficulty with this, what chance does the average developer have? Something has got to give.

    Anyway, I have some more posts coming up about some other things that I've come across. Hopefully those will be helpful. (No ranting, I promise.)

    Happy Coding!