
The Task Parallel Library Sampler – Part 14: Wrap-up / Future posts

This has been a fun journey and hopefully you’ve learned a bit about MVVM and the TPL. I do have plans for future posts in this series that will center around the TPL.Dataflow namespace. It’s an area of the TPL I don’t see written about a lot, and I think a few posts on it would benefit you.

My next series of posts, however, is on the SOLID principles of good object-oriented programming and design. Often when it comes to concepts like design patterns and MVVM we can’t see the forest for the trees and we get lost in the concepts. The awesome thing about SOLID is that it is so straightforward that all of us should have a thorough understanding of it.

After SOLID, I have a series planned on design patterns. I won’t cover all the design patterns but will hit on quite a few of them. I also have a post planned on the patterns you don’t really need to think about if you apply fundamental .NET development concepts and practices, because those patterns are already incorporated into the practices. This relates to patterns like lazy initialization, object/thread pool, factory method and dependency injection, observer, locking and scheduling, all of which I have hit on in some form or another in assorted posts here without necessarily calling them out as such.

I always welcome your feedback and allow anonymous comments so feel free to comment on anything. Spam and off-topic/inappropriate comments are and always will be removed.

Full series of posts:
Part One: Starting with MVVM
Part Two: The MVVM solution structure and basic framework
Part Three: Base Classes
Part 4: Sampler View, View Model and Model
Part 5: Running and working with the TPL samples
Part 6: Parallel.For Sample
Part 7: Using Parallel.For effectively
Part 8: Adding a New Sample, Matrices Multiplication
Part 9: Basic Exception handling with the AggregateException
Part 10: Loop control of the Parallel.For and .ForEach
Part 11: Cancelling Threads with the CancellationTokenSource – The MVVM
Part 12: Cancelling Threads with the CancellationTokenSource – The Code
Part 13: Async/Await

Thanks,
Brian

The Task Parallel Library Sampler – Part 13: Async/Await

Previous Post in this series:
Part 12: Cancelling Threads with the CancellationTokenSource – The Code

This sample derives from a Microsoft example and an updated solution is available here.

AsyncAwaitSample model:

public class AsyncAwaitSample : Sample
{
	public override string SampleName
	{
		get { return "Async/Await Sample"; }
	}

	public override bool ImageRequired
	{
		get { return false; }
	}

	public async override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
	{
		Stopwatch s = new Stopwatch();
		s.Start();

		UpdateLog("Step 1 (Run 1): Starting an await call to an asyncronous method.");
		int result = await AccessTheWebAsync("Run 1", UpdateLog);
		UpdateLog("Step 6 (Run 1): Done await call to an asyncronous method.");

		//this works and will even run asynchronously but won't wait on any result.
		//more then likely we will be long gone from this method before the method below is done
		UpdateLog(Environment.NewLine + "Step 1 (Run 2): Starting an async call without await and no result.");
		AccessTheWebAsync("Run 2", UpdateLog);
		UpdateLog("Step 6 (Run 2): Done with the async call without await and no result");

		s.Stop();
		RunTime = s.Elapsed;
	}

	// Three things to note in the signature: 
	//  - The method has an async modifier.  
	//  - The return type is Task or Task<T>. (See "Return Types" section.)
	//    Here, it is Task<int> because the return statement returns an integer. 
	//  - The method name ends in "Async."
	async Task<int> AccessTheWebAsync(string runDesignation, Action<string> UpdateLog)
	{
		// You need to add a reference to System.Net.Http to declare client.
		HttpClient client = new HttpClient();

		// GetStringAsync returns a Task<string>. That means that when you await the 
		// task you'll get a string (urlContents).
		// This also allows you to set a lot of properties on the task rather than just running
		// it if you need to.
		Task<string> getStringTask = client.GetStringAsync("http://msdn.microsoft.com");
		UpdateLog("Step 2 (" + runDesignation+ "): Sleeping for ten seconds.");
		await Task.Delay(10000);
		UpdateLog("Step 3 (" + runDesignation+ "): Woke up.");
		UpdateLog("Step 4 (" + runDesignation+ "): Getting the web page.");
		// The await operator suspends AccessTheWebAsync. 
		//  - AccessTheWebAsync can't continue until getStringTask is complete. 
		//  - Meanwhile, control returns to the caller of AccessTheWebAsync. 
		//  - Control resumes here when getStringTask is complete.  
		//  - The await operator then retrieves the string result from getStringTask. 
		// This could also have been done as 
		// string urlContents = await client.GetStringAsync("http://msdn.microsoft.com");
		string urlContents = await getStringTask;
		UpdateLog("Step 5 (" + runDesignation+ "): Got the web page.");
		// The return statement specifies an integer result. 
		// Any methods that are awaiting AccessTheWebAsync retrieve the length value. 
		return urlContents.Length;
	}
}

There are two runs here: one using the keyword “await” with an asynchronous method, and another run calling an asynchronous method without using “await”. It is very important to understand the conventions when using async: the name of the method should end in “Async”. As an example, HttpClient.GetStringAsync() makes it obvious that it is an async method. As shown in the comments, when calling an async method with await, execution of the calling method does not continue past the await until the awaited method has produced its result. If an asynchronous method is called without using await, the work is started and execution simply continues on; more than likely we will be long gone from this method before the un-awaited call is done. This can be very dangerous if there are results you’re expecting from an asynchronous method.
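Stripped of the sampler plumbing, the difference between the two runs boils down to something like the following minimal sketch. This is my own illustration, not code from the solution, and it assumes a console app and a compiler recent enough to allow an async Main:

using System;
using System.Threading.Tasks;

class AwaitSketch
{
	static async Task<int> DoWorkAsync()
	{
		await Task.Delay(1000); // simulate asynchronous work
		return 42;
	}

	static async Task Main()
	{
		// Run 1: with await, execution of Main does not move past this line
		// until DoWorkAsync has produced its result.
		int result = await DoWorkAsync();
		Console.WriteLine("Awaited result: " + result);

		// Run 2: without await, DoWorkAsync starts but Main keeps going
		// immediately and never observes the result. The compiler warns
		// about the unawaited call (CS4014).
		DoWorkAsync();
		Console.WriteLine("Moved on without waiting.");
	}
}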

Run results:

Starting Async/Await Sample
Step 1 (Run 1): Starting an await call to an asynchronous method.
Step 2 (Run 1): Sleeping for ten seconds.
Completed Async/Await Sample
Async/Await Sample ran in 00:00:10.0173992

Step 3 (Run 1): Woke up.
Step 4 (Run 1): Getting the web page.
Step 5 (Run 1): Got the web page.
Step 6 (Run 1): Done with the await call to an asynchronous method.

Step 1 (Run 2): Starting an async call without await and no result.
Step 2 (Run 2): Sleeping for ten seconds.
Step 6 (Run 2): Done with the async call without await and no result
Step 3 (Run 2): Woke up.
Step 4 (Run 2): Getting the web page.
Step 5 (Run 2): Got the web page.

Looking at Run 1 you can see that AsyncAwaitSample.Run() is started, the first run is started, and then we get the message that the sample is done. But why? Because down in the Sampler model we’re calling the Run method, which is asynchronous, without await. As such, the calling code calls the method, the work starts, and then execution continues in the original calling code. It never waits, it just continues. So why define the method as “async”? Because the only way to use await inside a method is to mark that method async. If you look at Run 2, it is even more obvious what happens when you call an async method without using await. Run 1, which uses await, runs the steps sequentially (other than the messages from the Sampler model). Run 2 clearly does the steps out of order.

Another important point: the Run method is defined by a base class, so the Sampler model is just calling Run on all its samples. What is interesting is that we are able to make Run an async method, which we have to do in order to use await, yet we get no warning in Sampler that we may be calling an async method without using await. In AsyncAwaitSample, by contrast, we are given a warning that Run 2 calls an async method without await, but there it is more obvious that we are doing so.
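If the base class were under our control, a cleaner option would be to have Run return a Task so callers can await it and get a compiler warning when they forget to. A rough sketch of that variation follows; this is not how the sample solution is structured, just an idea of what it could look like:

// Hypothetical variation: the abstract Sample declares an awaitable Run.
public abstract Task RunAsync(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null);

// The override becomes async Task instead of async void...
public override async Task RunAsync(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
{
	int length = await AccessTheWebAsync("Run 1", UpdateLog);
	UpdateLog("Page length: " + length);
}

// ...and the Sampler can now await each sample as it runs them:
// await sample.RunAsync(bmp, UpdateResults);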

In the first post in this series I said there would be 15 posts but it looks like there will be only 14. The next post will be a wrap-up/final to this series.

Thanks,
Brian

The Task Parallel Library Sampler – Part 12: Cancelling Threads with the CancellationTokenSource – The Code

Previous Post in this series:
Part 11: Cancelling Threads with the CancellationTokenSource – The MVVM

So now that we’ve covered the MVVM, which you saw was pretty trivial to implement, let’s cover the actual sample.

CancellationSample.Run

public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
{
	Stopwatch s = new Stopwatch();
	s.Start();

	IsRunning = true;

	CancellationTokenSource = new CancellationTokenSource();
	CancellationTokenSource.Token.Register(() => { IsRunning = false; });

	var options = new ParallelOptions { CancellationToken = CancellationTokenSource.Token };
	try
	{
		Parallel.ForEach(WhileTrue(), options, i =>
		{
			while (!options.CancellationToken.IsCancellationRequested)
			{
				UpdateLog("Sleeping in Cancellation Sample at " + i);
				Thread.Sleep(ran.Next(1000));
			}
		});
	}
	catch (OperationCanceledException)
	{
		UpdateLog("Operation has been cancelled."); ;
	}

	s.Stop();
	RunTime = s.Elapsed;
}

public static IEnumerable<int> WhileTrue()
{
	for (int i = 0; ; i++)
		yield return i;
}

We start by instantiating the CancellationTokenSource that holds our token. We have to create a new one for each run; otherwise the token from the previous run is already cancelled and the iterations in the Parallel.ForEach won’t spawn.

The Token.Register call is pretty interesting. We’re registering a callback to fire when the token gets cancelled. Here we set IsRunning to false, but you can put any delegate here. This way, when code external to us cancels the token, we can update IsRunning accordingly.

Next we instantiate a ParallelOptions, setting the CancellationToken to the Token of the CancellationTokenSource.

The most important thing to note in all this is that cancelling the CancellationTokenSource doesn’t stop any running iterations; it just prevents more from being started. Because of this we have to check inside our while loop whether the CancellationToken has been cancelled and, if it has, stop. Cancelling the CancellationTokenSource also causes the Parallel.ForEach (which was handed the token via ParallelOptions) to throw an OperationCanceledException. This gives you a place to handle any clean-up that needs to happen when your Parallel.For or .ForEach did not run to completion. In our case I’m using a “WhileTrue” enumerator that just goes on infinitely.
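Pulled out of the sampler, the whole cooperative-cancellation pattern looks roughly like this self-contained sketch of my own, where CancelAfter stands in for the user clicking the Cancel Run button:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class CancellationSketch
{
	static IEnumerable<int> WhileTrue()
	{
		for (int i = 0; ; i++)
			yield return i;
	}

	static void Main()
	{
		var cts = new CancellationTokenSource();
		cts.Token.Register(() => Console.WriteLine("Token cancelled."));
		cts.CancelAfter(TimeSpan.FromSeconds(2)); // cancel from "outside" after two seconds

		var options = new ParallelOptions { CancellationToken = cts.Token };
		try
		{
			Parallel.ForEach(WhileTrue(), options, i =>
			{
				// The cooperative part: each iteration polls the token and exits when asked.
				while (!options.CancellationToken.IsCancellationRequested)
					Thread.Sleep(100);
			});
		}
		catch (OperationCanceledException)
		{
			// Thrown by Parallel.ForEach once the token is cancelled and the running iterations finish.
			Console.WriteLine("Operation has been cancelled.");
		}
	}
}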

Up next I’ll go into async/await.

Thanks,
Brian

The Task Parallel Library Sampler – Part 11: Cancelling Threads with the CancellationTokenSource – The MVVM

Previous Post in this series:
Part 10: Loop control of the Parallel.For and .ForEach

Next up we’ll work on cancelling threads with the CancellationTokenSource. This is really a two-parter, with the first part dealing with the changes I had to make to integrate a new view and the second part covering the sample model. There is an updated set of code available.

Since we’re going to want to cancel a thread once we’ve started it we’ll need a mechanism for the user to do so. Working from the model to the view model to the view we’ll see how I do this.

CancellationSample

public class CancellationSample : Sample
{
	public override string SampleName
	{
		get { return "Cancellation Sample"; }
	}

	public override bool ImageRequired
	{
		get { return false; }
	}

	bool isRunning = false;
	public bool IsRunning
	{
		get { return this.isRunning; }
		set
		{
			if (this.isRunning != value)
			{
				this.isRunning = value;
				this.RaisePropertyChanged("IsRunning");
			}
		}
	}
			
	public CancellationTokenSource CancellationTokenSource { get; set; }

	private static Random ran = new Random();

	public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
	{
		Stopwatch s = new Stopwatch();
		s.Start();

		IsRunning = true;

		CancellationTokenSource = new CancellationTokenSource();
		CancellationTokenSource.Token.Register(() => { IsRunning = false; });

		var options = new ParallelOptions { CancellationToken = CancellationTokenSource.Token };
		try
		{
			Parallel.ForEach(WhileTrue(), options, i =>
			{
				while (!options.CancellationToken.IsCancellationRequested)
				{
					UpdateLog("Sleeping in Cancellation Sample at " + i);
					Thread.Sleep(ran.Next(1000));
				}
			});
		}
		catch (OperationCanceledException)
		{
			UpdateLog("Operation has been cancelled."); ;
		}

		s.Stop();
		RunTime = s.Elapsed;
	}

	public static IEnumerable<int> WhileTrue()
	{
		for (int i = 0; ; i++)
			yield return i;
	}
}

Above you can see the CancellationTokenSource we’re using, but as I said I’ll go into further detail in the next post. For the most part this is virtually identical to the other samples provided. There are two new members exclusive to this class: the IsRunning property, which tracks when the Run method starts and ends, and the CancellationTokenSource itself.

CancellationSampleViewModel

public class CancellationSampleViewModel : SampleViewModel
{
	public CancellationSampleViewModel(CancellationSample Sample) : base(Sample) { }
	
	public void CancelRun()
	{
		((CancellationSample)Sample).CancellationTokenSource.Cancel();
	}
}

The view model for the CancellationSample couldn’t be much easier. As you know, view models act as a go-between for the model and the view. In this case we expose a method to cancel a run, which just calls Cancel on the CancellationTokenSource.

CancellationSampleView.xaml

<UserControl x:Class="TPLSamples.Views.CancellationSampleView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity"
             xmlns:ei="http://schemas.microsoft.com/expression/2010/interactions"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
             mc:Ignorable="d" 
             d:DesignHeight="300" d:DesignWidth="300">
    <UserControl.Resources>
        <BooleanToVisibilityConverter x:Key="BoolToVis" />
    </UserControl.Resources>
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition />
            <RowDefinition />
        </Grid.RowDefinitions>
        <CheckBox Grid.Row="0" IsChecked="{Binding Sample.IsEnabled, Mode=TwoWay}" Content="{Binding Sample.SampleName}" ToolTip="{Binding Sample.SampleName}" />
        <Button Grid.Row="1" Margin="5" Content="Cancel Run" Visibility="{Binding Path=Sample.IsRunning, Converter={StaticResource BoolToVis}}">
            <i:Interaction.Triggers>
                <i:EventTrigger EventName="Click">
                    <ei:CallMethodAction TargetObject="{Binding}" MethodName="CancelRun" />
                </i:EventTrigger>
            </i:Interaction.Triggers>
        </Button>
    </Grid>
</UserControl>

This view is similar to the generic SampleView.xaml with the addition of the button for cancelling the run. There are a few points of note. The first is the Interaction.Triggers section that fires on the button’s Click event. We bind to the CancelRun method of the view model mentioned previously, just like how the Submit/Reset buttons are set up. Next is the BooleanToVisibilityConverter in the resources of the control, a standard class available in System.Windows.Controls. We use it to set the visibility of the button based on whether the sample is running. As with the other view we bind directly to the model property, in this case IsRunning. As mentioned in a previous post this isn’t the truest form of MVVM, as these properties should be exposed on the view model; binding this way introduces a dependency directly between your view and model. In this instance, however, the code is a lot cleaner to understand when we just bind directly to the model, and I feel justified in doing it this way.
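For a stricter MVVM approach, the view could bind to a pass-through property on the view model instead of reaching into the model. Here is a sketch of what that might look like; it assumes the SampleViewModel base exposes the Sample property and a RaisePropertyChanged method, and that the Sample base class implements INotifyPropertyChanged, so treat the names as assumptions rather than code from the solution:

public class CancellationSampleViewModel : SampleViewModel
{
	public CancellationSampleViewModel(CancellationSample Sample) : base(Sample)
	{
		// Forward the model's change notification so bindings against the view model stay current.
		Sample.PropertyChanged += (s, e) =>
		{
			if (e.PropertyName == "IsRunning")
				RaisePropertyChanged("IsRunning");
		};
	}

	// The view binds to this instead of Sample.IsRunning.
	public bool IsRunning
	{
		get { return ((CancellationSample)Sample).IsRunning; }
	}

	public void CancelRun()
	{
		((CancellationSample)Sample).CancellationTokenSource.Cancel();
	}
}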

As with nearly all MVVM implementations, the code-behind is just the boilerplate code created for us.

Finally I need to discuss what I had to change in the code to support the new model. In the SamplerViewModel I added it just as I’ve added the other models and view models. What really had to change is the ItemsControl.

SamplerView.xaml ItemsControl

<ItemsControl Grid.Row="1" IsTabStop="False" ItemsSource="{Binding Samples}">
	<ItemsControl.ItemsPanel>
		<ItemsPanelTemplate>
			<WrapPanel Orientation="Horizontal" IsItemsHost="True" Utility:MarginUtilities.Margin="5" />
		</ItemsPanelTemplate>
	</ItemsControl.ItemsPanel>
	<ItemsControl.Resources>
		<DataTemplate DataType="{x:Type ViewModels:CancellationSampleViewModel}">
			<Views:CancellationSampleView DataContext="{Binding}" />
		</DataTemplate>
		<DataTemplate DataType="{x:Type ViewModels:SampleViewModel}">
			<Views:SampleView DataContext="{Binding}" />
		</DataTemplate>
	</ItemsControl.Resources>
	<!--<ItemsControl.ItemTemplate>
		<DataTemplate>
			<Views:SampleView DataContext="{Binding}" />
		</DataTemplate>
	</ItemsControl.ItemTemplate>-->
</ItemsControl>

I left the ItemTemplate for the SampleView commented out. Since we were originally using just the base SampleView, that approach made sense. Now that we’ve expanded the possible views, we need to provide a mapping from each view model to the correct view, and we do this in the resources. You can see all we’re doing is defining a view for each view model. The order of the templates is not important; template selection is based on the type of each item, so even if we flip the two data templates the final binding works correctly.

That’s all for now. As I mentioned, in next week’s post I’ll go into detail on the sample and what it’s doing.

Thanks,
Brian

The Task Parallel Library Sampler – Part 10: Loop control of the Parallel.For and .ForEach

Previous Post in this series:
Part 9: Basic Exception handling with the AggregateException

Generally we’re used to being able to break out of or continue a loop. If you try to continue or break out of a parallel loop you get:

No enclosing loop out of which to break or continue

“continue” is the easy case: just return. Breaking is a bit more complex. Do you want to stop all iterations? Do you want to run all iterations up to the point where you break? Well, you have a choice. Below, and in the included solution, are two samples showing how to handle loop control.

LoopBreakSample:

public class LoopBreakSample : Sample
{
	public override string SampleName
	{
		get { return "Loop Break Sample"; }
	}

	public override bool ImageRequired
	{
		get { return false; }
	}

	protected int MaxValue { get { return 50; } }
	protected int BreakValue { get { return 20; } }

	public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
	{
		Stopwatch s = new Stopwatch();
		s.Start();

		UpdateLog("Running to " + MaxValue);
		var loopResult = Parallel.For(0, MaxValue, (int i, ParallelLoopState loop) =>
		{
			UpdateLog("Starting " + i);
			if (i == BreakValue)
			{
				UpdateLog("Breaking " + i);
				loop.Break();
				return;
			}

			Thread.Sleep(100);
		});
		UpdateLog("IsCompleted == " + loopResult.IsCompleted);
		if (!loopResult.LowestBreakIteration.HasValue)
			UpdateLog("LowestBreakIteration has no value");
		else
			UpdateLog("LowestBreakIteration.Value == " + loopResult.LowestBreakIteration.Value);

		s.Stop();
		RunTime = s.Elapsed;
	}
}

There are a few things going on here besides your normal delegate for the loop. First, the lambda that defines the delegate takes a ParallelLoopState parameter, and it is this loop state on which we call .Break().

Second, we use the ParallelLoopResult to see whether the loop completed and what the lowest iteration was when Break() was called.

It is critical that you understand how break works. Per the documentation:

Break may be used to communicate to the loop that no other iterations after the current iteration need be run. For example, if Break() is called from the 100th iteration of a for loop iterating in parallel from 0 to 1000, all iterations less than 100 should still be run, but the iterations from 101 through to 1000 are not necessary.

This is very important. After Break() is called, the loop continues to run every iteration below the break point, just as they would have run before the break in a standard sequential loop, while iterations above it are no longer started. LowestBreakIteration is set so the caller knows at what point Break() was called.
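For long-running loop bodies it can also be worth polling the loop state so iterations that no longer need to run can bail out part way through. A small sketch of my own, written as a fragment you could drop into a Run method like the one above:

var result = Parallel.For(0, 50, (int i, ParallelLoopState loop) =>
{
	for (int step = 0; step < 100; step++)
	{
		// Once Break() has been called somewhere, ShouldExitCurrentIteration becomes true
		// for iterations that are no longer needed (those above the lowest break iteration),
		// letting a long-running body stop early.
		if (loop.ShouldExitCurrentIteration)
			return;

		if (i == 20)
		{
			loop.Break();
			return;
		}

		Thread.Sleep(10); // simulate a chunk of work
	}
});
UpdateLog("LowestBreakIteration == " + result.LowestBreakIteration);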

The result of running this will look similar to:

Starting Loop Break Sample
Running to 50
Starting 12
Starting 0
Starting 6
Starting 18
Starting 24
Starting 30
Starting 36
Starting 42
Starting 13
Starting 1
Starting 7
Starting 25
Starting 31
Starting 43
Starting 37
Starting 2
Starting 14
Starting 8
Starting 20
Breaking 20
Starting 3
Starting 15
Starting 9
Starting 16
Starting 10
Starting 17
Starting 5
Starting 11
IsCompleted == False
LowestBreakIteration.Value == 20
Completed Loop Break Sample

As you can see, the break is called at 20 like it should have been. New iterations, however, were still started to make sure everything below 20, where Break() was called, gets run.

UPDATE: As you read the above list of started iterations there seem to be a couple missing if we’re really running everything up to 20. I changed the maximum number of threads to spawn (with ParallelOptions) to 2 and fiddled with some other code to get it to break early. It worked as the documentation states, so I’m not sure why there are some missing numbers in the above results.

So what if you want to just stop new thread creation and not continue on? That is where Stop() is used.

LoopStopSample:

public class LoopStopSample : LoopBreakSample
{
	public override string SampleName
	{
		get { return "Loop Stop Sample"; }
	}

	public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
	{
		Stopwatch s = new Stopwatch();
		s.Start();

		UpdateLog("Running to " + MaxValue);
		var loopResult = Parallel.For(0, MaxValue, (int i, ParallelLoopState loop) =>
		{
			if (i > BreakValue)
			{
				UpdateLog("Stopping at " + i);
				loop.Stop();
				return;
			}
			UpdateLog("Starting at " + i);
			while(!loop.IsStopped)
			{
				Thread.Sleep(10);
			}
		});
		UpdateLog("IsCompleted == " + loopResult.IsCompleted);
		if (!loopResult.LowestBreakIteration.HasValue)
			UpdateLog("LowestBreakIteration has no value");
		else
			UpdateLog("LowestBreakIteration.Value == " + loopResult.LowestBreakIteration.Value);

		s.Stop();
		RunTime = s.Elapsed;
	}
}

Here Stop() is called when we reach a value greater than BreakValue. Stop() is different from Break() in that it causes no more iterations to be started at all. Any iterations already running are allowed to finish. When Stop() is called, ParallelLoopState.IsStopped is also set so other iterations know that they should stop. LowestBreakIteration will have no value, though; it is only set when Break() is called.

The result of running this will look similar to:

Starting Loop Stop Sample
Running to 50
Starting at 0
Starting at 6
Starting at 12
Starting at 18
Stopping at 24
Stopping at 30
IsCompleted == False
LowestBreakIteration has no value
Completed Loop Stop Sample

You can see that as soon as Stop() is called no more iterations are started, even though there are still a lot of iterations that were never run.

So to sum up:

Break():
- Thread creation: continues scheduling iterations up to the point where Break() was called, as if this were a standard loop. Any iterations already running are allowed to finish.
- LowestBreakIteration: set to the iteration at which Break() was first called.
- IsStopped: not set.

Stop():
- Thread creation: no more iterations are started. Any iterations already running are allowed to finish.
- LowestBreakIteration: not set.
- IsStopped: set to true when Stop() is called.

Up next is a sample using the CancellationTokenSource to cancel a running loop from outside of it.
Thanks,
Brian

The Task Parallel Library Sampler – Part 9: Basic Exception handling with the AggregateException

Previous Post in this series:
Part 8: Adding a New Sample, Matrices Multiplication

In the updated solution you’ll find two new models, AggregateExceptionNoCatchSample and AggregateExceptionCatchSample. The TPL provides a convenient exception handling mechanism in the form of the AggregateException.

If you run through a Parallel.For or .ForEach and an exception is thrown in one of the iterations, no more iterations are started, and any exceptions thrown across all of the iterations are wrapped in an AggregateException, which is thrown upon leaving the Parallel.For or .ForEach.
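In isolation the mechanism looks something like this minimal sketch; it is my own illustration, separate from the two samples below:

using System;
using System.Threading.Tasks;

class AggregateExceptionSketch
{
	static void Main()
	{
		try
		{
			Parallel.For(0, 100, i =>
			{
				if (i % 10 == 0)
					throw new InvalidOperationException("Failed at " + i);
			});
		}
		catch (AggregateException ae)
		{
			// Every exception thrown from the loop bodies ends up in InnerExceptions.
			foreach (Exception inner in ae.Flatten().InnerExceptions)
				Console.WriteLine(inner.Message);
		}
	}
}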

The first sample here, AggregateExceptionNoCatchSample, throws an exception when getting to row 100. Of course, remember that we’re spinning off threads, so the order in which rows are processed is unpredictable. The code could run all rows up to 100 and then throw, or it could get to row 400 after only processing row 8.

AggregateExceptionNoCatchSample:

public class AggregateExceptionNoCatchSample : Sample
{
	//used by DrawGreyScale to set the number of rows we started
	private int rowsStarted = 0;

	public override string SampleName
	{
		get { return "Aggregate Exception - Don't Catch Exceptions Sample"; }
	}

	public override bool ImageRequired
	{
		get { return true; }
	}

	public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
	{
		Stopwatch s = new Stopwatch();
		s.Start();

		try
		{
			rowsStarted = 0;
			DrawGreyScale(bmp, UpdateLog);
		}
		catch (AggregateException ae)
		{
			UpdateLog("Started " + rowsStarted + " rows.");
			//if an exception is handled and true is returned then nothing happens
			//if an exception isn't handled and false is returned then all unhandled exceptions are rewrapped in
			//a new AggregateException and thrown again.
			ae.Handle((x) =>
			{
				if (x is StackOverflowException) // This we know how to handle.
				{
					UpdateLog("Handling a stack overflow exception.");
					return true;
				}
				else
				{
					UpdateLog("Unhandled exception.");
				}
				return false; // Let anything else stop the application.
			});
		}

		s.Stop();
		RunTime = s.Elapsed;
	}

	private void DrawGreyScale(System.Drawing.Bitmap bmp, Action<string> UpdateLog)
	{
		System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
		int stride = bmData.Stride;
		System.IntPtr Scan0 = bmData.Scan0;
		unsafe
		{
			byte* start = (byte*)(void*)Scan0;

			int height = bmp.Height;
			int width = bmp.Width;

			Parallel.For(0, height, y =>
			{
				UpdateLog("Starting line " + y.ToString());
				Interlocked.Increment(ref rowsStarted);
				byte* p = start + (y * stride);
				for (int x = 0; x < width; ++x)
				{
					byte blue = p[0];
					byte green = p[1];
					byte red = p[2];

					p[0] = p[1] = p[2] = (byte)(.299 * red
						+ .587 * green
						+ .114 * blue);

					p += 3;
				}

				if (y >= 100)
				{
					UpdateLog("Throwing an exception at " + y);
					if (y % 2 == 0)
						throw new StackOverflowException("yeah, we got a stack overflow.");
					else
						throw new ArgumentNullException("yeah, we got a null argument.");
				}
			});
		}
		bmp.UnlockBits(bmData);
	}
}

As mentioned, an exception is thrown in DrawGreyScale upon getting to row 100. In all likelihood multiple rows greater than or equal to 100 will have been started. All of the exceptions those rows throw get combined into an AggregateException, which has a property named InnerExceptions that contains all of them. For even rows a StackOverflowException is thrown (for no reason other than I wanted to throw that type) and for odd rows an ArgumentNullException is thrown (for the same reason as the StackOverflowException).

In the .Run of AggregateExceptionNoCatchSample the call to .DrawGreyScale is wrapped in a try/catch for an AggregateException. Reading the documentation on exception handling in the TPL, it is recommended that you don’t wrap a call like this in a try/catch and then do nothing with the exceptions. Uh, yeah, well, I hope you wouldn’t catch exceptions and not do something with them.

In the catch, .Handle is called, which invokes the predicate passed in for each exception in the InnerExceptions of the AggregateException, so you can handle each one individually. If you handle an exception the predicate should return true so the AggregateException knows not to do anything further with it. If you don’t handle it, return false. Any exceptions that aren’t handled are wrapped in a new AggregateException and thrown out of the Handle method. This way, if you get multiple types of exceptions and one appears that you weren’t expecting, it can be handled higher up the stack.

In the sample it may actually kill the app because I only handle the StackOverflowException and not the ArgumentNullException which will get re-thrown.

AggregateExceptionCatchSample:

public class AggregateExceptionCatchSample : Sample
{
	//used by DrawGreyScale to set the number of rows we started
	private int rowsStarted = 0;

	public override string SampleName
	{
		get { return "Aggregate Exception - Catch Exceptions Sample"; }
	}

	public override bool ImageRequired
	{
		get { return true; }
	}

	public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
	{
		Stopwatch s = new Stopwatch();
		s.Start();

		try
		{
			rowsStarted = 0;
			DrawGreyScale(bmp, UpdateLog);
		}
		catch (AggregateException ae)
		{
			UpdateLog("Started " + rowsStarted + " rows.");

			ae.Handle((x) =>
			{
				UpdateLog("Handling an exception.");
				return true;
			});
		}

		s.Stop();
		RunTime = s.Elapsed;
	}

	private void DrawGreyScale(System.Drawing.Bitmap bmp, Action<string> UpdateLog)
	{
		ConcurrentQueue<Exception> exceptions = new ConcurrentQueue<Exception>();
		System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
		int stride = bmData.Stride;
		System.IntPtr Scan0 = bmData.Scan0;
		unsafe
		{
			byte* start = (byte*)(void*)Scan0;

			int height = bmp.Height;
			int width = bmp.Width;

			Parallel.For(0, height, y =>
			{
				try
				{
					UpdateLog("Starting line " + y.ToString());
					Interlocked.Increment(ref rowsStarted);
					byte* p = start + (y * stride);
					for (int x = 0; x < width; ++x)
					{
						byte blue = p[0];
						byte green = p[1];
						byte red = p[2];

						p[0] = p[1] = p[2] = (byte)(.299 * red
							+ .587 * green
							+ .114 * blue);

						p += 3;
					}

					if (y >= 100)
					{
						UpdateLog("Throwing an exception at " + y);
						if (y % 2 == 0)
							throw new StackOverflowException("yeah, we got a stack overflow.");
						else
							throw new ArgumentNullException("yeah, we got a null argument.");
					}
				}
				catch (StackOverflowException)
				{
					UpdateLog("Internally handled the StackOverflowException.");
				}
				catch (Exception e)
				{
					exceptions.Enqueue(e);
				}
			});
		}
		bmp.UnlockBits(bmData);

		if (exceptions.Count > 0)
			throw new AggregateException(exceptions);
	}
}

In the AggregateExceptionCatchSample the exceptions are caught inside the Parallel.For and dropped into a ConcurrentQueue if we can’t handle them there. Then, upon exiting the Parallel.For, if any exceptions were queued we throw a new AggregateException passing in the queue so it can be handled above us.

So why do this? The biggest advantage is that you may be able to handle some exceptions right in the iteration, with no reason to kill the whole loop. Any exceptions you can’t handle there can then bubble up in the AggregateException. In this sample we assume we can handle all exceptions in .Run and just return true in the .Handle of the catch so the app doesn’t die for a demo, but you will only want to return true if you actually handle the exception and false if you don’t. Up next is stopping and breaking in a Parallel.For and .ForEach.

Thanks,
Brian

The Task Parallel Library Sampler – Part 8: Adding a New Sample, Matrices Multiplication

Part One: Starting with MVVM
Part Two: The MVVM solution structure and basic framework
Part Three: Base Classes
Part 4: Sampler View, View Model and Model
Part 5: Running and working with the TPL samples
Part 6: Parallel.For Sample
Part 7: Using Parallel.For effectively

We’re going to add two new samples to continue showing the benefits of utilizing the TPL by doing some matrix multiplication. As mentioned before, I added this sample to the initial code I created for the mentoring session on the TPL because it was directly applicable to some of the work we do. The samples derive from a sample provided by Microsoft in How to: Write a Simple Parallel.For Loop and are pretty straightforward. What’s more important in this post is what I had to do to add these new samples to the MVVM solution.

MatricesMultiplicationSample:

public class MatricesMultiplicationSample : Sample
{
	// Set up matrices. Use small values to better view 
	// result matrix. Increase the counts to see greater 
	// speedup in the parallel loop vs. the sequential loop.
	protected static readonly int colCount = 180;
	protected static readonly int rowCount = 2000;
	protected static readonly int colCount2 = 270;

	//protected statics so these can be used in the other sample
	static readonly Lazy<double[,]> _lazyMatrix1 = new Lazy<double[,]>(() => { return InitializeRandomMatrix(rowCount, colCount); });
	protected static double[,] Matrix1
	{
		get { return _lazyMatrix1.Value; }
	}

	static readonly Lazy<double[,]> _lazyMatrix2 = new Lazy<double[,]>(() => { return InitializeRandomMatrix(colCount, colCount2); });
	protected static double[,] Matrix2
	{
		get { return _lazyMatrix2.Value; }
	}

	static Random ran = new Random();
	static double[,] InitializeRandomMatrix(int rows, int cols)
	{
		double[,] matrix = new double[rows, cols];

		for (int i = 0; i < rows; i++)
		{
			for (int j = 0; j < cols; j++)
			{
				matrix[i, j] = ran.Next(100);
			}
		}
		return matrix;
	}

	public override string SampleName
	{
		get { return "Matrices Multiplication"; }
	}

	public override bool ImageRequired
	{
		get { return false; }
	}

	public override void Run(System.Drawing.Bitmap bmp = null, Action<string> updateLog = null)
	{
		double[,] matA = Matrix1;
		double[,] matB = Matrix2;
		double[,] result = new double[rowCount, colCount2];

		Stopwatch s = new Stopwatch();
		s.Start();

		int matACols = matA.GetLength(1);
		int matBCols = matB.GetLength(1);
		int matARows = matA.GetLength(0);

		for (int i = 0; i < matARows; i++)
		{
			for (int j = 0; j < matBCols; j++)
			{
				for (int k = 0; k < matACols; k++)
				{
					result[i, j] += matA[i, k] * matB[k, j];
				}
			}
		}

		s.Stop();
		RunTime = s.Elapsed;
	}
}

We need to share the matrix values between the two models we have for demonstrating matrix multiplication. As such, I’ve decided to put these values in a class that extends Sample and then have the second model extend this class. We’re using the Lazy<> class to initialize our matrices; because of that I don’t start the stopwatch until after the matrices have been retrieved, so the initialization doesn’t affect the time. Since we’re not using an image, ImageRequired returns false.
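In isolation, the Lazy<T> pattern used here looks like this small sketch of my own:

using System;

class LazySketch
{
	// The factory delegate runs once, on first access to .Value, and Lazy<T> is
	// thread-safe by default, so concurrent callers all see the same instance.
	static readonly Lazy<double[,]> _matrix =
		new Lazy<double[,]>(() => new double[2000, 180]);

	static void Main()
	{
		Console.WriteLine(_matrix.IsValueCreated); // False - nothing built yet
		double[,] m = _matrix.Value;               // the factory runs here
		Console.WriteLine(_matrix.IsValueCreated); // True
	}
}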

MatricesMultiplicationParallelSample:

public class MatricesMultiplicationParallelSample : MatricesMultiplicationSample
{
	public override string SampleName
	{
		get { return "Matrices Multiplication Parallel"; }
	}

	public override void Run(System.Drawing.Bitmap bmp = null, Action<string> updateLog = null)
	{
		double[,] matA = Matrix1;
		double[,] matB = Matrix2;
		double[,] result = new double[rowCount, colCount2];

		Stopwatch s = new Stopwatch();
		s.Start();

		int matACols = matA.GetLength(1);
		int matBCols = matB.GetLength(1);
		int matARows = matA.GetLength(0);

		// A basic matrix multiplication.
		// Parallelize the outer loop to partition the source array by rows.
		Parallel.For(0, matARows, i =>
		{
			for (int j = 0; j < matBCols; j++)
			{
				double temp = 0;
				for (int k = 0; k < matACols; k++)
				{
					temp += matA[i, k] * matB[k, j];
				}
				result[i, j] = temp;
			}
		});

		s.Stop();
		RunTime = s.Elapsed;
	}
}

For the second model, the only big difference is replacing the outer loop with a Parallel.For. I’ve also overridden the SampleName, but other than that it just uses the values it inherited from the base.

What is really exciting to me is that to add a new sample there are only two places you have to modify.

SamplerViewModel.ctor:

public SamplerViewModel()
{
	Samples = new ObservableCollection<SampleViewModel>();
	Sampler = new Sampler();
	Sampler.Samples.Add(new LineSample());
	Sampler.Samples.Add(new LineParallelSample());
	Sampler.Samples.Add(new GreyScaleSample());
	Sampler.Samples.Add(new GreyScaleParallelSample());
	Sampler.Samples.Add(new GreyScaleDoubleParallelSample());
	Sampler.Samples.Add(new MatricesMultiplicationSample());
	Sampler.Samples.Add(new MatricesMultiplicationParallelSample());
	ResetSampler();
}

SamplerViewModelFactory.maps dictionary:

private static Dictionary<Type, Func<Sample, SampleViewModel>> maps = new Dictionary<Type, Func<Sample, SampleViewModel>>()
{
	{ typeof(LineSample), (q) => new SampleViewModel((Sample)q)},
	{ typeof(LineParallelSample), (q) => new SampleViewModel((Sample)q)},
	{ typeof(GreyScaleSample), (q) => new SampleViewModel((Sample)q)},
	{ typeof(GreyScaleParallelSample), (q) => new SampleViewModel((Sample)q)},
	{ typeof(GreyScaleDoubleParallelSample), (q) => new SampleViewModel((Sample)q)},
	{ typeof(MatricesMultiplicationSample), (q) => new SampleViewModel((Sample)q)},
	{ typeof(MatricesMultiplicationParallelSample), (q) => new SampleViewModel((Sample)q)}
};

In the first code sample I add an instance of each of the two new models to the constructor of our ViewModel. In the second code sample I add a mapping of the sample type to the view model.

And that is it. Because we’re using the same ViewModel and View for these two models adding them is easy. Now binding just takes care of wiring everything up.

Starting Matrices Multiplication
Completed Matrices Multiplication
Matrices Multiplication ran in 00:00:01.8307101

Starting Matrices Multiplication Parallel
Completed Matrices Multiplication Parallel
Matrices Multiplication Parallel ran in 00:00:00.3797019

As with our other samples, where we give the parallel loops enough work we get significant benefits when using the TPL and Parallel.For, nearly 5 times faster in my case, though as always you may see better or worse times based on your situation.

In the next post we’ll go over a sample that shows how to handle exceptions in a Parallel.For and Parallel.ForEach using AggregateException.

Thanks,
Brian

The Task Parallel Library Sampler – Part 7: Using Parallel.For effectively

Part One: Starting with MVVM
Part Two: The MVVM solution structure and basic framework
Part Three: Base Classes
Part 4: Sampler View, View Model and Model
Part 5: Running and working with the TPL samples
Part 6: Parallel.For Sample

In the last post we discussed where using a Parallel.For isn’t effective. The answer is fairly straightforward: Parallel.For (and by extension Parallel.ForEach) isn’t effective when you can’t give it enough work. Spinning off threads from the thread pool has its own overhead, and if you can’t give the threads enough work it doesn’t make sense. Today we are going to discuss using Parallel.For effectively and what you have to change to convert from a for to a Parallel.For.

GreyScaleSample.Run()

public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
{
	if(bmp == null)
		throw new InvalidOperationException("Bitmap must be defined.");
	
	Stopwatch s = new Stopwatch();
	s.Start();

	System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
	int stride = bmData.Stride;
	System.IntPtr Scan0 = bmData.Scan0;
	unsafe
	{
		byte* p = (byte*)(void*)Scan0;
		byte red, green, blue;

		for (int y = 0; y < bmp.Height; ++y)
		{
			for (int x = 0; x < bmp.Width; ++x)
			{
				blue = p[0];
				green = p[1];
				red = p[2];

				p[0] = p[1] = p[2] = (byte)(.299 * red
					+ .587 * green
					+ .114 * blue);

				p += 3;
			}
		}
	}
	bmp.UnlockBits(bmData);

	s.Stop();
	RunTime = s.Elapsed;
}

In the above sample we iterate over the image, starting at the first row (Scan0, redefined as “p” for pixel just for clarity of the code) and then iterating over the columns in that row. A bitmap is laid out as one long byte array where, for this pixel format, every three bytes are the blue, green and red values (in that order, which is the opposite of what we might expect) that make up a pixel. The length of a row in bytes is the stride; for a 24bpp image this is roughly three times the pixel width, padded out to a 4-byte boundary. We read the RGB values and set all three bytes of the pixel to the grey value of that color. We then advance the pointer by 3 (the 3 bytes of one pixel) and move on to the next one.

There is some messy pointer stuff here, but all in all the code should be clear about what we’re doing.
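To make the pointer arithmetic concrete, the byte offset of any pixel can be computed directly from the stride. A small helper sketch of my own, assuming the same 24bpp LockBits setup as above:

// For PixelFormat.Format24bppRgb each pixel is 3 bytes stored as B, G, R, and each row
// starts 'stride' bytes after the previous one (the stride can be slightly larger than
// width * 3 because rows are padded to a 4-byte boundary).
static unsafe byte GetGreyValue(byte* scan0, int stride, int x, int y)
{
	byte* p = scan0 + (y * stride) + (x * 3);
	byte blue = p[0], green = p[1], red = p[2];
	return (byte)(.299 * red + .587 * green + .114 * blue);
}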

GreyScaleParallelSample.Run()

public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
{
	if(bmp == null)
		throw new InvalidOperationException("Bitmap must be defined.");
	
	Stopwatch s = new Stopwatch();
	s.Start();

	System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
	int stride = bmData.Stride;
	System.IntPtr Scan0 = bmData.Scan0;
	unsafe
	{
		byte* start = (byte*)(void*)Scan0;

		int height = bmp.Height;
		int width = bmp.Width;

		Parallel.For(0, height, y =>
		{
			byte* p = start + (y * stride);
			for (int x = 0; x < width; ++x)
			{
				byte blue = p[0];
				byte green = p[1];
				byte red = p[2];

				p[0] = p[1] = p[2] = (byte)(.299 * red
					+ .587 * green
					+ .114 * blue);

				p += 3;
			}
		});
	}
	bmp.UnlockBits(bmData);

	s.Stop();
	RunTime = s.Elapsed;
}

In the Parallel.For sample things are a bit different and these differences are important.

First off, we have to remember that the iterations of the Parallel.For run on separate threads, potentially at the same time. As such there can’t be any shared variables that the iterations modify (at least not without using Interlocked, but that’s a different post). Imagine if the pixel pointer were shared between the threads like it is in the first sample: if the thread pool spins up 10 threads they would all be advancing the same pointer and stepping on each other. That is why the code here is changed to recalculate the pointer for the start of the row at the beginning of each iteration.

Second, we move the declaration of the bytes for blue, green and red into the inner loop. They were only declared outside the loops in the first sample to make the code more evident; moving them isn’t really a functional change. (See the small sketch below for an illustration of the shared-state point.)
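Going back to the first point, here is a contrived sketch of my own (unrelated to the bitmap code) showing why shared mutable state across iterations is a problem, along with the Interlocked fix mentioned above. It’s a fragment you could drop into any of the Run methods:

int racy = 0;
int safe = 0;

Parallel.For(0, 100000, i =>
{
	racy++;                          // read/modify/write interleaves across threads
	Interlocked.Increment(ref safe); // atomic increment
});

// racy usually ends up less than 100000; safe is always exactly 100000.
UpdateLog("racy == " + racy + ", safe == " + safe);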

GreyScaleDoubleParallelSample.Run()

public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
{
	if(bmp == null)
		throw new InvalidOperationException("Bitmap must be defined.");
	
	Stopwatch s = new Stopwatch();
	s.Start();

	System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
	int stride = bmData.Stride;
	System.IntPtr Scan0 = bmData.Scan0;
	unsafe
	{
		byte* start = (byte*)(void*)Scan0;

		int height = bmp.Height;
		int width = bmp.Width;

		Parallel.For(0, height, y =>
		{
			Parallel.For(0, width, x =>
			{
				byte* p = (start + (y * stride)) + (x * 3);
				byte blue = p[0];
				byte green = p[1];
				byte red = p[2];

				p[0] = p[1] = p[2] = (byte)(.299 * red
					+ .587 * green
					+ .114 * blue);
			});
		});
	}
	bmp.UnlockBits(bmData);

	s.Stop();
	RunTime = s.Elapsed;
}

Finally we have a sample that works pretty much like LineParallelSample.Run() (except here we’re setting the pixel to grey instead of black). The code parallelizes over the rows and then, within each row, parallelizes again over the pixels. Again, we have to move the pixel pointer inside the inner Parallel.For since this value is modified and must be unique to each iteration.

Running the samples you will get results similar to:

Resetting Image
Starting Grey Scale Sample
Completed Grey Scale Sample
Grey Scale Sample ran in 00:00:00.0268376

Resetting Image
Starting Grey Scale Parallel Sample
Completed Grey Scale Parallel Sample
Grey Scale Parallel Sample ran in 00:00:00.0020127

Resetting Image
Starting Grey Scale Double Parallel Sample
Completed Grey Scale Double Parallel Sample
Grey Scale Double Parallel Sample ran in 00:00:00.0037469

These are the results from running with the included image of my son, which at 93KB is a small image.

I have another image I test against which is ~8MB. This results in:

Resetting Image
Starting Grey Scale Sample
Completed Grey Scale Sample
Grey Scale Sample ran in 00:00:02.2118701

Resetting Image
Starting Grey Scale Parallel Sample
Completed Grey Scale Parallel Sample
Grey Scale Parallel Sample ran in 00:00:00.1626661

Resetting Image
Starting Grey Scale Double Parallel Sample
Completed Grey Scale Double Parallel Sample
Grey Scale Double Parallel Sample ran in 00:00:00.2232706

You can see by these results that the Parallel.For sample runs nearly 14 times faster. This is major. Comparing the Parallel.For sample with the double Parallel.For sample, though, the nested loop is actually detrimental in this case. As with the parallel line sample, you don’t get any benefit from adding the internal Parallel.For just to set a single pixel. Again, depending on how you use the Parallel.For, you may have a case where you can give the internal threads enough work that it is beneficial, just not here.

Up next I’m going to add two new models showing matrix multiplication. That sample is actually similar to the grey scale samples here, but I added it to the original source because we do a lot of matrix operations and I wanted to show a clear, real-world example that was directly applicable to the work we do.

Thanks,
Brian

The Task Parallel Library Sampler – Part 6: Parallel.For Sample

Part One: Starting with MVVM
Part Two: The MVVM solution structure and basic framework
Part Three: Base Classes
Part 4: Sampler View, View Model and Model
Part 5: Running and working with the TPL samples

In the solution directory Models, you will find the LineSample and LineParallelSample models. These are fairly straightforward samples.

LineSample.Run()

public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
{
	if(bmp == null)
		throw new InvalidOperationException("Bitmap must be defined.");
	
	double X1 = 0, Y1 = 0;
	double X2 = bmp.Width - 1, Y2 = bmp.Height - 1;

	Stopwatch s = new Stopwatch();
	s.Start();

	double slope = (Y2 - Y1) / (X2 - X1);
	double beta = Y1 - (slope * X1);

	System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
	int stride = bmData.Stride;
	System.IntPtr Scan0 = bmData.Scan0;
	unsafe
	{
		byte* startPos = (byte*)(void*)Scan0;
		for (int x = (int)X1; x <= X2; x++)
		{
			GeneralMathOperations.DrawPixelByPointSlope(slope, beta, stride, startPos, x);
		}
	}
	bmp.UnlockBits(bmData);

	s.Stop();
	RunTime = s.Elapsed;
}

We start a stopwatch, then calculate our slope and beta for the point-slope formula. Since we want to use the image so that we can compare the same operation between the two samples, we have to use some pointer operations that make all this a lot easier. Then we run through a for loop that just moves from the top left to the bottom right and sets each pixel in a line to black. We then unlock the image, stop the stopwatch and set our RunTime to the time it took for the loop to run.

LineParallelSample.Run()

public override void Run(System.Drawing.Bitmap bmp = null, Action<string> UpdateLog = null)
{
	if(bmp == null)
		throw new InvalidOperationException("Bitmap must be defined.");

	double X1 = 0, Y1 = 0;
	double X2 = bmp.Width - 1, Y2 = bmp.Height - 1;

	Stopwatch s = new Stopwatch();
	s.Start();

	double slope = (Y2 - Y1) / (X2 - X1);
	double beta = Y1 - (slope * X1);

	System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
	int stride = bmData.Stride;
	System.IntPtr Scan0 = bmData.Scan0;
	unsafe
	{
		byte* startPos = (byte*)(void*)Scan0;
		Parallel.For((int)X1, (int)X2 + 1, x =>
		{
			GeneralMathOperations.DrawPixelByPointSlope(slope, beta, stride, startPos, x);
		});
	}
	bmp.UnlockBits(bmData);

	s.Stop();
	RunTime = s.Elapsed;
}

In LineParallelSample the only difference is that the for loop has been replaced with a Parallel.For. Parallel.For’s first parameter is where to start, the second is where to end and the third is the body of the action. It’s important to remember that the “from” is inclusive, meaning it includes the value, and the “to” is exclusive, meaning the loop runs up to but does not include it; Parallel.For(from, to, body) covers the same indices as for (int x = from; x < to; x++). That is why in LineSample we run to x <= X2 but in LineParallelSample we run to X2 + 1, which ends up being x < X2 + 1. We want to make sure we include that last pixel. What is really interesting is the results. Running the samples a few times with the included image (one of my son) you will see results similar to these:

Resetting Image
Starting Line Sample
Completed Line Sample
Line Sample ran in 00:00:00.0009594

Resetting Image
Starting Parallel Line Sample
Completed Parallel Line Sample
Parallel Line Sample ran in 00:00:00.0009714

Changing to a much larger image you end up with similar results.
So, what is so exciting? Well, the Parallel.For doesn’t help. But, but… well, that can’t be right.
Ah, however, it is right. When using Parallel.For and Parallel.ForEach you have to remember that there is a bit of overhead in managing the threads, spinning them up and context switching. I wrote this sample to explicitly show that the TPL isn’t a magic bullet. In order to maximize your use of the TPL you have to give each thread enough work. In this sample all each iteration does is draw a single pixel. That is trivial, and the overhead of the threading doesn’t justify using the TPL in this instance.
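One common remedy when the per-iteration work is this small is to hand each thread a chunk of the range rather than a single index. Here is a hedged sketch using Partitioner.Create; it is not something the sample solution does, just an option to keep in mind:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PartitionerSketch
{
	static void Main()
	{
		double[] data = new double[10000000];

		// Each body invocation receives a (fromInclusive, toExclusive) range, so the work
		// handed to each thread is large enough to pay for the scheduling overhead.
		Parallel.ForEach(Partitioner.Create(0, data.Length), range =>
		{
			for (int i = range.Item1; i < range.Item2; i++)
				data[i] = Math.Sqrt(i);
		});

		Console.WriteLine("Done: " + data[data.Length - 1]);
	}
}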

In the next post we’ll go over the three grey scale samples where using a Parallel.For and Parallel.ForEach make a huge difference.

Thanks,
Brian

The Task Parallel Library Sampler – Part 5: Running and working with the TPL samples

Part One: Starting with MVVM
Part Two: The MVVM solution structure and basic framework
Part Three: Base Classes
Part 4: Sampler View, View Model and Model

There is a new version of the solution available.

Finally, before getting into the actual TPL samples there is one last thing to cover: how do we run the samples? As previously discussed, the Submit button in SamplerView is bound to SamplerViewModel.Submit().

public async void Submit()
{
	CurrentState = "Running";
	await Sampler.RunSamples();
	CurrentState = "Completed";
}

Using async/await, the view model runs the samples in the model via Sampler.RunSamples(). The async/await is critical here so the UI isn’t locked while the samples run.

Here is Sampler.RunSamples()

public async Task RunSamples()
{
	await Task.Run(() =>
	{
		System.Drawing.Bitmap bmp = null;
		foreach (var sample in Samples)
		{
			if(!sample.IsEnabled)
				continue;

			if (sample.ImageRequired)
			{
				UpdateResults("Reseting Image");
				ResetDestinationImage();
				bmp = GetBitmapFromDestinationImage();
			}
			UpdateResults("Starting " + sample.SampleName);
			sample.Run(bmp, UpdateResults);
			UpdateResults("Completed " + sample.SampleName);
			UpdateResults(sample.SampleName + " ran in " + sample.RunTime.ToString() + Environment.NewLine);
		}

		if (bmp != null)
		{
			SetDestinationImageFromBitmap(bmp);
		}
	});
}

private void UpdateResults(string result)
{
	Results += result + Environment.NewLine;
}

The async keyword in the method declaration is what allows the ViewModel to run this method asynchronously. For an async method to actually run asynchronously it must contain an await; otherwise the compiler warns you and the method runs synchronously. To get that here, I call Task.Run with an await, so the work is handed off to the thread pool and the method resumes when it completes. There’s a bit more to it than that, but I’ll discuss async/await in more detail in the TPL sample for it.

The really interesting thing here is passing the UpdateResults method into the Run method of the sample model. You’ll recall from the “base classes” post that the abstract sample model takes a bitmap and an Action<string>. The SamplerView has a text box that is bound to the Results property of our Sampler model. This way we get real-time updates from the samples as they run. Since they’re running on their own thread (via the await Task.Run), the UI isn’t blocked, and the binding takes care of marshalling the text update to the main thread so we don’t have to worry about updating the UI from the wrong thread.
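An alternative worth knowing about for this kind of real-time reporting is IProgress<T>/Progress<T>, which captures the UI synchronization context for you. A sketch of how that could look here; this is not what the sample solution does, and it assumes the same Results property on the model:

public async Task RunSamplesWithProgress()
{
	// Progress<T> is constructed before the await, on the UI thread, so its callback is
	// marshalled back to that thread automatically whenever Report is called from the worker.
	var progress = new Progress<string>(message => Results += message + Environment.NewLine);
	IProgress<string> reporter = progress;

	await Task.Run(() =>
	{
		reporter.Report("Starting samples");
		// ... run each sample, calling reporter.Report(...) as it goes ...
		reporter.Report("Done");
	});
}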

And that’s it. It’s pretty simple. Next up I’ll go over the first three samples included in the solution that demonstrate when to and when not to use Parallel.For (and by extension Parallel.ForEach).

Thanks,
Brian