Monitor Design Pattern with Semaphore

This continues my series on ways you’ve probably used design patterns in real life and may not even have known it. The previous post was on the Locking, Double-Checked Locking, Lazy and Proxy Design Patterns.

In the last post we began to look at design patterns outside of the standard creational, structural and behavioral categories, venturing into concurrency design patterns. The concurrency patterns are critical because of the rise of multi-core processors and easy-to-use threading libraries.

In the last write-up I used the concept of a context switch to show the benefits of double-checked locking. The limitation of that framing is that there may not be a context switch at all; two threads may genuinely be executing the same piece of code at the same time. Context switching was how single-core processors gave the appearance of multi-threading while everything still ran sequentially. Of course we still have context switching: with only 4 or 8 cores, supporting hundreds of threads still requires it.

But now we run into the very real problem of multiple threads attempting to access what may be limited resources at the same time. As we saw in the last post, we can lock around specific code to limit access to it across threads. The monitor design pattern consists of an object, the monitor, that controls when threads can enter code blocks. Locking in .NET is the easiest form of the monitor design pattern: lock(lockobj) is just syntactic sugar wrapping the Monitor class. Being the easiest form, however, it is also the simplest, in that it doesn’t provide much flexibility.
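To make that concrete, a lock statement expands to roughly the Monitor calls shown below; the class and field names here are just illustrative:

```csharp
using System.Threading;

class Counter
{
    private readonly object _lockObj = new object();
    private int _count;

    public void Increment()
    {
        // lock (_lockObj) { _count++; } is roughly sugar for:
        bool lockTaken = false;
        try
        {
            Monitor.Enter(_lockObj, ref lockTaken);
            _count++;  // only one thread at a time runs this line
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(_lockObj);
        }
    }

    public int Count
    {
        get { lock (_lockObj) { return _count; } }
    }
}
```

The try/finally is what guarantees the monitor is released even if the protected code throws, which is exactly what the compiler emits for lock.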

There are a lot of different types of monitors, as shown in the reference above, but I wanted to talk about one specific type: the semaphore. The premise of the semaphore is pretty straightforward: limit the number of threads that have access to a block of code. It’s like lock, but you control the thread count. In .NET we have two semaphore types, Semaphore and SemaphoreSlim. In general you will want a SemaphoreSlim, as it is lightweight and fast, but it is intended only for cases where wait times for the resources will be short. But how short is short? Well, that’s something you will have to investigate on your own, as there is no guidance from Microsoft other than “short”. For more information on the differences between Semaphore and SemaphoreSlim, please read the Microsoft article addressing it.
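As a quick sketch of the two APIs side by side (the counts here are arbitrary):

```csharp
using System.Threading;

// SemaphoreSlim: lightweight, in-process only, supports async waits (WaitAsync)
var slim = new SemaphoreSlim(initialCount: 2, maxCount: 2);
slim.Wait();      // blocking wait; takes one of the two slots
slim.Release();   // gives the slot back

// Semaphore: wraps an OS kernel object; can be named and shared across processes
var heavy = new Semaphore(initialCount: 2, maximumCount: 2);
heavy.WaitOne();  // blocking wait on the kernel object
heavy.Release();
```

The cross-process capability is the main reason to reach for the heavier Semaphore; if everything stays inside one process, SemaphoreSlim is the better default.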

Using a SemaphoreSlim is incredibly easy. Here is a sample I’ve incorporated into my TPLSamples.

SemaphoreSlim pool = new SemaphoreSlim(2);  //limit the access to just two threads
List<Task> tasks = new List<Task>();
for (int i = 0; i < 5; i++)
{
	Task t = Task.Run(() =>
	{
		UpdateLog("Starting thread " + Thread.CurrentThread.ManagedThreadId);
		pool.Wait();  //wait until we can get access to the resources
		try
		{
			string result = new WebClient().DownloadString("http://msdn.microsoft.com");
			UpdateLog("Thread " + Thread.CurrentThread.ManagedThreadId + ": Length = " + result.Length);
		}
		finally
		{
			pool.Release();  //release the semaphore, even if the download throws
		}
	});
	tasks.Add(t);
}

Task.WaitAll(tasks.ToArray());

This results in the output:

Starting Semaphore Slim Sample
Starting thread 9
Starting thread 10
Starting thread 14
Starting thread 15
Starting thread 13
Thread 10: Length = 25146
Thread 9: Length = 25152
Thread 14: Length = 25152
Thread 15: Length = 25146
Thread 13: Length = 25146
Completed Semaphore Slim Sample
Semaphore Slim Sample ran in 00:00:02.0143689

It’s not obvious from this output that we’re limiting the download to just two threads at a time, but we are. Now, such a contrived example may not suffice and we may need to actually use this in production code. In that case you’ll want to do this asynchronously with async/await.

SemaphoreSlim pool = new SemaphoreSlim(2); //limit the access to just two threads
HttpClient client = new HttpClient();  //HttpClient is intended to be shared, not created per request
List<Task> tasks = new List<Task>();
for (int i = 0; i < 5; i++)
{
	Task t = Task.Run(async () =>
	{
		UpdateLog("Starting thread " + Thread.CurrentThread.ManagedThreadId);
		await pool.WaitAsync();  //wait until we can get access to the resources
		try
		{
			string result = await client.GetStringAsync("http://msdn.microsoft.com");
			UpdateLog("Thread " + Thread.CurrentThread.ManagedThreadId + ": Length = " + result.Length);
		}
		finally
		{
			pool.Release();  //release the semaphore, even if the request throws
		}
	});
	tasks.Add(t);
}

Task.WaitAll(tasks.ToArray());

This results in the output:

Starting Semaphore Slim Async Sample
Starting thread 9
Starting thread 15
Starting thread 14
Starting thread 13
Starting thread 17
Thread 19: Length = 25143
Thread 19: Length = 25149
Thread 21: Length = 25149
Thread 21: Length = 25143
Thread 21: Length = 25143
Completed Semaphore Slim Async Sample
Semaphore Slim Async Sample ran in 00:00:02.1258904

Wait a minute! How can the thread ids when we log the web page length be different from the threads that started the work? The framework tries to help you out by maintaining a SynchronizationContext. Covering the SynchronizationContext is outside the scope of an article on using a semaphore, but I would highly recommend reading the article It’s All About the SynchronizationContext, which covers the problems with using Task.Run and losing the current SynchronizationContext. See, if we have a SynchronizationContext, then awaiting isn’t a big deal: the framework will resume on the captured SynchronizationContext. When we use Task.Run to spin off a new task, that SynchronizationContext is lost, and we no longer return to the same thread after an await.

For us, in using a SemaphoreSlim, we don’t care. Losing the SynchronizationContext can be a big deal, however, especially in UI code where you need to get back to the main thread.
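Since the semaphore doesn’t care which thread releases it, the wait/release pair can also be wrapped in a small reusable helper. The Throttle class below is my own sketch, not part of the framework:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class Throttle
{
    // Run an async operation while holding a slot in the semaphore.
    public static async Task<T> RunAsync<T>(SemaphoreSlim gate, Func<Task<T>> operation)
    {
        await gate.WaitAsync();
        try
        {
            return await operation();
        }
        finally
        {
            gate.Release();  // always give the slot back, even if the operation throws
        }
    }
}
```

The finally block guarantees the slot is returned when the operation throws, so one failed download can’t permanently shrink the pool.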

So that’s really it, using a SemaphoreSlim to demonstrate the Monitor design pattern. I’ve updated my TPL Sampler to include these two samples.

Thanks for reading,
Brian
