CCR 101 - Part 8: Dispatchers

I've not covered the Dispatcher class in any detail before, and in all the examples so far I've only used DispatcherQueues created via their default constructor, which causes all tasks to be scheduled over the CLR thread-pool. The CCR Dispatcher class provides a means of scheduling tasks over your own custom thread-pool.

Being in control of the thread-pool creation gives you some control over the properties of the threads themselves (albeit as a homogeneous block - you can't set these properties on individual threads). These include the following (there's a small constructor sketch after the list):

  • The name of the threads. This is a useful diagnostic property.
  • Whether the threads will run in the background. Background threads will not keep the application alive should all the foreground threads exit.
  • The priority the threads will run at.
  • The COM apartment state of the threads. This is useful for interop.
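
All of these can be set when the pool is constructed, via one of the Dispatcher constructor overloads. As a quick sketch (the values and pool name here are purely illustrative, and it assumes the usual using directives for System.Threading and Microsoft.Ccr.Core):

// Thread count, priority, options, apartment state and pool name in one go.
Dispatcher dispatcher = new Dispatcher(
    2, ThreadPriority.Normal, DispatcherOptions.None, ApartmentState.MTA, "MyPool");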

First, though, let's adapt our first sample for use with a standard dispatcher:

using System;
using Microsoft.Ccr.Core;

class Sample0801 {
    static void Main(string[] args) {
        using (Dispatcher dispatcher = new Dispatcher()) {
            using (DispatcherQueue dq = new DispatcherQueue("Default", dispatcher)) {
                // Schedule a single task over the dispatcher's own threads.
                dq.Enqueue(new Task(delegate() {
                    Console.WriteLine("Hello world.");
                }));
            }
            Console.ReadLine();
        }
    }
}

Straightforward enough. First we create our dispatcher using the default constructor, which creates a fixed number of threads (more on this later). All threads will run in the foreground at normal priority, their apartment state will not be set and their names will be empty strings. Notice how we dispose of the dispatcher when we are finished with it. This ensures that all of its threads are properly destroyed.


So why use a dispatcher at all? Why not just use a default-constructed DispatcherQueue and schedule items over the CLR thread-pool? Well, you might want to control some aspects of the thread-pool via the properties discussed above. You might want to isolate your CCR task-execution threads from 3rd-party libraries that use the CLR thread-pool (or even from other CCR tasks, by using two or more dispatchers).
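
As a rough sketch of that last point (this is my own illustration rather than one of the numbered samples, and the pool and queue names are made up), two dispatchers, each with its own queue, keep the two groups of work on entirely separate threads:

using System;
using Microsoft.Ccr.Core;

class IsolationSketch {
    static void Main(string[] args) {
        // Two independent pools; tasks enqueued on one can never tie up the other,
        // nor the CLR thread-pool.
        using (Dispatcher libraryPool = new Dispatcher(2, "LibraryPool"))
        using (Dispatcher workerPool = new Dispatcher(2, "WorkerPool"))
        using (DispatcherQueue libraryQueue = new DispatcherQueue("LibraryQueue", libraryPool))
        using (DispatcherQueue workerQueue = new DispatcherQueue("WorkerQueue", workerPool)) {
            libraryQueue.Enqueue(new Task(delegate() { Console.WriteLine("3rd-party-facing work"); }));
            workerQueue.Enqueue(new Task(delegate() { Console.WriteLine("Internal work"); }));
            Console.ReadLine();
        }
    }
}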


Another important reason is that the execution policies which constrain how tasks are scheduled only work with custom dispatcher pools. We'll have a look at an example of this shortly.


You might even need the extra performance. Some of the flexibility of the CLR thread-pool - such as its dynamic sizing - comes at a small performance cost. Dispatcher-based thread-pools have a fixed number of threads and, because there is less management of those threads, they have been observed to be faster in some situations. Your mileage may vary.


So how many threads does a dispatcher-pool get? Well, the default constructor will create one thread per CPU on a multi-processor box, and two threads on a uniprocessor box. You can use one of the constructor overloads to specify an exact number of threads, or you can set the static Dispatcher.ThreadsPerCpu property prior to using the default constructor. And even with the overloads, you can always pass 0 for the number of threads and let the CCR work it out for you.
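
To make those options concrete, here's a quick sketch of my own (the class and pool names are illustrative):

using System;
using Microsoft.Ccr.Core;

class ThreadCountSketch {
    static void Main(string[] args) {
        // Affects any dispatcher subsequently built with the default constructor.
        Dispatcher.ThreadsPerCpu = 2;

        using (Dispatcher defaultPool = new Dispatcher())               // sized from ThreadsPerCpu
        using (Dispatcher fixedPool = new Dispatcher(4, "FixedPool"))   // exactly four threads
        using (Dispatcher autoPool = new Dispatcher(0, "AutoPool")) {   // 0 = let the CCR decide
            Console.WriteLine("Pools created; build DispatcherQueues over them as usual.");
        }
    }
}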


COM Interop

Whilst it might be preferable to have all of your application in managed code, there's a good chance that you still have COM components in your existing application which you'll need to interop with in the short term. In principle, if your components are marked 'Both' or 'Free' threaded, you don't have to do anything - they'll happily live on the same threads as your tasks and can be called directly from those threads, without any COM proxies.


If, however, those components are marked 'Apartment', then you need to tread carefully (as always). If you create these components on a vanilla dispatcher thread, you'll get a proxy. When you call through this proxy, you'll incur the marshaling cost, but more importantly, the calling thread will be forced to sleep whilst the recipient thread processes the call. What's worse is that in the absence of an appropriate STA thread, the COM runtime will spin one up for you and put the COM object on that thread. In fact, it will spin up only one, and put all STA instances on that thread, forming another serialization bottleneck.


What you do about this depends on how you use the COM object. If you can get away with a create-use-destroy model, then you can create n STA threads under a dispatcher, register tasks that follow that model, and just schedule receives to do the work. The following (albeit contrived) example creates 2 STA threads and then activates a receive which, each time a message is posted, creates an instance of the Windows Media Player, reads its version info and then releases it.

using System;
using System.Runtime.InteropServices;
using System.Threading;
using Microsoft.Ccr.Core;
using WMPLib;

class Sample0802 {
    static void Main(string[] args) {
        // Two STA threads, normal priority, named "STAPool".
        using (Dispatcher dispatcher = new Dispatcher(
            2, ThreadPriority.Normal, DispatcherOptions.None, ApartmentState.STA, "STAPool")) {
            using (DispatcherQueue dq = new DispatcherQueue("Default", dispatcher)) {
                Port<EmptyValue> signalPort = new Port<EmptyValue>();
                Arbiter.Activate(dq, Arbiter.Receive(true, signalPort, delegate(EmptyValue ev) {
                    // Create, use and release the COM object entirely within one STA thread.
                    IWMPPlayer player = new WindowsMediaPlayerClass();
                    Console.WriteLine(player.versionInfo);
                    Marshal.ReleaseComObject(player);
                }));

                string line = Console.ReadLine();
                while (line != "quit") {
                    signalPort.Post(EmptyValue.SharedInstance);
                    line = Console.ReadLine();
                }
            }
        }
    }
}

If the COM object is long-lived, then you need a different strategy, which basically involves creating single-threaded STA dispatchers and the judicious use of IterativeTasks, so I shall defer that sample until I've covered iterators properly.


Task Execution Policies

The following example shows how task execution policies can be used to constrain scheduling. It generates an int every second, but the receiver takes two seconds to process each one. We use a scheduling policy that constrains the queue depth to 1, which effectively means that a newly-arrived item will replace an unprocessed one. If the receiver can't keep up, it only ever has to process the latest item.

using System;
using System.Threading;
using Microsoft.Ccr.Core;

class Sample0803 {
    static void Main(string[] args) {
        using (Dispatcher dispatcher = new Dispatcher(1, "PolicyDemo")) {
            // Constrain the queue to a depth of one, discarding older unprocessed tasks.
            using (DispatcherQueue dq = new DispatcherQueue(
                "Default", dispatcher, TaskExecutionPolicy.ConstrainQueueDepthDiscardTasks, 1)) {
                Port<int> intPort = new Port<int>();
                Arbiter.Activate(dq, Arbiter.Receive(true, intPort, delegate(int i) {
                    // The receiver deliberately takes twice as long as the producer.
                    Console.WriteLine(i.ToString());
                    Thread.Sleep(TimeSpan.FromSeconds(2));
                }));

                for (int i = 0; i < 20; ++i) {
                    intPort.Post(i);
                    Thread.Sleep(TimeSpan.FromSeconds(1));
                }
                Console.ReadLine();
            }
        }
    }
}

You might have noticed that I've created a single-threaded Dispatcher here. That's really to force the demo to work. If I use a default dispatcher on the machine I'm currently sitting at, which has a single hyper-threaded CPU, I get two threads, and two threads is enough to process all the items in turn. Even if one thread blocks for two seconds, the other thread is free to process the next item.


That's all I want to cover about Dispatchers for now. I'll come back to them soon when I demonstrate a strategy for dealing with long-running STA objects, but the next topic will be about Iterators or IterativeTasks, arguably one of the most powerful features of the CCR.

