Concurrent programming is becoming more accessible to developers nowadays within the .NET ecosystem. I am starting a series of blog posts that will show you how and when to use different forms of concurrency in .NET. In this first post we will make a smooth entry into concurrency by covering motivations, definitions, forms of concurrency, key components in .NET, and starting recommendations. Subsequent posts will delve into parallel programming, asynchronous programming with C#, and reactive programming:

  1. Insights on .NET Concurrency: Introduction
  2. Insights on .NET Concurrency: Parallel Programming
  3. Insights on .NET Concurrency: Asynchronous Programming
  4. Insights on .NET Concurrency: Reactive Programming


  • Complexity abstracted away. If you ask anyone who has worked with threads in the past about their experience, they would probably say two things: 1. it's difficult, 2. it's difficult. Modern development frameworks and debugging tools have simplified handling threads and asynchronous operations.
  • CPU core proliferation. CPUs nowadays are much more affordable; you would struggle to find a smartphone or laptop fitted with a single-core CPU.
  • Improve performance by effectively utilising the underlying hardware (CPU and memory utilisation, execution time, throughput, latency, etc.). Because .NET is a managed execution environment, memory utilisation is as important as CPU.
  • In summary: modern applications require it, and thereby so do modern developers' skills.

When you start thinking about using concurrency, the first question that comes to mind is: where does your code spend its time running? If the time it takes to complete an operation is spent in the CPU doing number crunching, then it is called CPU bound. On the other hand, if the time is spent waiting for an input/output operation to complete, then it is called IO bound. Mathematical computations like finding prime numbers are CPU bound, whilst reading a file or querying a database is IO bound. It is important to know what sort of operation you are dealing with, as it affects your choice of concurrency form.
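To make the distinction concrete, here is a minimal sketch contrasting the two: a primality test that keeps the CPU busy, and a file read that mostly waits on the disk. The method names are illustrative, and `File.ReadAllTextAsync` assumes .NET Core 2.0 or later.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class BoundDemo
{
    // CPU bound: the time is spent in arithmetic on the processor.
    public static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    // IO bound: the time is spent waiting for the disk, not the CPU.
    public static Task<string> ReadFileAsync(string path) => File.ReadAllTextAsync(path);

    static void Main()
    {
        Console.WriteLine(IsPrime(104729)); // True (104729 is prime)
    }
}
```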


According to the Oxford English Dictionary, concurrency is: “The fact of two or more events or circumstances happening or existing at the same time”. In computing, in order to execute one operation you need one thread, and for multiple operations you need multiple threads, which is one form of concurrency, hence the term multi-threading. The main forms of concurrency, which are meant to address different problems, are:

Parallel processing means dicing a task into multiple smaller tasks and letting multiple threads execute them concurrently. Assuming one thread is used to iterate through an array of n elements and takes n units of clock time, if 2 threads were used the array would be diced into two smaller arrays; one thread deals with the first half of the array and the second thread with the second half, potentially taking n/2 units of time; with 3 threads n/3, and with n threads 1 unit of time. In other words, if a CPU has a single core then only one thread can execute at a time, if two cores then two threads can execute at a time, quad core then 4 threads, and so on.
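The array-splitting idea above can be sketched with the TPL's `Parallel.For`, which partitions the index range across worker threads for you. This is a minimal sketch, not a tuned implementation; the overload with per-thread local state avoids contending on the shared total for every element.

```csharp
using System;
using System.Threading.Tasks;

class ParallelSumDemo
{
    public static long ParallelSum(int[] data)
    {
        long total = 0;
        object gate = new object();

        // Partition the index range across worker threads; each partition
        // accumulates a thread-local sum, merged under a lock at the end.
        Parallel.For(0, data.Length,
            () => 0L,                                  // per-thread initial state
            (i, state, local) => local + data[i],      // loop body: add element to local sum
            local => { lock (gate) total += local; }); // merge each thread's local sum

        return total;
    }

    static void Main()
    {
        int[] numbers = new int[1000];
        for (int i = 0; i < numbers.Length; i++) numbers[i] = i + 1;
        Console.WriteLine(ParallelSum(numbers)); // 500500, the sum of 1..1000
    }
}
```

Note that for tiny arrays the partitioning overhead outweighs the gain; parallelism pays off when each chunk of work is substantial.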

Asynchronous programming uses mechanisms such as futures or callbacks to know when an operation has completed, thereby releasing the calling thread to do other work. For example, a UI thread handles a button click, and the click's event handler goes and retrieves some data at the other end of your Ethernet port; this could take anywhere from a few milliseconds to a few seconds. So instead of blocking the UI thread until the results come back, we can use asynchronous programming to release the UI thread, let it handle other user interactions, and once the request comes back, resume where the asynchronous operation originally started, i.e. the click event handler.
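As a console-sized sketch of that idea (no real UI or network here): `Task.Delay` stands in for the slow I/O call, and `await` releases the calling thread until the result is ready. The method names are illustrative, and `async Main` assumes C# 7.1 or later.

```csharp
using System;
using System.Threading.Tasks;

class AsyncDemo
{
    static async Task<string> FetchDataAsync()
    {
        // Simulate an IO-bound request; the thread is released while we wait.
        await Task.Delay(100);
        return "payload";
    }

    static async Task Main()
    {
        Task<string> pending = FetchDataAsync();        // kick off the operation
        Console.WriteLine("Caller keeps working...");   // runs before the "I/O" completes
        string result = await pending;                  // resume here when the result is ready
        Console.WriteLine(result);
    }
}
```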

Reactive programming is a programming paradigm built on the concept of asynchronous data streams. Each asynchronous event feeds into a stream of events, where the stream can be observed, manipulated, filtered, etc. It is a push model, not a pull: data arriving in the stream is pushed to the subscribers, instead of subscribers pulling data from a data source. Data streams could be anything from events, e.g. mouse clicks and window position updates, to variables as in simulation outputs, stock exchange ticks, or news feeds.

Building Blocks

.NET 4.0 included the Task Parallel Library (TPL). The TPL is centred on a new type called Task, to support a new way of programming: task-based programming. Despite Task being at the heart of both parallel and asynchronous programming, it has different meanings for developers. For parallel execution a Task takes a delegate (Action/Func) to run, whilst for asynchronous execution a Task represents an operation that will complete in the future: a future, or promise. Note that a Task is not a thread, but a Task will be executed by a thread.

Every time a task is created, it is added to a global queue. A TaskScheduler then dequeues a Task whenever the ThreadPool has a free worker thread to push it all the way to a CPU core. More details will be covered in subsequent posts of this series.
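The queue-to-worker-thread flow can be observed directly. In this sketch, `Task.Run` hands a delegate to the default scheduler, which executes it on a ThreadPool worker; `IsThreadPoolThread` confirms where it ran.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TaskQueueDemo
{
    static void Main()
    {
        // Task.Run queues the delegate to the default TaskScheduler,
        // which dispatches it to a free ThreadPool worker thread.
        Task<int> compute = Task.Run(() =>
        {
            Console.WriteLine($"On a pool thread: {Thread.CurrentThread.IsThreadPoolThread}");
            return 6 * 7;
        });

        // Reading Result blocks the caller until a worker has executed the task.
        Console.WriteLine(compute.Result); // 42
    }
}
```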

Figure: task and thread scheduling

On the other hand, reactive programming in .NET relies on an extension library called Rx (Reactive Extensions). At the time of writing, the Reactive Extensions library is not part of the .NET system DLLs. The two fundamental components of this library are IObservable<T> and IObserver<T>: an IObservable<T> notifies its subscribed IObserver<T> instances whenever an event occurs.
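Although Rx's operators live in the external library, the IObservable<T> and IObserver<T> interfaces themselves ship in the base class library, so the push model can be sketched without any extra packages. The class names below are illustrative; Rx provides far richer implementations of the same pattern.

```csharp
using System;
using System.Collections.Generic;

// A minimal observable source: subscribers are stored in a list,
// and Publish pushes each value to every current subscriber.
class TickSource : IObservable<int>
{
    private readonly List<IObserver<int>> observers = new List<IObserver<int>>();

    public IDisposable Subscribe(IObserver<int> observer)
    {
        observers.Add(observer);
        return new Unsubscriber(observers, observer);
    }

    public void Publish(int value)
    {
        foreach (var o in observers) o.OnNext(value); // push, not pull
    }

    private sealed class Unsubscriber : IDisposable
    {
        private readonly List<IObserver<int>> observers;
        private readonly IObserver<int> observer;

        public Unsubscriber(List<IObserver<int>> observers, IObserver<int> observer)
        {
            this.observers = observers;
            this.observer = observer;
        }

        // Disposing the subscription removes the observer from the stream.
        public void Dispose() => observers.Remove(observer);
    }
}

class PrintObserver : IObserver<int>
{
    public void OnNext(int value) => Console.WriteLine($"Received {value}");
    public void OnError(Exception e) { }
    public void OnCompleted() { }
}

class Demo
{
    static void Main()
    {
        var source = new TickSource();
        using (source.Subscribe(new PrintObserver()))
        {
            source.Publish(1); // pushed to the subscriber
            source.Publish(2);
        }
    }
}
```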


Before setting off to write concurrent code, in any of its forms, to optimise your application, take the following into account:

  • Choose the right performance metrics. Is it wall-clock time, CPU utilisation, latency, throughput, etc.? A UI application optimises for latency, hence responsiveness, whilst a server application is concerned with increasing throughput: how many requests can be processed in a time period.
  • Get the numbers. Use profiling tools to measure performance and identify bottlenecks. A bottleneck could be related to CPU, IO, memory (GC), or a mix of things. Coding for performance might break abstractions; optimising for memory could potentially make parts of your API leak abstraction.
  • Don’t start optimising for performance early unless it is a functional requirement of your application. Without any measurements you might cause yourself more harm than good.
  • Set SMART goals. For example, clicking a button to read from a database should not freeze the UI buttons and progress bar, or 90% of database requests should have a latency of less than 100ms.