SoFunction
Updated on 2025-03-07

How to minimize the complexity of multi-threaded C# code

Forking, or multithreaded programming, is one of the most difficult things to get right in programming. This is due to its parallel nature, which demands a completely different mindset than linear, single-threaded programming. A fitting analogy is juggling several balls in the air without letting them interfere with each other, which is a real challenge. With the right tools and the right mindset, however, it can be met.

This article takes an in-depth look at some of the tools I have written to simplify multithreaded programming and to avoid problems such as race conditions and deadlocks. You could say the toolchain is based on syntactic sugar and the magic of delegates. To paraphrase the great jazz musician Miles Davis, though: in music, silence is more important than sound. The magic happens in the pauses between the notes.

From another perspective, it is not so much about what you can code as about what you can choose not to code, because the small miracles come from the lines of code you leave out. To quote Bill Gates, “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” So my hope is to help developers write less code, not to teach them how to write more.

The Challenge of Synchronization

The first problem you encounter in multithreaded programming is synchronizing access to shared resources. It occurs when two or more threads share access to an object and may try to modify it at the same time. When C# was first released, the lock statement provided a basic mechanism that ensures only one thread at a time can access a given resource (such as a data file), and it works well. The lock keyword is easy to understand, and on its own it changes the way we can think about this problem.
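As a quick refresher, here is a minimal sketch of what that looks like; the Counter class and its field names are invented purely for illustration.

// Minimal illustration of the lock statement: only one thread at a
// time can execute the guarded block against the shared field.
class Counter {
  readonly object _locker = new object ();
  int _count;

  public void Increment () {
    lock (_locker) {
      _count += 1;
    }
  }
}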

A simple lock has a major drawback, however: it does not distinguish between read-only access and write access. For example, you might have 10 different threads that only read from a shared object, and you could grant all of them access to the instance simultaneously, without any problems, through the ReaderWriterLockSlim class in the System.Threading namespace. Unlike the lock statement, this class lets you specify whether your code writes to the object or only reads from it. It allows multiple readers in at the same time, but denies any writer access until all other readers and writers have finished their work.

The problem is that if you use ReaderWriterLockSlim directly, the syntax becomes cumbersome: the large amount of repeated code not only hurts readability, it also increases maintenance complexity over time, and the code ends up littered with try and finally blocks. Even a simple typo can have catastrophic effects that may be extremely difficult to track down later.
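To see why, here is a rough sketch of what a single reader and a single writer look like when ReaderWriterLockSlim is used directly. The SharedNames class below is an invented example, but every call site that touches the shared state has to repeat this same pattern.

using System.Collections.Generic;
using System.Threading;

// Using ReaderWriterLockSlim directly: every reader and writer must
// repeat the Enter/try/finally/Exit dance, which is easy to get wrong.
class SharedNames {
  readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim ();
  readonly List<string> _names = new List<string> ();

  public int CountNames () {
    _lock.EnterReadLock ();
    try {
      return _names.Count;
    } finally {
      _lock.ExitReadLock ();
    }
  }

  public void AddName (string name) {
    _lock.EnterWriteLock ();
    try {
      _names.Add (name);
    } finally {
      _lock.ExitWriteLock ();
    }
  }
}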

By encapsulating ReaderWriterLockSlim in a simple class, this problem is solved instantly: the repeated code disappears, and with it the risk that a small typo destroys a day's work. The class in Figure 1 is built entirely around lambdas. You could say it is syntactic sugar over a couple of delegates, plus the assumption that a couple of interfaces exist. Most importantly, it goes a long way toward honoring the Don't Repeat Yourself (DRY) principle.

Figure 1: Encapsulating ReaderWriterLockSlim

public class Synchronizer<TImpl, TIRead, TIWrite> where TImpl : TIWrite, TIRead {
  ReaderWriterLockSlim _lock = new ReaderWriterLockSlim ();
  TImpl _shared;

  public Synchronizer (TImpl shared) {
    _shared = shared;
  }

  public void Read (Action<TIRead> functor) {
    _lock.EnterReadLock ();
    try {
      functor (_shared);
    } finally {
      _lock.ExitReadLock ();
    }
  }

  public void Write (Action<TIWrite> functor) {
    _lock.EnterWriteLock ();
    try {
      functor (_shared);
    } finally {
      _lock.ExitWriteLock ();
    }
  }
}

There are only 27 lines of code in Figure 1, yet it makes sure objects stay synchronized across multiple threads. The class assumes the underlying type implements a read interface and a write interface. If, for some reason, you cannot change the underlying class whose access needs to be synchronized, you can also repeat the concrete type itself three times as the generic arguments and use it that way. Basic usage is shown in Figure 2.

Figure 2: Using the Synchronizer class

interface IReadFromShared {
  string GetValue ();
}

interface IWriteToShared {
  void SetValue (string value);
}

class MySharedClass : IReadFromShared, IWriteToShared {
  string _foo;

  public string GetValue () {
    return _foo;
  }

  public void SetValue (string value) {
    _foo = value;
  }
}

void Foo (Synchronizer<MySharedClass, IReadFromShared, IWriteToShared> sync) {
  sync.Write (x => {
    x.SetValue ("new value");
  });
  sync.Read (x => {
    Console.WriteLine (x.GetValue ());
  });
}

In the code in Figure 2, no matter how many threads are executing the Foo method, the Write delegate will never run while another Read or Write delegate is running. Multiple Read delegates, however, can run simultaneously, without try/finally blocks scattered all over the code and without repeating the same plumbing again and again. Note that synchronizing access to a simple string doesn't really make sense, because strings are immutable; I use a string here only to keep the example simple.

The basic idea is that every method that can modify the state of the instance goes into the IWriteToShared interface, while every method that only reads from the instance goes into the IReadFromShared interface. By splitting the concerns across two interfaces like this and implementing both on the underlying type, the Synchronizer class can synchronize access to the instance for you. This makes access much easier to synchronize, and you can do it in an almost declarative way.

When it comes to multithreaded programming, syntactic sugar like this can make the difference between success and failure. Debugging multithreaded code is notoriously difficult, and writing unit tests for synchronization problems can feel like a futile exercise.

If needed, you can create an overload of the type with only one generic parameter that inherits from the original Synchronizer class and passes its single type argument three times to the base class. That way you need neither a read interface nor a write interface, because you work with the concrete type directly. This approach forces you to decide manually where the Write method is required and where Read is enough, and it is slightly less safe, but it does make it easy to wrap types you cannot change into a Synchronizer instance, as the sketch below shows.
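A minimal sketch of such an overload might look like the following; this class is not part of Figure 1, and its exact shape is my assumption based on the description above.

// Sketch of a single-generic-parameter overload: the concrete type is
// used for both the read side and the write side, so no separate
// interfaces are needed.
public class Synchronizer<TImpl> : Synchronizer<TImpl, TImpl, TImpl> {
  public Synchronizer (TImpl shared)
    : base (shared)
  { }
}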

Lambda collections for your forks

Once you have taken the first step into the magical world of lambdas (or "delegates" in C#), it is not hard to imagine using them for more. For example, a recurring multithreading theme is having several threads contact other servers to fetch data and hand it back to the caller.

The simplest example is an application that reads the contents of 20 web pages and returns the HTML to a thread that builds some kind of aggregated result from all of them. Unless you create one thread per retrieval, this code runs much more slowly than necessary: perhaps 99 percent of the execution time is spent waiting for HTTP requests to return.

Running this code on a single thread is inefficient, yet the syntax for creating threads is verbose and error-prone. The more threads and accompanying helper objects you juggle, the worse it gets, and developers end up writing the same boilerplate over and over. Once you realize you can create a collection of delegates and a class that wraps them, you can spin up all the threads with a single method call, which makes creating threads much easier.

The code in Figure 3 creates two such lambdas and runs them in parallel. Note that this code comes from the unit tests of the first version of my Lizzie scripting language (/2FfH5y8).

Figure 3: Creating lambdas to run in parallel

public void ExecuteParallel_1 () {
  var sync = new Synchronizer<string, string, string> ("initial_");

  var actions = new Actions ();
  actions.Add (() => sync.Assign ((res) => res + "foo"));
  actions.Add (() => sync.Assign ((res) => res + "bar"));

  actions.ExecuteParallel ();

  string result = null;
  sync.Read (delegate (string val) { result = val; });
  Assert.AreEqual (true, "initial_foobar" == result || result == "initial_barfoo");
}

If you look at this code carefully, you will notice that the assertion does not assume any particular execution order for the two lambdas. Their order is not explicitly specified, because they run on different threads. This is possible because, with the Actions class used in Figure 3, you can add delegates to a collection and decide later whether to execute them in parallel or sequentially.

To do so, you first create the lambdas and then execute them with whichever mechanism you prefer. In Figure 3 you can also see the Synchronizer class described above, used here to synchronize access to the shared string. Figure 3 uses a new Synchronizer method, Assign, which is not part of the listing in Figure 1; it applies the same "lambda trick" used in the Read and Write methods.
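A minimal sketch of what Assign might look like, assuming it simply replaces the shared instance with whatever the delegate returns while holding the write lock (this is my reconstruction, not the article's original listing):

// A possible Assign method to add to the Synchronizer class from
// Figure 1. Assumption: it passes the current shared instance to the
// delegate and stores whatever the delegate returns, inside the
// write lock.
public void Assign (Func<TImpl, TImpl> functor) {
  _lock.EnterWriteLock ();
  try {
    _shared = functor (_shared);
  } finally {
    _lock.ExitWriteLock ();
  }
}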

If you want to study the implementation of the Actions class, be sure to download version 0.1 of Lizzie, because I completely rewrote the code in later versions when it became a standalone programming language.
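If you only want the general idea without digging through the Lizzie source, here is a minimal sketch of what such a wrapper could look like. It is not the Lizzie implementation; the class shape and the ExecuteParallel/ExecuteSequentially names are assumptions based on how the class is used and described in this article.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Sketch of an Actions-like wrapper around a collection of delegates.
public class Actions {
  readonly List<Action> _actions = new List<Action> ();

  public void Add (Action action) {
    _actions.Add (action);
  }

  // Runs every registered delegate on a thread-pool thread and waits
  // for all of them to finish.
  public void ExecuteParallel () {
    var tasks = new List<Task> ();
    foreach (var action in _actions)
      tasks.Add (Task.Run (action));
    Task.WaitAll (tasks.ToArray ());
  }

  // Runs the delegates one after another on the calling thread.
  public void ExecuteSequentially () {
    foreach (var action in _actions)
      action ();
  }
}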

Functional programming in C#

Most developers tend to think of C# as almost synonymous with, or at least closely related to, object-oriented programming (OOP), and that is certainly true. But by rethinking how you use C# and exploring its other capabilities, some problems become much easier to solve. OOP in its current form does not lend itself easily to code reuse, in no small part because it is so strongly typed.

For example, to reuse a class you must also reuse every class that the initial class references, whether it uses them through composition or through inheritance. Reusing the class also forces you to drag along every class those classes reference in turn, and so on. If the classes are implemented in different assemblies, you may have to add a whole range of assemblies just to gain access to a single method on a single type.

I once read an analogy that illustrates the problem well: "What I wanted was a banana, but what I got was a gorilla holding the banana, and the entire jungle the gorilla lives in." Compare this with a more dynamic language such as JavaScript, which doesn't care about the type at all, as long as it exposes the functions the consuming code needs. A slightly more loosely typed approach yields code that is more flexible and easier to reuse. Delegates give you exactly that.

C# can give you the same benefits and improve code reuse across projects. You only need to realize that functions and delegates are also objects, and that collections of such objects can be handled in a weakly typed way.
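As a tiny illustration of the idea (the dictionary keys and lambdas below are invented for this example), delegates can be stored, passed around and looked up like any other objects:

using System;
using System.Collections.Generic;

// Delegates kept in a dictionary and invoked by name: the consumer only
// has to agree on the delegate signature, not on a class hierarchy.
class FunctionTable {
  static readonly Dictionary<string, Func<int, int>> _functions =
    new Dictionary<string, Func<int, int>> {
      ["double"] = x => x * 2,
      ["square"] = x => x * x
    };

  static void Main () {
    Console.WriteLine (_functions ["double"] (21)); // prints 42
    Console.WriteLine (_functions ["square"] (6));  // prints 36
  }
}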

Back in the November 2018 issue of MSDN Magazine, I published an article titled "Create Your Own Script Language with Symbolic Delegates" (/magazine/mt830373), and the ideas about delegates presented here grew out of that article. It also introduces Lizzie, my home-grown scripting language, whose existence owes everything to this delegate-centric mindset. Had I created Lizzie with classic OOP techniques, I believe it would have been at least an order of magnitude larger.

Of course, OOP and strong typing dominate today, and it is nearly impossible to find a job description that doesn't demand them. I should admit that I have been writing OOP code for more than 25 years, so I am as guilty of this bias as anyone. But these days I take a more pragmatic approach to coding and have lost interest in how the class hierarchy ends up looking.

It's not that I can't appreciate a beautiful class hierarchy; it's that the returns diminish. The more classes you add to a hierarchy, the more bloated it becomes, until it collapses under its own weight. Sometimes the best design uses fewer methods, fewer classes, and more loosely coupled functions, so that the code is easy to extend without having to "bring in the gorilla and the jungle".

Which brings us back to the recurring theme of this article, inspired by Miles Davis's approach to music: less is more ("silence is more important than sound"). Code is no exception. The lines of code you leave out often work small miracles, and the best solutions are measured as much by what you don't code as by what you do. Anyone can blow a trumpet, but only a few can make music with it, and fewer still can, like Miles, work miracles with it.

Original author: Thomas Hansen

Original address: Minimize Complexity in Multithreaded C# Code
