Programming Languages/Concurrent Languages

Concurrent programming is a computer programming technique that provides for the execution of operations concurrently - either within a single computer, or across a number of systems. In the latter case, the term distributed computing is used. Multiprocessor machines achieve better performance by taking advantage of this kind of programming.

Concurrent programming is now the only way to make a CPU-intensive program run twice as fast. (Historically, another popular approach was to wait a couple of years for a CPU with twice the clock speed to show up, but that trend ended around 2003.)[1][2]

In parallel programming, a single task is split into a number of subtasks that can be computed relatively independently and then aggregated to form a single coherent solution. Parallel programming is most effective for tasks that can easily be broken down into independent subtasks, such as purely mathematical problems, e.g., factorisation.
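
As a minimal sketch of this split-and-aggregate pattern, the following C program sums an array by giving each of two POSIX threads an independent half and then combining the partial results. The names (partial_sum, range, and so on) are our own choices for the example, not part of any library.

 #include <pthread.h>
 #include <stdio.h>
 
 #define N 1000
 
 static long data[N];
 
 struct range { int lo, hi; long sum; };
 
 /* Each thread sums its own independent sub-range. */
 static void *partial_sum(void *arg) {
     struct range *r = arg;
     r->sum = 0;
     for (int i = r->lo; i < r->hi; i++)
         r->sum += data[i];
     return NULL;
 }
 
 int main(void) {
     for (int i = 0; i < N; i++) data[i] = i;
 
     struct range halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };
     pthread_t t[2];
 
     /* Split: compute the two halves concurrently. */
     for (int i = 0; i < 2; i++)
         pthread_create(&t[i], NULL, partial_sum, &halves[i]);
     for (int i = 0; i < 2; i++)
         pthread_join(t[i], NULL);
 
     /* Aggregate: combine the independent partial results. */
     printf("total = %ld\n", halves[0].sum + halves[1].sum);
     return 0;
 }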

One way to achieve parallel programming is through distributed computing, which is a method of information processing in which work is performed by separate computers linked through a communications network.

Mutual Exclusion

Definition
Formally defined and solved by E.W. Dijkstra in 1965 [D65].
We have a collection of processes that alternate repeatedly between two sections of code: the non-critical section and the critical section (CS). The problem is to provide a synchronisation algorithm, in the form of a trying protocol and an exit protocol to be executed, respectively, immediately before and after the CS, so that each process has the following structure (sketched in code after the list of properties below):
  • non-critical section
  • trying protocol
  • critical section
  • exit protocol
so that the following properties hold:
  • Mutual Exclusion
No two processes are in their CS at the same time.
  • Lockout-freedom
If a process enters the trying protocol, then it eventually enters the CS.
  • Absence of unnecessary delay
A process attempting to enter its critical section is prevented from doing so only by another process that is executing its critical section or by other processes attempting to enter their critical sections.
  • Bounded exit
If a process enters the exit protocol, then it enters the non-critical section within a bounded number of its own steps.
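
The following is a minimal sketch of this structure in C. A POSIX mutex stands in for the trying and exit protocols: pthread_mutex_lock plays the role of the trying protocol and pthread_mutex_unlock the exit protocol. The loop bound, the printed messages, and the shared counter are arbitrary choices made for the example; any of the algorithms discussed below could be substituted for the mutex calls.

 #include <pthread.h>
 #include <stdio.h>
 
 static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
 static long shared_counter = 0;    /* touched only inside the CS */
 
 static void *process(void *arg) {
     int id = *(int *)arg;
     for (int round = 0; round < 3; round++) {
         /* non-critical section */
         printf("process %d: non-critical work\n", id);
 
         pthread_mutex_lock(&lock);     /* trying protocol */
         shared_counter++;              /* critical section */
         pthread_mutex_unlock(&lock);   /* exit protocol */
     }
     return NULL;
 }
 
 int main(void) {
     pthread_t t[2];
     int ids[2] = { 0, 1 };
     for (int i = 0; i < 2; i++)
         pthread_create(&t[i], NULL, process, &ids[i]);
     for (int i = 0; i < 2; i++)
         pthread_join(t[i], NULL);
     printf("shared_counter = %ld\n", shared_counter);
     return 0;
 }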

Solutions of Mutual Exclusion

According to [Y03], there are thousands of solutions to mutual exclusion, some invented by clever experts and some found by brute-force search with a computer.
Here are some famous ones:
Dijkstra's algorithm
The first solution.
According to [K66][D67], this algorithm allows the possibility that an individual computer, when contending for access, might have to wait indefinitely.
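
As a rough sketch only, the control structure described in [D65] can be rendered in C as below. Plain volatile variables stand in for the sequentially consistent shared memory the original assumes, and the names dijkstra_lock and dijkstra_unlock are ours; this is an illustration of the structure, not a faithful modern implementation. The starvation noted by [K66] and [D67] shows up as a process that keeps looping at the k != i test while faster processes repeatedly reclaim k.

 #define N 4                      /* number of processes (arbitrary here) */
 
 /* Shared state: flags b and c, all initially true, and k,
    the index of the process presumed to go next. */
 static volatile int b[N + 1] = { 1, 1, 1, 1, 1 };
 static volatile int c[N + 1] = { 1, 1, 1, 1, 1 };
 static volatile int k = 1;
 
 static void dijkstra_lock(int i) {      /* trying protocol, 1 <= i <= N */
     b[i] = 0;
 retry:
     if (k != i) {
         c[i] = 1;
         if (b[k]) k = i;                /* take the turn if its holder is idle */
         goto retry;
     }
     c[i] = 0;
     for (int j = 1; j <= N; j++)
         if (j != i && !c[j]) goto retry;   /* another process may be ahead */
     /* the caller now enters its critical section */
 }
 
 static void dijkstra_unlock(int i) {    /* exit protocol */
     c[i] = 1;
     b[i] = 1;
 }

These two functions slot into the process skeleton shown earlier, wrapped around the critical section.
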
Lamport's bakery algorithm

To make it easier to understand, we begin with two processes and a modified version of the original bakery algorithm:

For 2 processes
 // Shared variables; both are initially 0.
 integer turn0 = 0, turn1 = 0;
 
 Process p0 {
   // non-critical section
   // entry protocol
   turn0 = 1;                              // (*) discussed below
   turn0 = turn1 + 1;
   while (turn1 != 0 && turn0 > turn1) ;   // busy-wait
   // critical section
   // exit protocol
   turn0 = 0;
 }
 
 Process p1 {
   // non-critical section
   // entry protocol
   turn1 = 1;                              // (*)
   turn1 = turn0 + 1;
   while (turn0 != 0 && turn1 >= turn0) ;  // busy-wait
   // critical section
   // exit protocol
   turn1 = 0;
 }

Informal proof
  • Suppose p0 is attempting to enter its critical section while p1 is in its non-critical section. Then, p0 gains access because turn1 is 0;
  • Suppose p0 is attempting to enter its critical section while p1 is in its critical section. Then, p0 will busy-wait because turn1 is not 0 and turn0 was set to be greater than turn1. When p1 finishes its critical section, it sets turn1 to 0. There are two subcases, and in each case p0 will enter its critical section.
    • If p1 is executing in its non-critical section when p0 tests turn1 (i.e., turn1 is 0), then p0 will enter its critical section.
    • On the other hand, p1 might finish its non-critical section and set turn1 to turn0+1 as part of its entry protocol. When p0 retests this condition, it will find it false because turn1 is greater than turn0. So, p0 will enter its critical section.
  • Suppose that p0 and p1 begin execution of their entry protocols at about the same time and each completes its first assignment statement; so, each turn variable is 1. Suppose further that, say, p0 completes its second assignment statement and evaluates its condition before p1 executes its second assignment statement; so, turn0 is 2 and turn1 is 1. Then p0 will wait until p1 continues execution and sets turn1 to 3. Note how one process setting its turn variable to 1 forces the other process to wait until the first process has finished picking its value for the turn variable.
  • Suppose p0 and p1, as in the previous case, complete their first assignment statements, which set each turn variable to 1. Suppose now that execution of the second assignment statements is interleaved at the machine instruction level, that is, the following execution ordering can occur:
    • p0 reads turn1's value (1);
    • p1 reads turn0's value (1);
    • p0 adds 1 and stores the result (2) into turn0;
    • p1 adds 1 and stores the result (2) into turn1;
Although the turn variables end up with equal values, p1 will defer to p0 because its condition uses >= whereas p0's uses >.
Why the preliminary assignments?
The necessity of the preliminary assignments turn0 = 1 and turn1 = 1 (marked (*) in the listing above) might not be obvious: without them, a process could be delayed between reading the other process's turn variable and writing its own, allowing both processes to end up in the critical section at the same time.
See UCDAVIS: Bakery Algorithm for an in-depth discussion.
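
As a rough illustration, the two-process pseudocode above can be transcribed into compilable C along the following lines. C11 atomics with the default sequentially consistent ordering stand in for the atomic shared registers the algorithm assumes; the thread harness, iteration count, and counter are additions made for the example, and a production program would simply use a mutex.

 #include <pthread.h>
 #include <stdatomic.h>
 #include <stdio.h>
 
 static atomic_int turn0 = 0, turn1 = 0;
 static long counter = 0;           /* touched only in the critical section */
 
 static void *p0(void *arg) {
     for (int i = 0; i < 100000; i++) {
         /* entry protocol */
         atomic_store(&turn0, 1);                       /* preliminary assignment (*) */
         atomic_store(&turn0, atomic_load(&turn1) + 1);
         while (atomic_load(&turn1) != 0 &&
                atomic_load(&turn0) > atomic_load(&turn1))
             ;                                          /* busy-wait */
         counter++;                                     /* critical section */
         atomic_store(&turn0, 0);                       /* exit protocol */
     }
     return NULL;
 }
 
 static void *p1(void *arg) {
     for (int i = 0; i < 100000; i++) {
         /* entry protocol */
         atomic_store(&turn1, 1);
         atomic_store(&turn1, atomic_load(&turn0) + 1);
         while (atomic_load(&turn0) != 0 &&
                atomic_load(&turn1) >= atomic_load(&turn0))
             ;                                          /* busy-wait */
         counter++;                                     /* critical section */
         atomic_store(&turn1, 0);                       /* exit protocol */
     }
     return NULL;
 }
 
 int main(void) {
     pthread_t a, b;
     pthread_create(&a, NULL, p0, NULL);
     pthread_create(&b, NULL, p1, NULL);
     pthread_join(a, NULL);
     pthread_join(b, NULL);
     printf("counter = %ld (expected 200000)\n", counter);
     return 0;
 }

The busy-waiting loops make this far less efficient than an ordinary mutex; the point is only to show the protocol executing.
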
For N processes
 // declaration and initial values of global variables
 Entering: array [1..N] of bool = {false};
 Number: array [1..N] of integer = {0};
     
 lock(integer i)
 {
   Entering[i] = true;
   Number[i] = 1 + max(Number[1], ..., Number[N]);
   Entering[i] = false;
   for (j = 1; j <= N; j++) {
     // Wait until thread j receives its number:
     while (Entering[j]) { /* nothing */ }
     // Wait until all threads with smaller numbers or with the same
     // number, but with higher priority, finish their work:
     while ((Number[j] != 0) && ((Number[j], j) < (Number[i], i))) {
       /* nothing */
     }
   }
 }
 unlock(integer i) { Number[i] = 0; }
 
 Thread(integer i) {
   while (true) {
       lock(i);
       // The critical section goes here...
       unlock(i);
       // non-critical section...
   }
 }

In the pseudocode there is a lexicographic comparison of pairs of the form:

(a, b) < (c, d)

which is equivalent to:

(a < c) or ((a == c) and (b < d))
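
In plain C, which has no tuple comparison, this is typically written as a small helper; the sketch below (names ours) shows how the wait condition of the pseudocode could be expressed:

 /* Returns nonzero when the pair (a, b) is lexicographically
    smaller than the pair (c, d). */
 int pair_less(int a, int b, int c, int d) {
     return (a < c) || (a == c && b < d);
 }
 
 /* With this helper, the wait loop in lock(i) reads:
      while (Number[j] != 0 && pair_less(Number[j], j, Number[i], i))
          ;   // busy-wait
    The process index is the tie-breaker when two processes
    happen to draw the same number. */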

Why is it called the bakery algorithm?
Lamport envisioned a bakery with a numbering machine at its entrance, so each customer is given a unique number. Numbers increase by one as customers enter the store. A global counter displays the number of the customer currently being served. All other customers must wait in a queue until the baker finishes serving the current customer and the next number is displayed. When done shopping, the customer loses their number and must take a new number in order to shop again.

References


To do:
mention C. A. R. Hoare and his research in concurrent programming

[D65] Dijkstra, E.W., Solution of a problem in concurrent programming control. 1965
[D67] de Bruijn, N.G., Additional comments on a problem in concurrent programming control. 1967
[K66] Knuth, D.E., Additional comments on a problem in concurrent programming control. 1966
[O04] Olsson, Ronald; Keen, Aaron, The JR Programming Language: Concurrent Programming in an Extended Java. 2004
[Y03] Bar-David, Yoah; Taubenfeld, Gadi, Automatic Discovery of Mutual Exclusion Algorithms. 2003

Concurrent programming in specific languages

Further reading