Concurrent programming - 6

Abstract

Issue 5 of this series ended with a small program in which two processes exchanged ten numbers through a message queue, acting as a synchronized producer-consumer pair. This time I am going to show and comment on the code of a very simple communication simulator written in C. The code leverages IPC queues to let multiple processes talk to each other while running concurrently.
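
As a reminder of where we left off, here is a minimal sketch of that producer-consumer exchange over a System V message queue; the names and details are mine, not the listing from issue 5.

    /* Minimal producer-consumer sketch: the parent sends ten numbers
     * through a System V message queue, the child receives them. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <sys/wait.h>

    struct msgbuf {
        long mtype;          /* message type, must be > 0 */
        int  number;         /* payload: one number per message */
    };

    int main(void)
    {
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0660);
        if (qid < 0) { perror("msgget"); exit(1); }

        if (fork() == 0) {                       /* child: consumer */
            struct msgbuf msg;
            for (int i = 0; i < 10; i++) {
                msgrcv(qid, &msg, sizeof(int), 1, 0);
                printf("received %d\n", msg.number);
            }
            exit(0);
        }

        struct msgbuf msg = { .mtype = 1 };      /* parent: producer */
        for (int i = 0; i < 10; i++) {
            msg.number = i;
            msgsnd(qid, &msg, sizeof(int), 0);
        }
        wait(NULL);
        msgctl(qid, IPC_RMID, NULL);             /* remove the queue */
        return 0;
    }

Here a single message type is enough; with several processes on the same queue, the mtype field can be used to address messages to a specific recipient, which is exactly the kind of mechanism a switch-like program can build on.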

The simulator

The program simulates a messaging switch, where multiple users (child processes) connect to a central switch (parent process) and send text messages (SMS-like) to other users through it. The switch receives messages and routes them if the recipient is reachable (its process is running). Moreover, the switch can perform a timing operation on a user, checking how much time the user needs to answer a message, or select it for termination. Last, the switch counts how many times a user sends a message to an unreachable user and, when a threshold is reached, terminates it. Switch and user decisions ... more
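
One possible shape for the records that switch and users could exchange over the queues is sketched below; this layout is purely illustrative and is not the structure used in the article's actual code.

    /* Hypothetical message layout for such a simulator: switch and users
     * exchange records like this through System V message queues. */
    #include <sys/msg.h>

    #define TEXT_SIZE 160                 /* SMS-like text length */

    struct sim_message {
        long mtype;                       /* recipient id, used to address the message */
        int  sender;                      /* user id of the sender */
        int  recipient;                   /* user id of the final recipient */
        int  service;                     /* e.g. 0 = text, 1 = timing request, 2 = terminate */
        char text[TEXT_SIZE + 1];         /* the message body */
    };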


Concurrent programming - 5

Abstract

In the past articles we introduced the concept of concurrent programming and studied a first solution to the problem of intercommunication: semaphores. As explained, semaphores allow us to manage access to shared resources, which is a first and very simple form of synchronization. In this issue, we will go one step further, introducing a richer tool for concurrent programming: message queues.

Limits of semaphores

Ruling access to a resource through semaphores is a good form of synchronization, even if it is a very limited one. Indeed, it prevents two or more processes from simultaneously modifying shared data, but it does not allow them to exchange information.
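
To make the limitation concrete, here is a minimal sketch of what ruling access through System V semaphores amounts to (error handling omitted, names are mine): the processes can only agree on who touches the resource, they cannot pass data through the semaphore itself.

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    static int semid;

    static void sem_lock(void)             /* P(): wait and decrement */
    {
        struct sembuf op = { 0, -1, 0 };
        semop(semid, &op, 1);
    }

    static void sem_unlock(void)           /* V(): increment, waking waiters */
    {
        struct sembuf op = { 0, +1, 0 };
        semop(semid, &op, 1);
    }

    /* usage sketch:
     *   semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0660);
     *   semctl(semid, 0, SETVAL, 1);      // binary semaphore, initially free
     *   sem_lock();
     *   ... touch the shared resource ...
     *   sem_unlock();
     */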

Moreover, the basic use of semaphores does not guarantee the absence of dangerous situations. As an example, remember that, in a multitasking environment, we do not know in advance which process will be executed and when. This means that a resource could be inaccessible for a process since it is always ... more


Concurrent programming - 4

Abstract

In the past issue of this series of articles about concurrent programming, we started to concern ourselves with the problems of synchronization between processes. In this installment, we will investigate the subject further, introducing some structures and functions collectively known as Unix System V IPC.

IPC: InterProcess Communication

Communication between processes, either running on the same machine or over a network, is one of the most interesting topics in computer science and, despite its age, new solutions to this problem keep arising. With the widespread availability of Internet access, the subject has now shifted a little towards pure network communication, which represents just a part of the techniques known as IPC: InterProcess Communication.

This abbreviation encompasses several different scopes in the field of multiprocessing, the most important ones being synchronization, shared data ... more
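
As a quick preview, and only as a sketch of my own, the three kinds of System V IPC objects (message queues, semaphore sets, shared memory segments) are all obtained through a similar "get" call and removed through a matching control call:

    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <sys/sem.h>
    #include <sys/shm.h>

    int main(void)
    {
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0660);        /* message queue   */
        int sid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0660);     /* semaphore set   */
        int mid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0660);  /* shared memory   */

        /* ... use the objects, then remove them when done ... */
        msgctl(qid, IPC_RMID, NULL);
        semctl(sid, 0, IPC_RMID);
        shmctl(mid, IPC_RMID, NULL);
        return 0;
    }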


Concurrent programming - 3

Abstract

In the first article, we stepped into the world of multitasking, going over its meaning and the reasons behind its existence. In the second article, we met the fundamental fork operation and wrote our first multitasking code. In this article, we will go ahead and introduce ourselves to the topic of synchronization: the problem we face now is to unleash the full power of multitasking, that is, to share the work between processes.

In the first part of the article, I want to take a step back to the basics of multitasking to clarify the single/multiple CPU question. Then we will start talking about data shared between processes, first looking at the problems that arise and then writing some code.

Simultaneous execution

We should never forget that the main reason behind multitasking on a single CPU is to give the impression of simultaneous execution, not to speed up execution.

Let's clarify this statement. A single CPU can execute only one operation at a time, so ... more


Concurrent programming - 2

Abstract

In the past article, I introduced the concept of process and how important it is for our operating system: in this new issue of the Concurrent Programming series we will go on and begin to write multitasking code. We will start from one of the basic concepts of the whole picture, the forking operation. Afterwards, we will start to introduce the problem of process synchronization and communication, which we will face in depth in a later installment.
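
As a foretaste of the forking operation the article builds on, here is a minimal sketch (not the article's own listing) in which one process becomes two:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();              /* duplicate the current process */

        if (pid == 0) {
            printf("child:  pid %d\n", getpid());
        } else if (pid > 0) {
            printf("parent: pid %d, child is %d\n", getpid(), pid);
            wait(NULL);                  /* collect the child's exit status */
        } else {
            perror("fork");              /* fork failed */
            return 1;
        }
        return 0;
    }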

The C standard library

The C standard library, nicknamed libc, is a collection of functions for the C language which has been standardized by ANSI and later by ISO. An implementation of it can be found in any major compiler or operating system, especially Unix-like ones.

In the GNU ecosystem, on which Linux distributions are built, the library is implemented by glibc (GNU libc). The library reached version 2.17 in December 2012 and is free software released under the LGPL, a less restrictive version of the GPL which most ... more


Concurrent programming - 1

Abstract

I originally published this series of articles on concurrent programming on LinuxFocus, starting in November 2002, and I have now carefully reviewed them. Feel free to notify me of any errors you find.

The purpose of this article (and the following ones in this series) is to introduce the reader to the concept of multitasking and to its implementation in the Linux operating system. We will start from the theoretical concepts at the base of multitasking, and from those we will move towards advanced concepts like multithreading and interprocess communication. We will write some code and explore some Linux internals along the way, and I will constantly try to show how real-world operating systems realize the concepts I am introducing.

Prerequisites

Prerequisites for understanding this article are:

  • Minimal knowledge of the text shell (bash)
  • Basic knowledge of C language (syntax, loops, libraries)

Even if I wrote the articles ... more