Embedded Control Systems Design/Categories of system complexity
This chapter will discuss issues around system complexity.
What is system complexity?
Before dealing with issues such as categorisation, it is important to define what complexity means. One could be tempted to say that the bigger the system, the more complex it is. Let's take a look at an example:
An electrician is about to wire two buildings. One is a small house, the other is an apartment building. Obviously, a lot more work will go into wiring the second, and its fuse box will look more complex, but the key idea is the same for both: get the electricity to the light bulbs and plugs. The number of plugs doesn't matter: if you can do it for one, you can do it for a thousand. The scale is larger, but the complexity is the same!
When looking at complexity this way, it's clear that complexity is not defined by scale (in this case: the number of wires).
In this chapter, complexity is defined by the number of asynchronous subsystems and by the extent of their different communication and synchronization needs: every exchange of information between two activities that run in parallel disturbs, in one way or another, both running processes, and handling this disturbance in a predictable way is not at all obvious. (These aspects are explained in more detail in the chapter on asynchronous activities.) Looked at this way, both the house and the apartment building classify as non-complex systems, because each contains only one system and hence no communication needs.
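As a minimal sketch of this disturbance, consider two POSIX threads that exchange a single value. The consumer cannot simply proceed at its own pace: it has to block until the producer has published the value, so the mere act of exchanging data couples the timing of both activities. All names here are invented for the illustration.

```c
#include <pthread.h>
#include <stdio.h>

/* One shared value, plus the synchronization state needed to hand
   it over safely between two asynchronous activities. */
static int shared_value;
static int value_ready = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);   /* may itself block if the consumer holds the lock */
    shared_value = 42;
    value_ready = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!value_ready)         /* the consumer is "disturbed": it must wait */
        pthread_cond_wait(&ready, &lock);
    printf("received %d\n", shared_value);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;                    /* compile with: cc -pthread example.c */
}
```

Even in this two-thread toy, predictability depends on the flag and the condition variable working together; leave either out and the exchange becomes a race, which is exactly the kind of hazard the definition above points at.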
How to classify complexity?
A possible classification, using the definition above, is given here.
- Centralized Hardware — Centralized Control (C1): in this kind of system, all information is concentrated in one place, and there is no need for an operating system. These systems can be designed and developed by one or two engineers in a very small company, because the whole design and development can fit in one person's head.
- Distributed Hardware — Centralized Control (C2): these systems consist of several smaller systems. Each subsystem has its own control unit, but every subsystem communicates with a main computer, which makes all the decisions for the system. These systems require a team of, say, a dozen engineers in a large company; specialization is most often inevitable.
- Distributed Hardware — Distributed Control (C3): these systems also consist of several subsystems, but each subsystem can make its own decisions, so there is no need for a central main computer. These are the most complex systems-of-systems, for which hundreds of developers have to work together, possibly in many different companies.
The goal of this classification is not to be exhaustive or complete, but to give technical project managers a first-order idea about complexity (and hence cost) of various embedded systems, and about the kind of technical and managerial challenges that can be expected for systems in each category.
Centralized Hardware — Centralized Control
Examples of this class are vending machines, elevator controllers, conveyor belts, traffic barriers, etc. A very important element in controlling these systems is a Programmable Logic Controller (PLC). It is the traditional device behind the interfacing and the control of simple systems.
This class of embedded system (or rather, embedded device) is characterized by the following:
- it has one single processor, often of the embedded type.
- all hardware peripherals are connected to the processor board via direct signal cables.
- interfacing the hardware can be done via memory-mapped IO.
- the software consists of one single thread that executes the same program loop over and over again (see the sketch after this list).
- so, it needs no operating system.
- the device is used for very well defined tasks in very well defined environments, that is, with little need for adaptation or variation.
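A minimal sketch of such a single-threaded control loop, in C. The register addresses and bit masks below are hypothetical (on real hardware they come from the microcontroller's datasheet), but the structure (read inputs, decide, write outputs, repeat forever) is what characterizes this class.

```c
#include <stdint.h>

/* Hypothetical memory-mapped peripheral registers; real addresses
   come from the microcontroller's datasheet. 'volatile' forces the
   compiler to perform every read and write on the actual hardware. */
#define SENSOR_REG   (*(volatile uint32_t *)0x40000000u)
#define ACTUATOR_REG (*(volatile uint32_t *)0x40000004u)

#define DOOR_CLOSED_BIT 0x01u   /* invented bit assignments */
#define MOTOR_ON_BIT    0x01u

int main(void)
{
    /* The classic "superloop": one thread, no operating system,
       the same control cycle repeated forever. */
    for (;;) {
        uint32_t sensors = SENSOR_REG;     /* read inputs */

        if (sensors & DOOR_CLOSED_BIT)     /* decide */
            ACTUATOR_REG |= MOTOR_ON_BIT;  /* write outputs */
        else
            ACTUATOR_REG &= ~MOTOR_ON_BIT;
    }
}
```

Because everything happens in this one loop, the complete behaviour can be read top to bottom, which is why one or two engineers can keep the whole design in their heads.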
Distributed Hardware — Centralized Control
A robot or machine tool controller is a typical representative of this class.
This class of embedded system is characterized by the following:
- it has one single processor of the PC type, with possibly several embedded processors in some of the hardware peripherals.
- some hardware peripherals are connected to the processor board via fieldbuses.
- interfacing of that type of hardware requires a software stack for the reception and sending of messages over the fieldbus, and for interpreting the content of the messages.
- some of the peripheral hardware uses interrupts.
- the software consists of multiple threads (sharing memory) or multiple processes (running in protected memory spaces), each executing a program of its own but requiring synchronization and data exchange with the others (a sketch follows this list).
- so, an operating system is required.
- the device is reprogrammable, so that it can achieve various tasks in rather well defined environments.
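The following sketch, using POSIX threads, suggests the structure this implies: one thread handles the fieldbus traffic while another runs the control loop, with a mutex-protected "latest value" mailbox decoupling the two. The fieldbus read is simulated here, and the gains and rates are invented; a real system would block on the vendor's fieldbus stack instead.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Single-slot "latest value" mailbox shared by the two threads. */
static double latest_position;
static pthread_mutex_t mbox_lock = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder for a message arriving over the fieldbus. */
static double read_fieldbus_message(void)
{
    static double fake = 0.0;
    usleep(10 * 1000);           /* pretend a message arrives every 10 ms */
    return fake += 0.5;
}

static void *fieldbus_thread(void *arg)
{
    (void)arg;
    for (;;) {
        double pos = read_fieldbus_message();
        pthread_mutex_lock(&mbox_lock);
        latest_position = pos;   /* overwrite: control only needs the newest value */
        pthread_mutex_unlock(&mbox_lock);
    }
    return NULL;
}

static void *control_thread(void *arg)
{
    (void)arg;
    const double setpoint = 100.0;
    for (;;) {
        pthread_mutex_lock(&mbox_lock);
        double pos = latest_position;
        pthread_mutex_unlock(&mbox_lock);

        double command = 0.1 * (setpoint - pos);  /* trivial P controller */
        printf("pos=%.1f command=%.2f\n", pos, command);
        usleep(50 * 1000);       /* 20 Hz control cycle */
    }
    return NULL;
}

int main(void)
{
    pthread_t fb, ctl;
    pthread_create(&fb, NULL, fieldbus_thread, NULL);
    pthread_create(&ctl, NULL, control_thread, NULL);
    pthread_join(fb, NULL);      /* never returns in this sketch */
    pthread_join(ctl, NULL);
    return 0;
}
```

The mailbox is deliberately overwrite-only: a control loop usually cares about the newest measurement rather than the full message history, which keeps the blocking time of both threads short.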
Distributed Hardware — Distributed Control
A multi-agent system of robots, an air traffic control system, or an airport luggage handling system are typical representatives of this class.
This class of embedded system is characterized by the following:
- the total system consists of a software and communication layer around a number of subsystems, each of which in itself belongs to one of the previous classes.
- each of the subsystems often has multiple processors, of the embedded and of the PC type.
- the subsystems communicate via various networking technologies, such as wired or wireless Ethernet, radio communication, etc.
- the communication is typically asynchronous (see the sketch after this list).
- the communication software requires extensive protocols that encode the knowledge to control the total system.
- some of the peripheral hardware uses interrupts.
- the software consists of multiple programs, of the service-oriented architecture type.
- multiple operating systems can be used in the various subsystems.
- the overall system is extensively reprogrammable (up to replacing subsystems or their software on the fly) so that it can achieve various tasks in very different environments.
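As a small illustration of asynchronous, message-based communication between subsystems, the sketch below lets one subsystem publish a status message to a coordinator over UDP without waiting for any reply. The address, port, and message layout are invented for the example; a real system-of-systems would pin these down in the protocols mentioned above.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* UDP is connectionless: sending is "fire and forget", so the
       sender is not blocked waiting for the receiver, which is the
       essence of asynchronous communication between subsystems. */
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in coordinator;
    memset(&coordinator, 0, sizeof coordinator);
    coordinator.sin_family = AF_INET;
    coordinator.sin_port = htons(5000);                        /* hypothetical port */
    inet_pton(AF_INET, "192.168.1.10", &coordinator.sin_addr); /* hypothetical address */

    const char *status = "conveyor-3 state=RUNNING items=42";  /* invented message format */
    sendto(sock, status, strlen(status), 0,
           (struct sockaddr *)&coordinator, sizeof coordinator);

    close(sock);
    return 0;
}
```

The receiving subsystem would run its own loop around recvfrom(); neither side controls the other's timing, which is precisely why so much control knowledge has to be encoded in the communication protocols.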
When designing this class of system there is a strong need for model-driven engineering: the system must be divided into subsystems and its functions distributed over them, because the work is spread over different development teams. At this point, however, no adequate models are available.
Dealing with complexity
When designing complex systems, several issues come up.
A key aspect of dealing with complexity is that the users of the system should not be confronted with the complexity of the system. At first sight this seems easy: just make the system behave like a black box. Unfortunately, most systems need to be monitored, and this proves to be a problem, because the interface must be as simple as possible while still giving the right amount of information.
The accident at Three Mile Island illustrates this. One of the components of the Three Mile Island reactor stopped working, which led many other system components to shut down. Indicator lamps in the control room signalled these problems, and the operators were suddenly overwhelmed with information, much of it irrelevant or misleading. They misinterpreted the indicators and, in their confusion, made the wrong decisions, causing a partial core meltdown.
A possible way of dealing with this problem is to monitor functionality instead of components: only when something is wrong with a function is there a need to look at the individual components.
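A sketch of this idea in C: the monitor exposes one health flag per function (here a hypothetical "cooling" function), computed from the underlying components, and only drills down to component level when the function reports a fault. All component names and checks are invented for the illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Invented component states for the example. */
struct components {
    bool pump_ok;
    bool valve_ok;
    bool sensor_ok;
};

/* Function-level check: "is the cooling function healthy?"
   The operator normally sees this single answer, not the raw components. */
static bool cooling_ok(const struct components *c)
{
    return c->pump_ok && c->valve_ok && c->sensor_ok;
}

int main(void)
{
    struct components c = { .pump_ok = true, .valve_ok = false, .sensor_ok = true };

    if (cooling_ok(&c)) {
        printf("cooling: OK\n");    /* normal case: one simple indicator */
    } else {
        /* Only on failure do we drill down to component level. */
        printf("cooling: FAULT\n");
        printf("  pump:   %s\n", c.pump_ok   ? "ok" : "FAIL");
        printf("  valve:  %s\n", c.valve_ok  ? "ok" : "FAIL");
        printf("  sensor: %s\n", c.sensor_ok ? "ok" : "FAIL");
    }
    return 0;
}
```

In a control room this translates to one lamp per function, with component detail shown only on demand, so far fewer indicators compete for the operator's attention.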
The risk of failure increases with complexity: more interaction leads to a greater risk of errors, and greater complexity makes it harder for a user to see what is wrong. There are three possible reactions to errors:
- Finding the error to prevent disaster
- Deciding what should be repaired first among the system's components
- Finding the design error that caused system failure
An example of complexity: The car
At first, a car was hardly more than an engine-powered carriage. There were no asynchronous subsystems involved, nor was there any decision making by computers. Clearly, using the concepts explained above, this type of vehicle ranks as a non-complex system of class C1.
Over the years, the car has evolved considerably. Since the 1980s there has been a trend towards more computers in a car: the engine control unit (ECU), the ABS and ESP units, and even a computer regulating the air conditioning. These are all asynchronous subsystems, making the car a system of class C3 (although one can also find arguments to classify the car as a C2 system).
What hasn’t really changed over the years is the user interface. There still is a steering wheel, a gearshift nob and some gauges showing coolant temperature, fuel level and oil pressure. This illustrates nicely that it is possible not to show the user the full complexity of the vehicle.
There’s also a possible fourth category of centralised hardware and distributed control. A modern car is hard to characterise as any of the systems above, but it could be developing into this fourth system. A cars main computer needs to run several programs simultaneously. These programs don’t interact in any way.