Packet-switched networks, such as those based on IP, are fundamentally stateless. This means that each new piece of data that flows through the network is theoretically treated entirely independently of all the other data that has gone before it. This makes it very easy to build a large network up from lots of independent parts, but it has a drawback when it comes to allocating limited resources. If one user of a network is sending as many packets as the network can support, then other users may theoretically end up unable to transfer any data at all. Lots of low-priority traffic from mail, Usenet or DNS updates may drown out high-priority flows such as Voice over IP (VoIP). If a DNS refresh or email gets delayed by 30 seconds, nobody is affected, but if the same thing happens to VoIP packets the call can be dropped.
Traffic control is the general term for techniques that attempt to fight this general weakness of the network by classifying data and blocking, redirecting or limiting it according to rules set by the network administrator.
A queue is a reserved block of memory that can hold a certain number of objects that are waiting to be processed sequentially. The typical model used for a queue is that objects are pushed in at one end and pulled out for processing from the other. This means that objects are processed in exactly the same order they are put in: First In First Out or FIFO. In the context of traffic control, the objects in the queue are network packets.
FIFO queues by themselves don't do any traffic control. In fact, the default behaviour of the network stack (with no traffic control configured) is to use a single FIFO queue, since this provides a neutral way of processing the network packets without favouring one source over another. In practice, a network stack can't be implemented without at least one FIFO queue, since packets can come in at a much faster rate than the network stack can cope with. There has to be a buffer to store the packets in during busy times, or lots of packets will have to be dropped. Since the queue is finite, it's always possible for packets to be dropped if they come in too fast for too long a time, but with a reasonably sized queue this becomes a rare occurrence.
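The behaviour described above can be sketched in a few lines of Python. The class name, capacity and packet labels here are illustrative, not part of any real network stack; the point is just to show first-in-first-out ordering and what happens when the buffer fills up (a policy commonly called tail drop):

```python
from collections import deque

class PacketBuffer:
    """A bounded FIFO queue: packets go in at the tail and come out at
    the head. When the buffer is full, new arrivals are dropped."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1      # buffer full: the arriving packet is lost
            return False
        self.queue.append(packet)  # push in at one end...
        return True

    def dequeue(self):
        return self.queue.popleft()  # ...pull out at the other: FIFO order

buf = PacketBuffer(capacity=3)
for pkt in ["A", "B", "C", "D"]:   # D arrives while the buffer is full
    buf.enqueue(pkt)

order = [buf.dequeue() for _ in range(len(buf.queue))]
print(order)        # ['A', 'B', 'C'] -- same order they arrived in
print(buf.dropped)  # 1              -- packet D was dropped
```

Note that the packets that do get through are delivered in exactly their arrival order; the cost of a finite buffer is paid entirely in drops, never in reordering.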
Why must networks be based on FIFO queues? The opposite of FIFO is LIFO, or Last In First Out, often called a stack. Consider what happens if we use LIFO storage for our buffer. Packets A, B, C and D arrive in quick succession (packet D arriving before the network stack has had a chance to start processing them). Since we start with the last packet, the network stack will naturally process packet D then packet C. At the other end of the link, the packets will arrive out of order. Although higher-level protocols are designed to recover from packets arriving out of order, this can lead to packets being dropped and re-sent, so it can waste resources and should never be relied upon when there is an easy way to avoid it.
There's a deeper problem with using a stack, though: if the packets continue to come in at exactly the rate the processor can cope with, then we always process the most recent packet each time. Packets A and B will sit in the stack and never get processed. Eventually the clients at either end will assume the packet has been lost and (if using a protocol like TCP to guarantee transmission) re-send the packets. This means resources have been wasted sending A and B twice, and the buffer is clogged up with packets that have been forgotten about.
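The starvation problem is easy to demonstrate. In this sketch (with made-up packet labels), two packets are buffered in a stack before processing begins, and from then on new packets arrive at exactly the rate they are processed:

```python
stack = []

# Packets A and B arrive in a burst before processing starts.
for pkt in ["A", "B"]:
    stack.append(pkt)

processed = []
# From now on, one new packet arrives per processing step: the stack
# never drains, and the most recent arrival is always served first.
for new_pkt in ["C", "D", "E", "F"]:
    stack.append(new_pkt)
    processed.append(stack.pop())  # LIFO: pop the most recent packet

print(processed)  # ['C', 'D', 'E', 'F']
print(stack)      # ['A', 'B'] -- stuck at the bottom, never processed
```

As long as the arrival rate keeps up with the processing rate, packets A and B sit at the bottom of the stack indefinitely: exactly the pathology described above.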
A queue tends to ensure that the time each packet spends waiting to be processed is as similar as possible from one packet to the next (i.e. the worst case isn't too much worse than the best case, though both can still vary widely if traffic levels are inconsistent). A stack, on the other hand, tends to produce wildly different best and worst cases, which is never what we want in networking.
Tokens and buckets

Packets and frames
The terms packet and frame both refer to chunks of information that are transferred across the network; the difference between the two terms is just that they are used at different levels of the network stack. At layer 2 (e.g. ethernet) the chunks being transferred are called frames, while one layer up the stack at the network layer (e.g. IP) they are referred to as packets.
At any point during transfer over a physical link, it's equally true to say that both types are being transferred, one within the other (e.g. an IP packet wrapped within an ethernet frame), but to simplify the implementation model it's usual to only think of one model at a time. Ethernet frames contain some payload (which in fact may be IP packets), but the ethernet protocol implementor can ignore the implementation details of the data it is carrying. Likewise, IP packets have to be transferred by some physical means (which in fact may be via ethernet frames), but the IP protocol implementor can ignore this detail.
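This separation of concerns can be sketched as follows. The structures are heavily simplified stand-ins (real IP and ethernet headers carry many more fields, and the serialisation here is invented for illustration), but they show the key point: each layer treats the layer above it as an opaque payload.

```python
from dataclasses import dataclass

@dataclass
class IpPacket:            # network layer (layer 3)
    src: str
    dst: str
    payload: bytes         # e.g. a TCP segment -- opaque to IP itself

@dataclass
class EthernetFrame:       # link layer (layer 2)
    src_mac: str
    dst_mac: str
    payload: bytes         # opaque to ethernet: here, a serialised IP packet

def serialise(packet: IpPacket) -> bytes:
    # Stand-in for real IP serialisation; ethernet never inspects this.
    return f"{packet.src}>{packet.dst}|".encode() + packet.payload

packet = IpPacket(src="10.0.0.1", dst="10.0.0.2", payload=b"hello")
frame = EthernetFrame(src_mac="aa:bb:cc:00:00:01",
                      dst_mac="aa:bb:cc:00:00:02",
                      payload=serialise(packet))

# The link layer sees only opaque bytes; the layer above sees only the packet.
print(frame.payload)  # b'10.0.0.1>10.0.0.2|hello'
```

The ethernet code never looks inside `frame.payload`, and the IP code never needs to know it will travel inside a frame; each implementor can ignore the other layer's details, exactly as described above.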
You might be tempted to ignore the difference between packets and frames, since in practice there's almost never an ambiguity. Indeed, lots of informal documentation and marketing material tends to blur the distinction somewhat. However, this book will attempt to use the terms consistently, and encourages you to do so as well.