Computer network technologies and services/Quality of service
Quality of service is the set of technologies that try[1] to guarantee specific requirements on packet delay and jitter[2] for multimedia networking applications generating inelastic traffic.
- Main approaches
Three approaches have been proposed for quality of service:
- integrated services (IntServ): it requires fundamental changes to the network infrastructure so that the application can reserve end-to-end bandwidth → new complex software in hosts and routers;
- differentiated services (DiffServ): it requires fewer changes to the network infrastructure;
- laissez-faire: no special support for delays or quality of service, on the assumption that the network will never be congested → all the complexity stays at the application layer.
Principles
- Packet marking is needed for routers to distinguish between different classes, and a new router policy is needed to treat packets accordingly.
- Provide protection (isolation) for one class from the other ones.
- While providing isolation, it is desirable to use resources as efficiently as possible.
- The flow declares its needs via call admission; the network may then block the call (e.g. busy signal) if it cannot meet those needs, as in the sketch below.
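A minimal sketch of the call-admission idea, assuming a single link with fixed capacity and flows that declare a required rate; the `Link` class and `admit` method are illustrative names, not part of any standard:

```python
# Minimal sketch of call admission: a flow declares its required rate and the
# network accepts it only if enough capacity is left (names are illustrative).

class Link:
    def __init__(self, capacity_bps: float):
        self.capacity_bps = capacity_bps
        self.reserved_bps = 0.0

    def admit(self, required_bps: float) -> bool:
        """Accept the call only if the declared rate still fits; otherwise block it."""
        if self.reserved_bps + required_bps <= self.capacity_bps:
            self.reserved_bps += required_bps
            return True
        return False  # "busy signal": the network cannot meet the declared needs


link = Link(capacity_bps=10_000_000)      # 10 Mb/s link
print(link.admit(4_000_000))              # True: reservation accepted
print(link.admit(7_000_000))              # False: would exceed capacity, call blocked
```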
Mechanisms
Packet scheduling mechanisms
The goal of scheduling mechanisms is to manage the priorities for incoming packets.
FIFO scheduling
It is easy to implement, but it is efficient only if there is a sophisticated discard policy (a sketch follows the list):
- tail drop: always drop the arriving packet;
- random: drop a random packet in the queue;
- priority: drop the lowest-priority class packet.
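The discard policies above can be illustrated with a small sketch; the `FifoQueue` class and its policy names are hypothetical, and packets are simply modelled as (priority, payload) tuples:

```python
import random
from collections import deque

# Sketch of a bounded FIFO queue with the three discard policies listed above.
# Packets are (priority, payload) tuples; lower numbers mean higher priority.

class FifoQueue:
    def __init__(self, max_size: int, policy: str = "tail"):
        self.buffer = deque()
        self.max_size = max_size
        self.policy = policy          # "tail", "random" or "priority"

    def enqueue(self, packet):
        if len(self.buffer) < self.max_size:
            self.buffer.append(packet)
            return
        if self.policy == "tail":     # tail drop: discard the arriving packet
            return
        if self.policy == "random":   # random drop: discard a random queued packet
            del self.buffer[random.randrange(len(self.buffer))]
        else:                         # priority drop: discard the lowest-priority packet
            victim = max(range(len(self.buffer)), key=lambda i: self.buffer[i][0])
            del self.buffer[victim]
        self.buffer.append(packet)

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None
```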
Priority scheduling
One buffer is available per class, and the packet from the highest-priority non-empty class is always served first.
It does not grant isolation and may introduce starvation: packets in low-priority buffers are never served because high-priority packets keep arriving. Moreover, scheduling is non-preemptive: if the high-priority queue is temporarily empty and the transmission of a low-priority packet starts, a high-priority packet arriving just afterwards has to wait for that transmission to end (which can take long if the packet is long) → additional transmission delays are introduced.
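A minimal sketch of non-preemptive priority scheduling, with one queue per class (class 0 being the highest priority); the names are illustrative:

```python
from collections import deque

# Sketch of non-preemptive priority scheduling: one buffer per class,
# the highest-priority non-empty buffer is always served first.

class PriorityScheduler:
    def __init__(self, num_classes: int):
        # index 0 = highest-priority class
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, class_id: int, packet):
        self.queues[class_id].append(packet)

    def dequeue(self):
        """Pick the next packet to transmit; once a transmission starts it is
        never interrupted, so a late high-priority arrival has to wait."""
        for queue in self.queues:
            if queue:
                return queue.popleft()
        return None
```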
Round robin scheduling
It cyclically scans the class queues, serving one packet from each class (if available).
It grants isolation and it is fair, but it does not grant priority.
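A possible sketch of the cyclic scan, assuming one queue per class and a pointer remembering where the last cycle stopped; the class name is illustrative:

```python
from collections import deque

# Sketch of round-robin scheduling: the class queues are scanned cyclically
# and one packet per class is served in each cycle (if available).

class RoundRobinScheduler:
    def __init__(self, num_classes: int):
        self.queues = [deque() for _ in range(num_classes)]
        self.next_class = 0

    def enqueue(self, class_id: int, packet):
        self.queues[class_id].append(packet)

    def dequeue(self):
        for _ in range(len(self.queues)):          # at most one full scan
            queue = self.queues[self.next_class]
            self.next_class = (self.next_class + 1) % len(self.queues)
            if queue:
                return queue.popleft()
        return None
```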
Weighted fair queuing
It generalizes round robin by combining it with priority scheduling.
Each class $i$ gets a weighted amount of service in each cycle, and the bandwidth for the class having weight $w_i$ on a link of rate $R$ is given by the following formula (empty queues have null weight):

$$R_i = R \cdot \frac{w_i}{\sum_{j} w_j}$$
However, this solution is not very scalable because the formula, which involves floating-point operations, needs to be computed for every single packet.
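The per-class share can be computed as in the following sketch, which simply applies the formula above to the currently backlogged (non-empty) classes; the function name and arguments are illustrative:

```python
# Sketch of the WFQ bandwidth-share formula: class i with weight w_i gets
# R * w_i / sum(w_j), where the sum runs over the classes with non-empty queues.

def wfq_shares(link_rate: float, weights: list[float], backlogged: list[bool]) -> list[float]:
    total = sum(w for w, busy in zip(weights, backlogged) if busy)
    return [link_rate * w / total if busy else 0.0
            for w, busy in zip(weights, backlogged)]


# Example: 10 Mb/s link, weights 3:2:1, the third queue is currently empty.
print(wfq_shares(10e6, [3, 2, 1], [True, True, False]))
# -> [6000000.0, 4000000.0, 0.0], i.e. 6 Mb/s, 4 Mb/s and 0
```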
Policing mechanisms
The goal of policing mechanisms is to limit the traffic so that it does not exceed declared parameters, such as:
- (long-term) average rate: how many packets can be sent per unit time;
- peak rate: maximum rate measured over a short time interval (e.g. packets per minute);
- (maximum) burst size: maximum number of packets sent consecutively (with no intervening idle).
Token bucket is the technique used to limit the input traffic to a specified burst size and average rate (a sketch follows the list):
- a bucket can hold up to $b$ tokens;
- tokens are generated at rate $r$ tokens/s, unless the bucket is full;
- over an interval of length $t$, the number of packets admitted is less than or equal to $r \cdot t + b$.
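A minimal token bucket sketch, assuming one token per packet and using wall-clock time to refill the bucket; the class and method names are illustrative:

```python
import time

# Sketch of a token bucket policer: the bucket holds at most b tokens,
# tokens accrue at r tokens/s, and a packet is admitted only if a token
# is available (so at most r*t + b packets pass in any interval of length t).

class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate = rate              # r: tokens generated per second
        self.burst = burst            # b: bucket capacity
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # packet conforms to the declared profile
        return False                  # packet exceeds the profile (drop or mark it)
```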
IntServ
- Resource reservation
Basically, a host asks for a service that requires some resources (path message): if the network can provide this service it serves the user, otherwise it does not (reservation message).
Resource reservation is a feature which is not native to IP.
- Call admission
The arriving session uses the Resource Reservation Protocol (RSVP) signaling protocol to declare:
- R-spec: it defines the quality of service being requested;
- T-spec: it defines the traffic characteristics.
The receiver, not the sender, specifies the resource reservation.
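As an illustration only (the field names are simplified and do not reproduce the actual RSVP object formats), the declared parameters and a router's admission test could be modelled as:

```python
from dataclasses import dataclass

# Illustrative sketch of what an RSVP session declares: the T-spec describes
# the traffic with token-bucket parameters, the R-spec describes the service
# level the receiver asks the network to reserve. Field names are simplified.

@dataclass
class TSpec:
    token_rate: float      # r: average rate of the traffic (bytes/s)
    bucket_size: float     # b: maximum burst size (bytes)
    peak_rate: float       # p: peak rate (bytes/s)

@dataclass
class RSpec:
    reserved_rate: float   # bandwidth the receiver asks each router to reserve (bytes/s)

def router_admits(rspec: RSpec, available_bps: float) -> bool:
    """Each router on the path accepts the reservation only if it can still
    set aside the requested bandwidth (simplified admission test)."""
    return rspec.reserved_rate <= available_bps
```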
This is definitely not a scalable solution, since every router along the path has to keep per-flow state. It still has major problems, and there is currently no compelling reason to deploy IntServ.
DiffServ
Differentiated Services (DiffServ) is an architecture proposed by the IETF for quality of service: it moves the complexity (buckets and buffers) from the network core to the edge routers (or hosts) → more scalability.
Architecture
The DiffServ architecture is made up of two major components:
- edge routers: they perform per-flow traffic management and mark the packets as in-profile (high-priority voice traffic) or out-profile (low-priority data traffic);
- core routers: they perform per-class buffering and scheduling based on the marking done at the edges, giving preference to in-profile packets.
Marking
Marking is performed by edge routers in the Differentiated Services Code Point (DSCP) field, which occupies the 6 most significant bits of the 'Type of Service' field in the IPv4 header and of the 'Traffic Class' field in the IPv6 one.
It would be better to let the source, at the application layer, perform the marking, because only the source knows exactly the traffic type (voice traffic or data traffic); however most users would dishonestly declare all their packets as high-priority → marking needs to be performed by gateways which are under the control of the provider. Some studies have found that routers can properly recognize at most 20-30% of the traffic, for example because of encrypted traffic → the distinction can be simplified for routers by connecting the PC to one port and the telephone to another port, so that the router can mark the traffic based on the input port.
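As a small illustration (not part of the original notes), the following sketch shows how a DSCP value is placed in the 6 most significant bits of the IPv4 ToS / IPv6 Traffic Class octet; DSCP 46 is the standard Expedited Forwarding code point:

```python
# Sketch of where the DSCP value sits: the 6 most significant bits of the
# IPv4 'Type of Service' / IPv6 'Traffic Class' octet (the remaining 2 bits
# are used for ECN).

def build_tos_byte(dscp: int, ecn: int = 0) -> int:
    assert 0 <= dscp < 64 and 0 <= ecn < 4
    return (dscp << 2) | ecn

def extract_dscp(tos_byte: int) -> int:
    return tos_byte >> 2

tos = build_tos_byte(dscp=46)        # Expedited Forwarding -> ToS byte 0xb8
print(hex(tos), extract_dscp(tos))   # '0xb8' 46
```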
PHB
Some Per-Hop Behaviours (PHBs) are being developed:
- expedited forwarding: the packet departure rate of a class equals or exceeds a specified rate;
- assured forwarding: four classes of traffic, each one guaranteed a minimum amount of bandwidth and partitioned into three drop-preference levels.
PHBs specify the services to be offered, not how to implement them.