1.2 Protocols in multi-service networks: introduction
Early automatic telephone networks were built to carry only voice traffic and to provide a very simple telephone service – now called plain old telephone service (POTS). When computer networks started to appear, either they were separate from telephone networks or the data carried between computers was a small proportion of the traffic on the telephone network. There are various estimates for the growth of voice and data traffic, and various dates have been given for when data traffic will exceed voice traffic.
Figure 1 illustrates the relative growth rates of voice and data traffic from 1996 to 2005. It assumes that data traffic exceeds voice traffic during 2002 (Coffman and Odlyzko, 1998), and that voice traffic grows at a rate of 5 per cent per year and data traffic at a rate of 80 per cent per year. Because it is difficult to obtain accurate values for traffic, in the figure traffic is shown relative to the level of voice traffic in 1996 rather than as absolute levels. The rapid increase in data traffic can cause problems on telephone networks that were designed specifically to carry voice traffic.

In many cases it makes economic sense for a single physical network to carry both voice and data traffic. In the 1980s this led to the publication of the requirements for an integrated services digital network (ISDN). Although ISDNs do exist in many countries, they have never become widespread. At the time of writing (2002), the requirements of broadband ISDNs (B-ISDNs) are being published, which extend the services of ISDNs to include high data rates. All you need to appreciate about ISDNs at this stage is the concept of the traffic being divided into specific categories, called services, according to the communication requirements of the data: for example, the type of data, the transfer mode and the transfer rate. An ISDN is an example of a multi-service network, which can be loosely defined as a network that provides a range of services over a common transport mechanism – that is, a common means of transferring data between devices. This may mean that different services receive different treatment to ensure that certain assurances about quality are fulfilled.
The Internet may be regarded as another example of a multi-service network, although the quality of service may not meet users' requirements for some applications. Private networks that operate to the same specification as the Internet can offer users a better quality of service, and the network operators can exercise greater control over the traffic. Such networks are called intranets.
In traditional telephone networks, a reserved transmission path is established between terminals before the users can talk to each other. Typically, the transmission path is capable of transmitting at a data rate of 64 kbit/s simultaneously in both directions. Because only one person usually talks at any one time and there are natural gaps in conversation, over half the transmission capacity is wasted. However, because the transmission path is reserved, once a connection is established there is no queuing delay waiting for resources to become available. In addition, the end-to-end delay is constant because, once a path is set up, it does not change throughout a call. For voice traffic it is important that any delay in transport is constant as well as being as small as possible.
The transfer mode in which a path is reserved exclusively for a single communication is called circuit switching, and the type of service in which a connection has to be established before the exchange of data (whether for voice or computer applications) is called connection oriented. As you will see later in this section, these terms are not quite synonymous: circuit switching implies a connection-oriented service, but a connection-oriented service does not necessarily imply a circuit-switched transfer mode.
The requirement for a constant delay was not important in the early computer applications, and this encouraged the development of packet switching. In packet switching the message to be sent is divided into convenient groups of data, called packets, which are transferred independently over the network. Transmission capacity is not reserved. Instead, at each stage through the network, if transmission capacity is not available to send a packet immediately, the packet is stored until sufficient capacity becomes available, at which point it is forwarded on to the next stage. This process is sometimes called store-and-forward. Since capacity is not reserved for specific paths between users, if there are idle periods in the transmission of packets between two users, that capacity is available to other users and is not wasted.
However, waiting for transmission capacity to become available introduces a queuing delay in the transport of data. Moreover, because the amount of delay depends on the level of traffic from other users, which depends on factors that change with time, the delay will vary randomly and may differ for each packet.
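To make the store-and-forward principle, and the reason the queuing delay varies, a little more concrete, here is a minimal Python sketch. The class and method names, packet sizes and data rate are invented for this illustration and do not represent any particular switch or protocol.

```python
from collections import deque

class OutgoingLink:
    """An outgoing transmission link with a fixed data rate."""
    def __init__(self, rate_bit_per_s):
        self.rate = rate_bit_per_s
        self.busy_until = 0.0   # time (seconds) at which the link next becomes free

class StoreAndForwardSwitch:
    """A toy switch: packets are stored in a queue and forwarded only
    when the outgoing link has finished sending earlier packets."""
    def __init__(self, link):
        self.link = link
        self.queue = deque()

    def receive(self, arrival_time, packet_bits):
        # Store the packet; it is forwarded later, when capacity is available.
        self.queue.append((arrival_time, packet_bits))

    def forward_all(self):
        """Drain the queue; return (queuing delay, departure time) for each packet."""
        results = []
        while self.queue:
            arrival_time, bits = self.queue.popleft()
            start = max(arrival_time, self.link.busy_until)   # wait if the link is busy
            self.link.busy_until = start + bits / self.link.rate
            results.append((start - arrival_time, self.link.busy_until))
        return results

# Three 1500-byte packets arriving 1 ms apart on a 500 kbit/s outgoing link.
switch = StoreAndForwardSwitch(OutgoingLink(500e3))
for t in (0.000, 0.001, 0.002):
    switch.receive(t, 1500 * 8)

# Each successive packet waits longer: the queuing delay varies with the traffic.
print(switch.forward_all())
```

Running the sketch shows the first packet leaving with no queuing delay while the later packets wait progressively longer, which is exactly the variable delay described above.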
In most cases, packet switching is more efficient than circuit switching at using transmission capacity, and it can be more resilient to network failures and congestion. For instance, if a switch detects that a link has become faulty, packets can be diverted to another link without any intervention by the users and without their knowledge. In circuit switching, if a link develops a fault, the transmission of data must be stopped and the circuit re-established. Although re-establishment may be performed automatically and very quickly, it may result in loss of data.
The type of packet switching described above is called connectionless service, or datagram service because packets are called datagrams in the Internet Protocol (IP). A problem with this service is that there is no direct control over the level of traffic accepted by a network and, because the paths taken by packets can vary, the order in which they arrive can also vary. Where it is desirable to have more control over the transfer of packets, a connection-oriented service is used. The advantages of establishing a connection are that all the packets between the users follow a single path, and that a switch with too much traffic can refuse a request to set up a new connection. Note that setting up a connection does not necessarily reserve transmission capacity, so a switch may still have to queue a packet before it can forward it, although actions may be taken which reduce the processing delay.
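The two advantages just mentioned, a single path per connection and the ability to refuse new connections, can be sketched in a few lines of Python. The class, method and port names below are invented for illustration and do not represent any particular protocol.

```python
class ConnectionOrientedSwitch:
    """A toy packet switch offering a connection-oriented service."""
    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.routes = {}                      # connection identifier -> outgoing port

    def request_connection(self, conn_id, out_port):
        """Accept or refuse a request to set up a new connection."""
        if len(self.routes) >= self.max_connections:
            return False                      # too much traffic: refuse the request
        self.routes[conn_id] = out_port       # all packets on this connection use one path
        return True

    def forward(self, conn_id):
        """Forward a packet on an established connection."""
        return self.routes[conn_id]

switch = ConnectionOrientedSwitch(max_connections=2)
print(switch.request_connection("A-B", "port 1"))   # True: connection accepted
print(switch.request_connection("C-D", "port 2"))   # True
print(switch.request_connection("E-F", "port 3"))   # False: the switch refuses a third connection
print(switch.forward("A-B"))                        # 'port 1' for every packet on connection A-B
```

Note that the sketch reserves only a table entry, not transmission capacity, which mirrors the point made above about connection set-up.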
Figure 2 illustrates the differences between circuit switching and the two types of packet switching in the form of signal sequence diagrams. For each of the three cases the signals between four switches labelled S1 to S4 are shown. The figure gives just one example of transferring data for each mode. In practice, the efficiency of each mode will depend on the characteristics of the data being transferred, for example the amount and whether it arrives continuously or in bursts.

Figure 2 also shows the transmission delay, propagation delay and processing delay for each signal transmitted between the switches:
- The transmission delay is the time between the first and last bits of a signal leaving a switch.
- The propagation delay is the time taken for each bit to travel along the transmission medium.
- The processing delay is the time required for a switch to deal with the signal. It may include queuing delay in the case of packet-switched networks.
There are no commonly agreed definitions of the three types of delay introduced above, and you may see the terms in other sources with different meanings. For instance, the term ‘transmission delay’ is very often used to mean the total delay incurred in sending and receiving a message. The context in which the terms are used should indicate their meaning.
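The first two delays follow directly from the packet length, the data rate and the length of the link, so a short worked sketch may help. The function names, packet size and link parameters below are invented for illustration; queuing and processing delays depend on the switch and its traffic, so they are not computed here.

```python
def transmission_delay(packet_bits, data_rate_bit_per_s):
    """Time between the first and last bits of a packet leaving a switch."""
    return packet_bits / data_rate_bit_per_s

def propagation_delay(distance_m, velocity_m_per_s=2e8):
    """Time taken for each bit to travel along the transmission medium."""
    return distance_m / velocity_m_per_s

# Example: a 1000-byte packet sent over a 2 Mbit/s link 50 km long.
bits = 1000 * 8
print(transmission_delay(bits, 2e6))   # 0.004 s  = 4 ms
print(propagation_delay(50e3))         # 0.00025 s = 0.25 ms
```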
In Figure 2 I have assumed, to simplify the diagrams, that a switch does not start processing the next packet until it has finished transmitting the current one. In practice, the receipt and transmission of packets may not require any intervention by a switch's processor. Also, the packet-switched, connectionless-mode example shows all three packets following the same path, so they arrive in the order in which they were transmitted; this is not necessarily always the case.
There is another type of switching in which a connection is not established explicitly by the user but is established by switches in the network for reasons of efficiency. This is called flow switching. Packet switches automatically detect a flow of packets between two devices. Once a flow has been detected, a connection is set up between two compatible switches along the path taken by the packets, and a label identifying this connection is attached to each subsequent packet. This allows switches to forward packets by looking only at these labels, which simplifies the forwarding and reduces the time it takes. Another variation of the same basic principle is possible, whereby a group of switches agree between themselves a means of tagging packets to reduce the forwarding delay. Section 4 of this unit introduces a form of flow switching called label switching. Note that the term ‘forwarding’ is used in packet switching for the transfer of packets between switches.
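The benefit of attaching a label can be sketched in a few lines of Python: forwarding becomes a single exact-match lookup on the label rather than an examination of the full destination address. The table contents, labels and field names below are invented for illustration; Section 4 describes label switching properly.

```python
# A toy label-forwarding table, of the kind a flow switch might build
# after detecting a flow of packets between two devices.
label_table = {
    17: ("port 3", 42),   # incoming label 17 -> forward on port 3, outgoing label 42
    23: ("port 1", 9),
}

def forward_by_label(packet):
    """Forward a packet by looking only at its attached label."""
    out_port, out_label = label_table[packet["label"]]   # single exact-match lookup
    packet["label"] = out_label                          # relabel for the next switch
    return out_port

packet = {"label": 17, "payload": b"example data"}
print(forward_by_label(packet))   # 'port 3'; the packet now carries label 42
```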
Message switching is another form of packet switching, in which complete messages are transferred as single packets. An early example of message switching was a telegraph network that included message centres where an incoming message was punched on paper tape; the paper tape was later transferred (switched) to a paper tape reader for retransmission to the final destination or the next message centre.
SAQ 1
Complete Table 1, which reviews features of connection-oriented and connectionless modes of packet switching.
| Feature | Packet-switched connection-oriented | Packet-switched connectionless |
|---|---|---|
| Reserved transmission capacity | No | |
| Data rate constantly available | No | |
| Store-and-forward | | |
| Constant route for packets | | |
| Connection set-up | | |
| Variable delay | | |
| Control of quality of service | | |
Answer
Table 2 contains my review of the features.
| Feature | Packet-switched connection-oriented mode | Packet-switched connectionless mode |
|---|---|---|
| Reserved transmission capacity | Maybe¹ | No |
| Data rate constantly available | No | No |
| Store-and-forward | Yes | Yes |
| Constant route for packets | Yes | No |
| Connection set-up | Yes | No |
| Variable delay | Yes² | Yes |
| Control of quality of service | Yes | Maybe³ |

1. Reservation of buffer space may take place at the time of connection set-up.
2. The variability in delay may be lower for the connection-oriented mode because of greater control over the quality of service.
3. Some measures of quality of service may be provided on a packet-by-packet basis by giving priority to some types of packet over others. This means that some applications may be given a better service than others, but it is difficult to provide absolute guarantees of quality of service.
SAQ 2
Draw to scale a signal sequence diagram of the form of Figure 2 that shows a packet sent between systems A and B via system C. You should assume the following:
- the data rate between systems A and C is 500 kbit/s;
- the data rate between systems C and B is 1 Mbit/s;
- the packet contains 200 bytes;
- the distance between A and C and between C and B is 100 km;
- the velocity of propagation is 2 × 10⁸ m s⁻¹;
- the processing delay at C is 0.2 ms;
- the processing at C does not start until the complete packet has been received.
Answer
See Figure 3.

The time required to transmit a packet of 200 bytes at a data rate of 500 kbit/s is given by:

$$\frac{200 \times 8 \text{ bits}}{500 \times 10^{3} \text{ bit/s}} = 3.2 \text{ ms}$$

The time required to transmit a packet of 200 bytes at a data rate of 1 Mbit/s is given by:

$$\frac{200 \times 8 \text{ bits}}{1 \times 10^{6} \text{ bit/s}} = 1.6 \text{ ms}$$

The time required for a signal to travel between A and C and between C and B is given by:

$$\frac{100 \times 10^{3} \text{ m}}{2 \times 10^{8} \text{ m/s}} = 0.5 \text{ ms}$$
The rest of this unit will look at a selection of communication protocols – the procedures that are essential for devices and systems to be able to communicate.