
Introduction

In packet-switched networks, traffic sources split data into smaller pieces called packets and transmit them attached to a header with control information. For each packet received, packet-switching nodes read its header and make the appropriate forwarding decision, according to the configured routing plan. In real networks, traffic is highly unpredictable and is thus modelled by random processes. When we say that a traffic source d generates hd traffic units, we refer to a time average. The instantaneous traffic generated oscillates rapidly and unpredictably between peak intervals (generating more traffic than the average) and valley intervals (generating less traffic than the average).

The traffic carried by a link is the aggregation (multiplexing) of the traffic from all the sources routed through that link. At any given moment, it is very likely that some of the aggregated sources are in a peak while others are in a valley, so they compensate each other. Thus, the link does not need a capacity equal to the sum of the peak traffics of all the sources, but only a capacity somewhere between the sum of the averages and the sum of the peaks. We say that multiplexing sources provides a statistical multiplexing gain.
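As a rough numerical illustration (a self-contained Python sketch with made-up figures, not data from this report), the following snippet aggregates 50 bursty on/off sources and compares the capacity needed by the aggregate with the sum of the individual peaks:

    import random

    random.seed(1)

    N = 50          # number of aggregated sources (hypothetical figures)
    AVG = 1.0       # average rate per source, in arbitrary traffic units
    PEAK = 3.0      # peak rate per source (3x the average)
    SLOTS = 10000   # number of observation intervals

    # Each source is "on" (at its peak) with probability AVG/PEAK and silent otherwise,
    # so that its time average equals AVG.
    p_on = AVG / PEAK
    aggregate = [sum(PEAK for _ in range(N) if random.random() < p_on) for _ in range(SLOTS)]

    print("sum of averages        :", N * AVG)
    print("sum of peaks           :", N * PEAK)
    print("max aggregate observed :", max(aggregate))  # typically far below the sum of peaks

The maximum aggregate traffic observed typically stays well below the sum of the peaks; that difference is the statistical multiplexing gain.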

Statistical multiplexing gain is the driving force behind packet switching, since it is very common for sources to have traffic peaks several times higher than their average (e.g. 2 or 3 times). However, at unpredictable moments in time, peak traffic intervals coincide and link capacities are not enough to forward the traffic. Nodes then store packets in queues, delaying them until they can be transmitted (this delay is known as queuing delay). If this situation persists, queues fill up and packets are dropped. We say that the link is congested or saturated. Note that if the link capacity is below the sum of the average traffic generated by the sources traversing the link, a large amount of packet drops will always occur, whatever buffer size is allocated in the nodes. Network designs must always enforce that link capacities are not below the sum of the averages of the traffics to be carried.

Network design tries to model delays and drops statistically in order to minimize their effects. Traffic models capture not only the average of each traffic source, but also a measure of its burstiness. Intuitively, the steeper and longer the peak-valley intervals are (i.e. the burstier the traffic), the higher the queuing delay. This is because, during low-load intervals, the link is underutilized and delays are negligible, whereas during peak intervals packets need to be buffered and can suffer large queuing delays or drops. Naturally, the queuing delay is zero when the traffic is perfectly constant (not random).

In the following sections we describe the link, node and traffic models applied in this report to estimate the average packet delay.

Link and node model

Each node has a first-in-first-out (FIFO) queue for each output link, where packets are stored prior to transmission. We assume that buffers are infinite, and thus no packet losses occur (unless the average traffic routed through a link is higher than its capacity, in which case the queuing delay grows to infinity in our model).

The delay of the packets traversing a link is composed of three parts:

- The propagation delay: the time the signal takes to travel along the link, given by the link length divided by the propagation speed of the medium.
- The transmission delay: the time needed to put a packet on the link, given by the average packet length divided by the link capacity.
- The queuing delay: the time a packet waits in the output queue before its transmission starts, estimated with the model described in the next sections.
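As an illustration (a minimal Python sketch under our own naming, not Net2Plan code), the average delay of a link is the sum of the three components above; the queuing delay estimate used is the one described in the next sections:

    def link_delay_s(length_km, u_e_bps, L_bits, t_queue_s, prop_speed_km_per_s=200000.0):
        """Average link delay in seconds, assuming a propagation speed of ~2e5 km/s (fiber)."""
        t_prop = length_km / prop_speed_km_per_s   # propagation delay
        t_tx = L_bits / u_e_bps                    # transmission delay of one packet
        return t_prop + t_tx + t_queue_s           # plus the estimated queuing delay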

Traffic and queuing delay model

We assume that each demand d is an inelastic source of traffic of average load hd. Inelastic means that the amount of traffic injected into the network does not depend on the network state (e.g. the source does not inject more traffic if the network capacity is doubled, nor less traffic if it is halved). Net2Plan assumes that the value hd coming from the traffic matrices is measured in Erlangs. The average traffic in bps is obtained by multiplying this quantity by the binaryRateInBitsPerSecondPerErlang value (configured in the File-Options menu). The average packet size in bits (which we denote as L in this report) is obtained from the averagePacketLengthInBytes value, also configurable in the File-Options menu.
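A minimal sketch of these unit conversions (the numeric values below are arbitrary examples, not Net2Plan defaults):

    binaryRateInBitsPerSecondPerErlang = 1e6   # example value, set in File-Options
    averagePacketLengthInBytes = 500           # example value, set in File-Options

    h_d = 2.5                                            # offered traffic of demand d, in Erlangs
    h_d_bps = h_d * binaryRateInBitsPerSecondPerErlang   # average traffic of the demand, in bps
    L = averagePacketLengthInBytes * 8                   # average packet length L, in bits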

The traffic in each link is the aggregation of the traffic from the demands routed through it. We assume that the packet arrivals in each link are independent of each other and follow a self-similar pattern with Hurst parameter H ∈ [0.5, 1). Roughly speaking, self-similarity means that the traffic is bursty at different time scales. That is, if we observe the accumulated traffic each millisecond, we see oscillations between "peak" milliseconds and "valley" milliseconds that follow a statistical pattern similar to the oscillations we would observe if we accumulated the traffic over 10 milliseconds, or 100 milliseconds (hence the term "self-similar"). In contrast, in classical Poisson traffic models, when we observe the traffic over longer accumulation intervals, the traffic becomes more constant (more predictable).

Self-similar distributions are characterized by the Hurst parameter H ∈ [0.5, 1). The higher H (H≈1), the more self-similar the traffic is. A Hurst parameter H=0.5 characterizes non-self-similar traffic (e.g. Poisson traffic has H=0.5). Measurements of traffic volumes on Internet links have shown self-similar behaviour with Hurst parameters between H=0.6 and H=0.9.

There are many models to estimate queuing delays for queues fed with self-similar traffic. The estimations are usually very complex and beyond the scope of this report. In this report we use the simple estimation in [1] for the average queuing delay Teb of a link e, given by:

Teb = (L/ue) · ρe^(1/(2·(1-H))) / (1-ρe)^(H/(1-H))

where ρe = ye/ue is the average utilization of the link, ye is the average traffic carried by the link and ue its capacity. When H=0.5, the previous formula matches the average queuing delay of an M/M/1 queue.
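A minimal sketch implementing the estimation quoted above (our own helper function, valid only for non-saturated links, i.e. ρe < 1):

    def queuing_delay_s(y_e_bps, u_e_bps, L_bits, H):
        """Average queuing delay estimate; reduces to the M/M/1 value for H = 0.5."""
        rho = y_e_bps / u_e_bps   # average link utilization
        return (L_bits / u_e_bps) * rho ** (1.0 / (2 * (1 - H))) / (1 - rho) ** (H / (1 - H))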

End-to-end path delay model

The average end-to-end delay Tp for a path p is the sum of the average link delays of the traversed links of the path:

Tp = Σe∈p Te

where Te is the average delay (propagation, transmission and queuing) of each link e traversed by the path.
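As a sketch (assuming link_delays maps each link e to its average delay Te in seconds; the names are ours):

    def path_delay_s(path_links, link_delays):
        """End-to-end delay of a path: sum of the average delays of its traversed links."""
        return sum(link_delays[e] for e in path_links)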

Average network delay

It is possible to obtain an average network delay measure, defined as the average end-to-end delay suffered by a packet chosen randomly in the network. The average network delay T is given by:

T = ( Σe ye · Te ) / ( Σd hd )

where ye is the total amount of traffic carried in link e and Te its average delay. Note that we are assuming that all the traffic hd of each demand is carried, and thus the average traffic in each link is below the link capacity. In this report we provide two measures of T: one considering queuing, transmission and propagation delays in the links, and one considering only propagation delays. In general, the higher the bit rates, the smaller the queuing and transmission delays become. In contrast, the longer (in km) the links are, the higher the propagation delays become. In WAN core networks, characterized by high bit rates and long distances, it is common to neglect queuing and transmission delays in the calculations. Providing both values of T helps us estimate whether neglecting all delays but propagation is a valid approximation.
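A minimal sketch of this computation (per-link and per-demand values passed as plain lists; the names are ours, not Net2Plan's API):

    def network_delay_s(y_e_bps, T_e_s, h_d_bps):
        """Traffic-weighted average of the link delays, divided by the total offered traffic."""
        return sum(y * t for y, t in zip(y_e_bps, T_e_s)) / sum(h_d_bps)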

Information tables

Link delay information

#linkDelayTable#

Path delay information

#pathDelayTable#

Network-wide delay information

#networkDelayTable#

References

[1] W. Stallings, High-Speed Networks and Internets: Performance and Quality of Service, Prentice Hall, 2002.