Lab 2 - UDP and TCP throughput

Setup of the tftp server

To run the tftp server, use the lab server. Transferred files will be stored in the /var/lib/tftpboot directory.

Make sure that you can create new files in the /var/lib/tftpboot directory.

To allow clients to create new files (option -c), change the following line in /etc/default/tftpd-hpa:
TFTP_OPTIONS="--secure -c"

Disable the firewall

sudo ufw disable

Check, start, and stop the tftpd service with

sudo service tftpd-hpa status
sudo service tftpd-hpa start
sudo service tftpd-hpa stop

The tftp server listens on UDP port 69 for incoming requests. You can test the setup on localhost by opening a new shell and doing

echo "Hallo hier bin ich!!!" > hallo.txt
tftp localhost
trace
> put hallo.txt
> quit

This creates a new file “hallo.txt”. The file is then transferred to the server and becomes visible in the /var/lib/tftpboot directory.

In order to create a zero-filled file “neu.txt” with a size of 50 MByte you can use

dd if=/dev/zero of=neu.txt bs=1M count=50

Now start the server on one of the laptops and the client on the other laptop. Test the tftp connection by transferring “hallo.txt” from the client to the server. Run wireshark on both the client and the server to observe the packets that are transferred. Sketch a message sequence chart.

Now do the big thing:

  1. Create a file with a size of 50 MByte
  2. Use ping to estimate the round trip time between the two computers
  3. Estimate the expected transfer time for the file (see the sketch after this list)
  4. Transfer the file and compare the measurement data with your calculation.
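
Because tftp is a stop-and-wait protocol, every 512-byte data block must be acknowledged before the next one is sent, so the per-block round trip dominates the transfer time. A minimal sketch of the estimate, assuming the default 512-byte block size and a measured RTT of 1 ms (replace RTT_MS with your own value):

RTT_MS=1                          # measured round trip time in milliseconds (assumption)
FILESIZE=$((50 * 1024 * 1024))    # 50 MByte
BLOCKS=$((FILESIZE / 512))        # 102400 data blocks
echo "$((BLOCKS * RTT_MS / 1000)) seconds (lower bound, ignoring serialization time)"

Even with an RTT of only 1 ms this gives roughly 102 seconds, far more than the raw link rate would suggest.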

Modify the ethernet speed to 10 MBit/s and measure again. Set the ethernet speed back to 100 MBit/s after the measurement.
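
One way to change the link speed is ethtool; the interface name eth0 is an assumption, replace it with your own:

sudo ethtool -s eth0 speed 10 duplex full autoneg off    # force 10 MBit/s
sudo ethtool eth0                                        # verify the current speed
sudo ethtool -s eth0 autoneg on                          # re-enable autonegotiation afterwards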

Bridge Setup

The lab pc will be used as a network emulation device for the connection between the two laptops. Configure the lab pc with its two network cards as a bridge. Connect the two laptops directly to the lab pc.

sudo brctl addbr mybridge
sudo brctl addif mybridge eth1
sudo brctl show
sudo brctl addif mybridge eth2
sudo ifconfig eth1 0.0.0.0
sudo ifconfig eth2 0.0.0.0
sudo ifconfig mybridge up

The bridge should now be up and running. Test the connection between the two laptops with ping and measure the RTT. Do the tftp transfer test again - this time across the software bridge between the two laptops.

All further modifications regarding latency and packet loss should be done on the lab pc and not on the laptops. The reason is that wireshark would otherwise tap the traffic after the netem traffic control module.

Using Netem for modelling latency and packet loss on the channel

In order to add 50 ms of delay to the outgoing traffic on eth2:

sudo tc qdisc add dev eth2 root netem delay 50ms

To modify the delay setting use

sudo tc qdisc change dev eth2 root netem delay 10ms

To add packet loss on top of the delay, you can do

sudo tc qdisc change dev eth2 root netem delay 10ms loss 10.0%

Make measurements and show the relation between configured latency and measured RTT.
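
A possible measurement sequence; eth2 is the bridge's outgoing interface and 192.168.1.2 stands for the other laptop's address (both are assumptions):

# On the lab pc: step through several delay values
for d in 10 20 50 100; do
    sudo tc qdisc change dev eth2 root netem delay ${d}ms
    read -p "delay ${d} ms set - press enter after measuring "
done
sudo tc qdisc del dev eth2 root    # remove the emulation when done

# On one laptop, for each delay value: measure the RTT
ping -c 10 192.168.1.2

The measured RTT should be roughly the base RTT plus the configured delay (plus twice the delay if you configure it on both interfaces).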

tftp delay tests

tftp packet loss tests

tftp with packet loss and latency

TCP transfer

Analyze and compile the tcp client and server programs “client.c” and “server.c”.
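
Assuming the programs need no libraries beyond the standard C library, compiling amounts to:

gcc -Wall -o server server.c
gcc -Wall -o client client.c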

TCP congestion window tracking

One important part of the TCP protocol design is the behavior in network congestion situations. TCP controls the transmit rate by modifying the sliding window size and by ACK clocking. The transmit window size is determined by the available local memory, the receiver window size and the congestion window size (cwnd). The congestion window size is internal to the TCP protocol software, i.e. it cannot be identified by looking at the transmitted packets. In contrast, the receiver window size is announced during the packet exchange.
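
In short, the amount of unacknowledged data a sender may have in flight at any time is

    send window = min(cwnd, receiver window, local buffer limit)

and of these three values only the receiver window is visible on the wire.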

In order to monitor the congestion window size, the Linux kernel provides a tracing module called “tcp_probe”. Once the module is loaded you can trace some TCP connection parameters. See the configuration in Adding tcpprobe traffic log.
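
The details are in the referenced section; a minimal sketch of loading the module and capturing the trace, assuming the default iperf port 5001:

sudo modprobe tcp_probe port=5001 full=1    # full=1 logs every received ACK, not only cwnd changes
sudo cat /proc/net/tcpprobe > /tmp/tcpprobe.out &
# ... run the iperf test ...
sudo pkill -f "cat /proc/net/tcpprobe"      # stop logging afterwards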

You can then use “gnuplot” to produce charts from the data. Please do all network latency and throughput emulation on the bridge and run the “tcp_probe” tracing on the transmitting computer. The trace data contains the congestion window size “cwnd” and the slow start threshold “ssthr”.

  1. Set up the bridge on the computer with the two network cards
  2. Measure the tcp throughput with iperf
  3. Enable cwnd probing by loading the tcp_probe module on the computer that runs the iperf client.
  4. Log the trace data to a file
  5. Produce iperf traffic for about 20 seconds.
  6. Produce a chart with gnuplot to show the development of the cwnd size and the ssthr (see the sketch after this list).
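
A possible sequence for steps 2 to 6, assuming tcp_probe logs to /tmp/tcpprobe.out as sketched above and that time, cwnd and ssthr are in columns 1, 7 and 8 of the trace (check the output format of your kernel version):

iperf -s                      # on the receiving computer
iperf -c 192.168.1.2 -t 20    # on the sending computer; 192.168.1.2 is a placeholder

gnuplot -persist -e 'plot "/tmp/tcpprobe.out" using 1:7 with steps title "cwnd", "" using 1:8 with steps title "ssthr"'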

You should see a fast increase of the window size until a stable value is reached.

TCP slow start analysis

It is easier to view the slow start behavior with some latency in the connection. The Linux kernel sets an initial congestion window size of 10 MSS in tcp.h.

  1. Add a latency of 100 ms to each outgoing network device in the bridge with netem
  2. Change the TCP congestion avoidance algorithm to “reno” according to TCP Congestion avoidance algorithm selection on the computers running iperf (see the sysctl sketch after this list).
  3. Trace 20 seconds of iperf traffic and produce a chart of cwnd again.
  4. Change the initial congestion window size to 1 MSS according to Initial TCP congestion window size (an alternative sketch follows this list)
  5. Trace and produce a chart again. You should now see the slow start as in the books…
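
On recent Linux kernels, step 2 amounts to a sysctl switch; a sketch:

sudo sysctl -w net.ipv4.tcp_congestion_control=reno
sysctl net.ipv4.tcp_available_congestion_control    # list the algorithms your kernel offers

For step 4, besides the method in the referenced section, the initial window can also be set per route without recompiling; the route and device are assumptions, adapt them to your setup:

sudo ip route change 192.168.1.0/24 dev eth0 initcwnd 1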

Congestion Avoidance Algorithm

Congestion avoidance cannot be observed when no packets are lost inside the network.

  1. Set up the bridge with token bucket filters for a rate of 10 MBit/s and use netem to add a latency of 20 ms in each direction (see the tc sketch after this list)
  2. Configure the TCP transmit and receive memory restrictions to 16 MByte according to TCP configuration on the computers running iperf (not the bridge). Make sure that reno is still selected as the congestion avoidance algorithm (see the sysctl sketch after this list).
  3. Start the tcp_probe tracing on the computer which runs the iperf client
  4. Run iperf for about 30 seconds
  5. Produce a log of the cwnd and ssthr values according to Adding tcpprobe traffic log
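
For step 1, one documented way to combine rate limitation and delay is to chain a token bucket filter below netem on each bridge port; the buffer and limit values here are assumptions that may need tuning:

sudo tc qdisc add dev eth1 root handle 1: netem delay 20ms
sudo tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 10mbit buffer 10kb limit 30kb
# repeat both commands for eth2 to shape the other direction

For step 2, the memory limits map to sysctls on the iperf computers (16777216 bytes = 16 MByte); the min and default values are left at typical settings:

sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sysctl net.ipv4.tcp_congestion_control    # should still report reno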

I could not produce packet loss using netem and tbf traffic shaping on the same computer where iperf runs when using a wired ethernet card. I guess this is due to internal flow control, i.e. no packets are dropped inside the kernel. I could, however, produce this situation when using a wireless connection.

In the next setup the dynamic behavior of congestion avoidance and rate control is analyzed:

  1. Set up the bridge with a 20 MBit/s rate limitation and 20 ms latency
  2. Add one switch to each side of the bridge and connect two computers to each switch. Computers A and B should now be on the left side of the bridge and C and D on the right side.
  3. Start a traffic flow from computer A to C with iperf including tcp_probe tracing
  4. Now start a second flow from computer B to D with iperf including tcp_probe tracing
  5. The first flow should end while the second flow is still running.
  6. Produce a graph with cwnd and ssthr showing the dynamic adaptation of the rate