To start a tftp server use the lab server below the table. The files will be stored in the /var/lib/tftpboot directory. Checking the status, starting and stopping is done via

sudo service tftpd-hpa status
sudo service tftpd-hpa start
sudo service tftpd-hpa stop
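
Whether the daemon is actually running and listening can be checked with, for example (a sketch, assuming the ss tool is installed; netstat -ulpn works as well):

sudo ss -ulpn | grep 69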

The tftp server will listen on port 69 for connections. You can test the setup on the localhost by opening a new shell and doing

echo "Hallo hier bin ich!!!" > hallo.txt
tftp localhost
trace
> put hallo.txt
> quit

This will create a new file “hallo.txt”. This file is then transferred to the server and is visible in the /var/lib/tftpboot directory.

In order to create a zero-filled file “neu.txt” with a size of 50 MByte you can use

dd if=/dev/zero of=neu.txt bs=1M count=50

Now start the server on the lab pc and the client on your laptop. Test the tftp connection by transferring “hallo.txt” from the client to the server. Run wireshark on the client and on the server to observe the packets which are transferred. Sketch a message sequence chart.
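
Note that only the initial write request goes to port 69; the data and ACK packets are exchanged between two ephemeral ports, so a capture filter on port 69 alone would miss them. In wireshark the display filter tftp shows the complete exchange; on the command line you can capture for example with (a sketch, assuming the capture interface is eth2):

sudo tcpdump -i eth2 -n udp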

Now do the big thing:

  1. Create a file with a size of 50 MByte
  2. Use ping to estimate the round trip time between the two computers
  3. Estimate the expected transfer time for the file (see the hint after this list)
  4. Transfer the file and compare the measurement data with your calculation.
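
As a hint for step 3: tftp is a stop-and-wait protocol which by default transfers 512 byte blocks and waits for an ACK after each block. A rough estimate, neglecting header overhead and the ACK transmission time, with an assumed RTT of 0.2 ms (use your own measured values):

blocks  = 50 MByte / 512 Byte        = 102400
t_block = 512 Byte * 8 / 100 MBit/s  ≈ 0.04 ms
t_total ≈ blocks * (RTT + t_block)   ≈ 102400 * 0.24 ms ≈ 25 s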

Modify the ethernet speed to 10 MBit/s and measure again. Set the ethernet speed back to 100 MBit/s after the measurement.
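
One way to change the link speed is ethtool (a sketch, assuming the device is eth2; use your actual interface name):

sudo ethtool -s eth2 speed 10 duplex full autoneg off
sudo ethtool eth2
sudo ethtool -s eth2 speed 100 duplex full autoneg off

The second command shows the negotiated link settings; the last one sets the speed back to 100 MBit/s.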

In order to add a 50 ms delay to the outgoing traffic, use

sudo tc qdisc add dev eth2 root netem delay 50ms

To modify the delay setting use

sudo tc qdisc change dev eth2 root netem delay 10ms

To add packet loss on top of the delay, you can do

sudo tc qdisc change dev eth2 root netem delay 10ms loss 10.0%
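
The currently installed queueing discipline can be inspected, and the netem setup removed again (needed below whenever all traffic shapers should be removed), with:

sudo tc qdisc show dev eth2
sudo tc qdisc del dev eth2 root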

Make measurements and show the relation between configured latency and measured RTT.

  • Create a file with 1 MByte size
  • Remove all netem traffic shapers
  • Measure the tftp transfer time
  • Do a ping RTT analysis. What is the RTT according to ping?
  • Add a delay of 20 ms to each ethernet device
  • Measure ping RTT again.
  • Measure the tftp transfer time again.
  • Set the delay to 30 ms and to 40 ms and measure again.
  • Make a model to predict the transfer time from the latency
  • Set up the link with no additional delay
  • Add packet loss with netem
  • Measure the impact on transfer time
  • Analyze the behaviour in case of packet loss with wireshark
  • Show the behaviour for all possible packet loss cases (data, ACK)
  • Modify the packet loss rates and show the impact on the transfer time
  • Explain the behaviour and make a prediction model for the transfer time as a function of the packet loss rate
  • Produce a prediction model for the combined influence of latency and packet loss (a possible model form is sketched after this list)
  • Predict the transfer time with your model
  • Check your predictions with measurements for different latencies and packet loss rates.
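
A possible starting point for the combined model, assuming stop-and-wait transfer of 512 byte blocks and a fixed retransmission timeout T_retx whenever a data packet or the corresponding ACK is lost (check the actual timeout behaviour with wireshark; p_data and p_ack are the configured loss rates):

N        = file size / 512 Byte
t_block  = 512 Byte * 8 / link rate
p_block  ≈ p_data + p_ack
T        ≈ N * (RTT + t_block) + N * (p_block / (1 - p_block)) * T_retx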

Analyze and compile the tcp client and server programs “client.c” and “server.c”.
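
A plain compile should be sufficient (a sketch; add further options if the programs require them):

gcc -Wall -O2 -o server server.c
gcc -Wall -O2 -o client client.c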

  • Run the server on one PC and the client on the laptop
  • Run wireshark to analyze the tcp connection. Sketch a message sequence diagram
  • Modify the server and client code to transfer about 1 MByte of data as fast as possible
  • Measure the transfer time over the tcp connection over the bridge with no delay.
  • Measure the speed with iperf (an example invocation is shown after this list)
  • Add a delay of 10ms to each ethernet device in the bridge and do the measurement again.
  • Measure with 20ms, 30ms and 40ms
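
An example iperf invocation (a sketch, assuming iperf version 2 with its default port 5001; replace the address with the address of your server pc):

iperf -s
iperf -c 192.168.1.10 -t 20 -i 1

The first command runs on the server, the second on the client; -t sets the test duration in seconds and -i the reporting interval.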

One important part of the tcp protocol design is the behavior in network congestion situations. TCP controls the transmit rate by modifying the sliding window size and by ACK clocking. The transmit window size is determined by the available local memory, the receiver window size and the congestion window size (cwnd). The congestion window size is internal to the tcp protocol software, i.e. it cannot be identified by looking at the transmitted packets. In contrast, the receiver window size is announced during the packet exchange.

In order to monitor the congestion window size, the Linux kernel provides a tracing module called “tcp_probe”. Once the module is loaded you can trace some tcp connection parameters. See the configuration in Adding tcpprobe traffic log.
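
The referenced configuration contains the details; a typical sequence looks roughly like this (a sketch, assuming the classic /proc/net/tcpprobe interface and the default iperf port 5001):

sudo modprobe tcp_probe port=5001 full=1
sudo chmod 444 /proc/net/tcpprobe
cat /proc/net/tcpprobe > /tmp/tcpprobe.log &

Stop the logging cat process with kill after the measurement and unload the module with sudo modprobe -r tcp_probe.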

You can then use “gnuplot” to produce charts from the data. Please do all network latency and throughput emulation on the bridge and use the tracing with “tcp_probe” on the transmitting computer. The trace data contains the congestion window size “cwnd” and the slow start threshold “ssthr”.
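
A minimal gnuplot sketch, assuming the usual tcp_probe log format where column 1 is the time stamp, column 7 the cwnd and column 8 the ssthr (verify the column numbers in your own log file):

gnuplot <<'EOF'
set terminal png size 800,480
set output "cwnd.png"
set xlabel "time [s]"
set ylabel "segments"
plot "/tmp/tcpprobe.log" using 1:7 title "cwnd" with lines, \
     "/tmp/tcpprobe.log" using 1:8 title "ssthr" with lines
EOF

The initial ssthr value is very large, so you may want to restrict the y axis with set yrange.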

  1. Set up the bridge on the computer with the two network cards
  2. Measure the tcp throughput with iperf
  3. Add the cwnd probing by loading the tcp_probe module on the computer that runs the iperf client.
  4. Log the trace data to a file
  5. Produce iperf traffic for about 20 seconds.
  6. Produce a chart with gnuplot to show the development of the cwnd size and the ssthr.

You should see a fast increase of the window size until a stable value is reached.

It is easier to view the slow start behavior with some latency in the connection. The Linux kernel sets an initial congestion window size of 10 MSS in tcp.h.
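
The relevant definition can be found in include/net/tcp.h of recent (3.x) kernels:

#define TCP_INIT_CWND 10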

  1. Add a latency of 100 ms to each outgoing network device in the bridge with netem
  2. Change the tcp congestion avoidance algorithm to “reno” according to TCP Congestion avoidance algorithm selection on the computers running iperf (a sysctl sketch follows this list).
  3. Trace 20 seconds of iperf traffic and produce a chart of cwnd again.
  4. Change the initial congestion window size to 1 MSS according to Initial TCP contention window size
  5. Trace and produce a chart again. You should now see the slow start as in the books…
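
For reference, the congestion avoidance algorithm can be selected via sysctl on most Linux systems (the lab-specific procedure is described in the referenced section):

sysctl net.ipv4.tcp_available_congestion_control
sudo sysctl -w net.ipv4.tcp_congestion_control=reno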

Congestion avoidance cannot be observed when no packets are lost inside the network.

  1. Set up the bridge with token bucket filters for a rate of 10 MBit/s and use netem to add a latency of 20 ms in each direction (a possible tc sketch is shown after this list)
  2. Configure the tcp transmit and receive memory restrictions to 16 MByte according to TCP configuration on the computers running iperf (not the bridge). Make sure that reno is still selected as the congestion avoidance algorithm.
  3. Start the tcp_probe tracing on the computer which runs the iperf client
  4. Run iperf for about 30 seconds
  5. Produce a log of the cwnd and ssthr values according to Adding tcpprobe traffic log
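
One way to combine the rate limitation and the delay on a bridge port is to chain a token bucket filter below netem (a sketch, assuming eth2 is one of the bridge ports; repeat for the other port and tune burst and latency if the measured rate does not match):

sudo tc qdisc add dev eth2 root handle 1:0 netem delay 20ms
sudo tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 10mbit burst 16kb latency 50ms

The 16 MByte socket memory limits on the iperf computers can typically be set like this (a sketch; the authoritative values are in the referenced TCP configuration section):

sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"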

I could not produce packet loss using netem and tbf traffic shaping on the same computer where iperf runs with a wired ethernet card. I guess this is due to internal flow control, i.e. no packets are dropped inside the kernel. I could, however, produce this situation when using a wireless connection.

In the next setup the dynamic behavior of congestion avoidance and rate control is analyzed.

  1. Set up the bridge with a 20 MBit/s rate limitation and 20 ms latency
  2. Add one switch to each side of the bridge and connect two computers to each switch. Now computers A and B should be on the left side of the bridge and C and D on the right side.
  3. Start a traffic flow from computer A to C with iperf including tcp_probe tracing
  4. Now start a second flow from computer B to D with iperf including tcp_probe tracing
  5. The first flow should end while the second flow is still running.
  6. Produce a graph with cwnd and ssthr showing the dynamic adaptation of the rate

Configure the lab pc on the top shelf with the two network cards as a bridge.

sudo brctl addbr mybridge
sudo brctl addif mybridge eth2
sudo brctl show
sudo brctl addif mybridge eth3
sudo ifconfig eth2 0.0.0.0
sudo ifconfig eth3 0.0.0.0
sudo ifconfig mybridge up

The bridge should now be up and running. Cable the pc as a bridge between the lab pc and the laptop and test the connection.
