===== Lab 2 - UDP and TCP throughput =====

==== Setup of the tftp server ====

To start a tftp server, use the lab server below the table. The files will be stored in the /var/lib/tftpboot directory. Starting and stopping is done via the tftpd-hpa service (see below).

Make sure that you can create new files in the /var/lib/tftpboot directory. To allow this, change the following option in the file /etc/default/tftpd-hpa:

<code>
TFTP_OPTIONS="--secure -c"
</code>

Disable the firewall:

<code>
sudo ufw disable
</code>

Check, start and stop the tftpd service with:

<code>
sudo service tftpd-hpa status
sudo service tftpd-hpa start
sudo service tftpd-hpa stop
</code>

The tftp server will listen on port 69 for connections. You can test the setup on localhost by opening a new shell and doing:

<code>
echo "Hello, here I am!!!" > hallo.txt
tftp localhost
trace
> put hallo.txt
> quit
</code>

This creates a new file "hallo.txt", which is then transferred to the server and becomes visible in the /var/lib/tftpboot directory.

To create a zero-filled file "neu.txt" with a size of 50 MByte you can use:

<code>
dd if=/dev/zero of=neu.txt bs=1M count=50
</code>

Now start the server on one of the laptops and the client on the other laptop. Test the tftp connection by transferring "hallo.txt" from the client to the server. Run wireshark on the client and on the server to observe the packets which are transferred. Sketch a message sequence chart.

Now do the big thing:

  - Create a file with a size of 50 MByte
  - Use ping to estimate the round trip time between the two computers
  - Estimate the expected transfer time for the file
  - Transfer the file and compare the measurement data with your calculation

Modify the ethernet speed to 10 MBit/s (e.g. with ''ethtool'') and measure again. Set the ethernet speed back to 100 MBit/s after the measurement.

==== Bridge Setup ====

The lab pc will be used as a network emulation device for the connection between the two laptops. Configure the lab pc with its two network cards as a bridge and connect the two laptops directly to the lab pc:

<code>
sudo brctl addbr mybridge
sudo brctl addif mybridge eth1
sudo brctl show
sudo brctl addif mybridge eth2
sudo ifconfig eth1 0.0.0.0
sudo ifconfig eth2 0.0.0.0
sudo ifconfig mybridge up
</code>

The bridge should now be up and running. Test the connection between the two laptops with ping and measure the RTT. Do the tftp transfer test again - now over the software bridge between the two laptops.

All further modifications regarding latency and packet loss should be done on the lab pc and not on the laptops. The reason is that wireshark would tap the traffic after the netem traffic control module.

==== Using Netem for modelling latency and packet loss on the channel ====

To add a delay of 50 ms to the outgoing traffic:

<code>
sudo tc qdisc add dev eth2 root netem delay 50ms
</code>

To modify the delay setting use:

<code>
sudo tc qdisc change dev eth2 root netem delay 10ms
</code>

To add packet loss, you can do:

<code>
sudo tc qdisc change dev eth2 root netem delay 10ms loss 10.0%
</code>

Note that netem acts on the outgoing traffic of one device only, so a delay configured on a single bridge port affects only one direction of the round trip. Make measurements and show the relation between the configured latency and the measured RTT.

==== tftp Delay Tests ====

  * Create a file with 1 MByte size
  * Remove all netem traffic shapers
  * Measure the tftp transfer time
  * Do a ping RTT analysis. What is the RTT according to ping?
  * Add a delay of 20 ms to each ethernet device
  * Measure the ping RTT again
  * Measure the tftp transfer time again
  * Set the delay to 30 ms and to 40 ms and measure again
  * Make a prediction model to predict the transfer time from the latency (see the sketch below)
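Since tftp is a stop-and-wait protocol, each 512-byte data block must be acknowledged before the next one is sent, so every block costs roughly one RTT plus its serialization time. A minimal sketch of such a prediction model, with assumed example values for file size, RTT and link rate (protocol headers and ACK serialization are neglected):

<code>
#!/bin/sh
# Sketch of a stop-and-wait prediction model for the tftp transfer time.
# All values below are assumed examples - replace them with your measurements.
FILESIZE=$((1024*1024))   # file size in bytes (1 MByte)
BLOCKSIZE=512             # tftp default data block size
RTT_MS=40                 # measured ping RTT in milliseconds
RATE=100000000            # link rate in bit/s (100 MBit/s)

BLOCKS=$(( (FILESIZE + BLOCKSIZE - 1) / BLOCKSIZE ))

# total time = number of blocks * (RTT + serialization time per block)
awk -v n="$BLOCKS" -v rtt="$RTT_MS" -v bs="$BLOCKSIZE" -v rate="$RATE" 'BEGIN {
    t = n * (rtt / 1000 + bs * 8 / rate)
    printf "blocks: %d, predicted transfer time: %.1f s\n", n, t
}'
</code>

With 2048 blocks and an RTT of 40 ms this predicts about 82 s, i.e. at higher latencies the transfer time is dominated by the RTT rather than by the link rate.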
==== tftp packet loss tests ====

  * Setup the link with no additional delay
  * Add packet loss with netem
  * Measure the impact on the transfer time
  * Analyze the behaviour in case of packet loss with wireshark
  * Show the behaviour for all possible packet loss cases (data, ack)
  * Modify the packet loss rates and show the impact on the transfer time
  * Explain the behaviour and make a prediction model for the transfer time as a function of the packet loss rate

==== tftp with packet loss and latency ====

  * Produce a prediction model for the combined influence of latency and packet loss
  * Predict the transfer time with your model
  * Check your predictions with measurements for different latencies and packet loss rates

==== TCP transfer ====

Analyze and compile the tcp client and server programs "client.c" and "server.c".

  * Run the server on one PC and the client on the laptop
  * Run wireshark to analyze the tcp connection. Sketch a message sequence diagram
  * Modify the server and client code to transfer about 1 MByte of data as fast as possible
  * Measure the transfer time of the tcp connection over the bridge with no delay
  * Measure the speed with iperf
  * Add a delay of 10 ms to each ethernet device in the bridge and do the measurement again
  * Measure with 20 ms, 30 ms and 40 ms

==== TCP contention window tracking ====

One important part of the tcp protocol design is the behavior in network congestion situations. TCP controls the transmit rate by modifying the sliding window size and by ACK clocking. The transmit window size is determined by the available local memory, the receiver window size and the congestion window size (cwnd). The congestion window size parameter is internal to the tcp protocol software, i.e. it cannot be identified by looking at the transmitted packets. In contrast, the receiver window size is announced during the packet exchange.

In order to monitor the contention window size, the linux kernel provides a tracing module called "tcp_probe". Once the module is loaded you can trace some tcp connection parameters. See the configuration in [[mscom_network_start#Adding tcpprobe traffic log]]. You can then use "gnuplot" to produce charts from the data.

Please do all network latency and throughput emulation on the bridge and use the tracing with "tcp_probe" on the transmitting computer. The trace data contains the contention window size "cwnd" and the slow start threshold "ssthr".

  - Setup the bridge on the computer with the two network cards
  - Measure the tcp throughput with iperf
  - Add the cwnd probing by loading the tcp_probe module on the computer that runs the iperf client
  - Log the trace data to a file
  - Produce iperf traffic for about 20 seconds
  - Produce a chart with gnuplot to show the development of the cwnd size and the ssthr (see the sketch below). You should see a fast increase of the window size until a stable value is reached.
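A minimal plotting sketch for this chart, assuming the trace was logged to /tmp/tcpprobe.log (a hypothetical file name, e.g. via ''cat /proc/net/tcpprobe > /tmp/tcpprobe.log'') and that the log uses the classic tcp_probe column layout with the time in column 1, cwnd in column 7 and the slow start threshold in column 8:

<code>
# Sketch: plot cwnd and ssthr from a tcp_probe trace file.
# Assumptions: log file /tmp/tcpprobe.log, column 1 = time in seconds,
# column 7 = snd_cwnd, column 8 = ssthresh (classic tcp_probe format).
gnuplot -persist <<'EOF'
set xlabel "time [s]"
set ylabel "segments"
plot "/tmp/tcpprobe.log" using 1:7 title "cwnd" with lines, \
     "/tmp/tcpprobe.log" using 1:8 title "ssthr" with lines
EOF
</code>

Until the first loss event the kernel reports a very large initial ssthresh value, so it can help to restrict the y-range (e.g. ''set yrange [0:500]'') to keep the cwnd curve visible.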
==== TCP slow start analysis ====

It is easier to view the slow start behavior with some latency in the connection. The linux kernel sets an initial contention window size of 10 MSS in [[https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/net/tcp.h|tcp.h]].

  - Add a latency of 100 ms to each outgoing network device in the bridge with netem
  - Change the tcp congestion avoidance algorithm to "reno" according to [[mscom_network_start#TCP Congestion avoidance algorithm selection]] on the computers running iperf
  - Trace 20 seconds of iperf traffic and produce a chart of cwnd again
  - Change the initial contention window size to 1 MSS according to [[mscom_network_start#Initial TCP contention window size]]
  - Trace and produce a chart again. You should now see the slow start as in the books...

==== Congestion Avoidance Algorithm ====

The congestion avoidance behavior cannot be observed when no packets are lost inside the network.

  - Setup the bridge with token bucket filters for a rate of 10 MBit/s and use netem to add a latency of 20 ms in each direction (see the sketch at the end of this section)
  - Configure the tcp transmit and receive memory restrictions to 16 MByte according to [[mscom_network_start#TCP configuration]] on the computers running iperf (not the bridge). Make sure that reno is still selected as the congestion avoidance algorithm.
  - Start the tcp_probe tracing on the computer which runs the iperf client
  - Run iperf for about 30 seconds
  - Produce a log of the cwnd and ssthr values according to [[mscom_network_start#Adding tcpprobe traffic log]]

I could not produce packet loss using netem and tbf traffic shaping on the same computer where iperf runs with a wired ethernet card. I guess this is due to internal flow control, i.e. no packets are dropped inside the kernel. I could however produce this situation when using a wireless connection.

In the next setup the dynamic behavior of congestion avoidance and rate control is analyzed:

  - Setup the bridge with a 20 MBit/s rate limitation and 20 ms latency
  - Add one switch to each side of the bridge and add two computers to each switch. Computer A and B should now be on the left side of the bridge and C and D on the right side.
  - Start a traffic flow from computer A to C with iperf, including tcp_probe tracing
  - Now start a second flow from computer B to D with iperf, including tcp_probe tracing
  - The first flow should end while the second flow is still running
  - Produce a graph with cwnd and ssthr showing the dynamic adaptation of the rate
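A possible command sequence for the bridge setup with token bucket filters and netem latency (referenced above), following the combination shown in the netem documentation: netem is installed as the root qdisc and the token bucket filter is attached below it. The buffer and limit values are assumed examples and may need tuning; repeat the commands for the second bridge port, and adapt the rate to 20 MBit/s for the dynamic setup:

<code>
# Sketch: 10 MBit/s token bucket filter combined with a 20 ms delay on eth1.
# buffer/limit are assumed example values and may need tuning.
sudo tc qdisc add dev eth1 root handle 1:0 netem delay 20ms
sudo tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 10mbit buffer 16000 limit 30000

# verify the resulting qdisc stack
tc qdisc show dev eth1
</code>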