  
To start a tftp server use the lab server below the table. The files will be stored in the /var/lib/tftpboot directory. Starting and stopping is done via the service commands shown below.

Make sure that you can create new files in the /var/lib/tftpboot directory.
<code>
# change in file /etc/default/tftpd-hpa:
# --secure chroots the server into the tftp directory, -c allows clients to create new files
TFTP_OPTIONS="--secure -c"
</code>

Disable the firewall
<code>
sudo ufw disable
</code>

Start the tftpd service
  
<code>
# assumed control command for the tftpd-hpa package on Ubuntu
sudo service tftpd-hpa restart
</code>
  
Now start the server on one of the laptops and the client on the other laptop. Test the tftp connection by transferring "hallo.txt" from the client to the server. Run wireshark on the client and on the server to observe the packets which are transferred. Sketch a message sequence chart.
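
A minimal sketch of such a transfer with the standard tftp command line client, assuming the server laptop has the IP address 192.168.0.1 (the address is a placeholder for this setup):

<code>
echo "hallo" > hallo.txt
tftp 192.168.0.1
tftp> put hallo.txt
tftp> quit
</code>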
  
  
  - Transfer the file and compare the measurement data with your calculation.
  
Modify the ethernet speed to 10 MBit/s and measure again. Set the ethernet speed back to 100 MBit/s after the measurement.
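
One way to switch the link speed is ethtool; a minimal sketch assuming the laptop interface is named eth0 (adjust the device name to your setup):

<code>
# force the link to 10 MBit/s full duplex
sudo ethtool -s eth0 speed 10 duplex full autoneg off
# restore 100 MBit/s after the measurement
sudo ethtool -s eth0 speed 100 duplex full autoneg off
</code>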
  
==== Bridge Setup ====
  
-Configure the lab pc on the top shelf with the two network cards as bridge. ​+The lab pc will be used a network emulation device for the connection between the two laptops. ​Configure the lab pc with the two network cards as bridge. Connect the two laptops directly to the lab pc.
  
<code>
sudo brctl addbr mybridge        # create the bridge device
sudo brctl addif mybridge eth1   # attach the first network card
sudo brctl show                  # check the bridge setup
sudo brctl addif mybridge eth2   # attach the second network card
sudo ifconfig eth1 0.0.0.0       # remove the IP addresses from the bridge ports
sudo ifconfig eth2 0.0.0.0
sudo ifconfig mybridge up        # activate the bridge
</code>
  
The bridge should now be up and running. Test the connection between the two laptops with ping and measure the RTT. Do the tftp transfer test again - now with the software bridge between the two laptops.
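
A minimal ping sketch, assuming the other laptop has the IP address 192.168.0.2 (placeholder address):

<code>
ping -c 10 192.168.0.2
</code>

The "rtt min/avg/max/mdev" summary line at the end of the output reports the measured round trip times.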

All further modifications regarding latency and packet loss should be done on the lab pc and not on the laptops. The reason is that wireshark will tap the traffic after the netem traffic control module.

==== Using Netem for modelling latency and packet loss on the channel ====
  
In order to add 50 ms delay to the outgoing traffic of eth2:
  
<code>
sudo tc qdisc add dev eth2 root netem delay 50ms
</code>
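
The currently active queueing discipline can be inspected with the show subcommand:

<code>
sudo tc qdisc show dev eth2
</code>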
  
In order to change the delay of the existing netem qdisc to 10 ms:
  
<code>
sudo tc qdisc change dev eth2 root netem delay 10ms
</code>
  
In order to additionally drop packets with a loss rate of 10%:
  
<code>
sudo tc qdisc change dev eth2 root netem delay 10ms loss 10.0%
</code>

Make measurements and show the relation between configured latency and measured RTT. Note that netem delays only the egress traffic of the device it is attached to, so a delay configured on a single bridge port adds to one direction of the round trip only.
  
==== tftp Delay Tests ====
  
  * Create a file with 1 MByte size (see the sketch after this list)
  * Remove all netem traffic shapers
  * Measure the tftp transfer time
  * Do a ping RTT analysis. What is RTT according to ping?
  * Add a delay of 20 ms to each ethernet device
  * Measure ping RTT again.
  * Measure the tftp transfer time again.
  * Set the delay to 30 ms and to 40 ms and measure again
  * Make a prediction model to predict transfer time from latency
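
A minimal sketch for the file creation and the shaper removal, assuming the netem qdiscs were added as root qdiscs on eth1 and eth2 as above:

<code>
# create a 1 MByte test file
dd if=/dev/zero of=testfile.bin bs=1M count=1
# remove the netem traffic shapers from both bridge ports
sudo tc qdisc del dev eth1 root
sudo tc qdisc del dev eth2 root
</code>

For the prediction model it helps to recall that tftp is a stop-and-wait protocol: each 512 byte data block is acknowledged before the next one is sent, so a 1 MByte file takes roughly 2048 round trips and the transfer time should grow about linearly with the configured latency.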

==== tftp packet loss tests ====

  * Setup the link with no additional delay
  * Add packet loss with netem
  * Measure the impact on transfer time
  * Analyze the behaviour in case of packet loss with wireshark
  * Show the behaviour for all possible packet loss cases (data, ack)
  * Modify the packet loss rates and show the impact on the transfer time
  * Explain the behaviour and make a prediction model for transfer time and packet loss rate
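
A possible starting point for such a model (an assumption to check against your measurements): with per-packet loss rate p for both data and ack packets and a tftp retransmission timeout T_o, a block exchange fails when either the data packet or its ack is lost, and every failure costs one extra timeout:

<code>
q = 1 - (1 - p)^2                 # probability that a block round trip fails
T ≈ N * (RTT + q/(1-q) * T_o)     # N = number of 512 byte data blocks
</code>

Since the configured latency enters only through the RTT term, the same expression can also serve as the combined latency and packet loss model asked for in the next section.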

==== tftp with packet loss and latency ====

  * Produce a prediction model for combined latency and packet loss influence
  * Predict the transfer time with your model
  * Check your predictions with measurements for different latencies and packet loss rates
  
==== TCP transfer ====
One important part of the tcp protocol design is the behavior in network congestion situations. TCP controls the transmit rate by modifying the sliding window size and ACK clocking. The transmit window size is determined by the available local memory, the receiver window size and the congestion window size (cwnd). The congestion window size parameter is internal to the tcp protocol software, i.e. it can not be identified by looking at the transmitted packets. In contrast the receiver window size is announced during packet exchange.
  
In order to monitor the status of the congestion window size, the linux kernel provides a tracing module called "tcp_probe". Once the module is loaded you can trace some tcp connection parameters. See the configuration in [[mscom_network_start#Adding tcpprobe traffic log]].
  
You can then use "gnuplot" to produce charts from the data. Please do all network latency and throughput emulation on the bridge and use the tracing with "tcp_probe" on the transmitting computer. The trace data contains the congestion window size "cwnd" and the slow start threshold "ssthr".
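
A minimal gnuplot sketch, assuming the tcp_probe trace was saved as /tmp/tcpprobe.out and that time, cwnd and ssthr are in columns 1, 7 and 8 (verify the column layout of your trace first):

<code>
gnuplot -persist <<EOF
set xlabel "time [s]"
set ylabel "window [segments]"
plot "/tmp/tcpprobe.out" using 1:7 title "cwnd" with lines, \
     "/tmp/tcpprobe.out" using 1:8 title "ssthr" with lines
EOF
</code>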
  
  - Add a latency of 100 ms to each outgoing network device in the bridge with netem
  - Change the tcp congestion avoidance algorithm to "reno" according to [[mscom_network_start#TCP Congestion avoidance algorithm selection]] on the computers running iperf.
  - Trace 20 seconds of iperf traffic and produce a chart of cwnd again (see the iperf sketch after this list).
  - Change the initial contention window size to 1 MSS according to [[mscom_network_start#Initial TCP contention window size]]
  - Trace and produce a chart again. You should now see the slow start as in the books...
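
A minimal iperf sketch, assuming the receiving laptop has the IP address 192.168.0.2 (placeholder address):

<code>
# on the receiving laptop
iperf -s
# on the transmitting laptop: 20 second tcp test
iperf -c 192.168.0.2 -t 20
</code>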
==== Congestion Avoidance Algorithm ====
  
  
  - Setup the bridge with token bucket filters for a rate of 10MBit/s and use netem to add a latency of 20ms in each direction (see the sketch after this list)
  - Configure the tcp transmit and receive memory restrictions to 16 MByte according to [[mscom_network_start#TCP configuration]] on the computers running iperf (not the bridge). Make sure that "reno" is still selected as the congestion avoidance algorithm.
  - Start the tcp_probe tracing on the computer which runs the iperf client
  - Run iperf for about 30 seconds
  - Produce a log of the cwnd and ssthr value according to [[mscom_network_start#Adding tcpprobe traffic log]]
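
A sketch of the token bucket plus delay setup, following the common pattern of attaching tbf as a child of the netem qdisc (buffer sizes and device names are assumptions for this setup):

<code>
# eth1: 20ms delay, then shape to 10 MBit/s
sudo tc qdisc add dev eth1 root handle 1:0 netem delay 20ms
sudo tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 10mbit burst 10kb latency 70ms
# eth2: the same for the other direction
sudo tc qdisc add dev eth2 root handle 1:0 netem delay 20ms
sudo tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 10mbit burst 10kb latency 70ms
</code>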

I could not produce packet loss using netem and tbf traffic shaping on the same computer where iperf runs with a wired ethernet card. I guess this is due to internal flow control, i.e. no packets are dropped inside the kernel. I could however produce this situation when using a wireless connection.
  
  
  - The first flow should end while the second flow is still running (see the sketch after this list).
  - Produce a graph with cwnd and ssthr showing the dynamic adaptation of the rate
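
A hedged sketch of two overlapping iperf flows (the durations and the address are assumptions; the point is that the second flow is still running when the first one ends):

<code>
# on the transmitting laptop
iperf -c 192.168.0.2 -t 30 &   # first flow, ends after 30 seconds
sleep 10
iperf -c 192.168.0.2 -t 40 &   # second flow, keeps running after the first ends
</code>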
  
  