The APDCAM-10G is a camera system designed for ultra-high-speed digital imaging applications in the visible wavelength range, where a low spatial resolution is acceptable. The sensor of the camera is a two-dimensional Avalanche PhotoDiode (APD) array. The detector is scalable up to 128 channels and consists of one to four single detector arrays of 4x8 channels each. The output of each diode is amplified by a low-noise amplifier chain and digitised inside the camera. The resulting digital data stream is organised into UDP packets and transmitted to a PC over a digital optical fibre link using the 10 Gigabit Ethernet protocol. The UDP stream is produced by a dedicated 10G Ethernet controller card (Adimtech Communication and Control card) using a built-in FPGA.

The aim of this report is to characterise the maximum data flow that can be saved into PC memory over 10G Ethernet. The UDP protocol contains no acknowledgement feedback, so it is not reliable; it is used mainly for audio and video streaming. The expected data flow from the APD camera is 8.96 Gbit/s (128 channels x 5 MS/s x 14 bit). Note that the aim of this test is not to test Ethernet network facilities (routers, switches) and their effect on the observed bandwidth, but to identify the achievable data rate between directly connected computers or other 10G devices. At this stage of development only one APDCAM-10G is planned to be connected to one DAQ PC, and no other network devices are installed in the communication line. The physical layer of the communication is practically a single duplex optical fibre.

2 Test method

As a first approach, two PCs were connected directly. The IP addresses were set manually to the same subnet. Although the camera streams UDP packets, in the tests we often used the TCP protocol, as it is reliable.
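The expected data rate quoted above follows directly from the channel count, sampling rate and sample width; a quick sanity check of the arithmetic:

```python
# Expected APDCAM-10G raw data rate: channels x sampling rate x sample width.
channels = 128
samples_per_sec = 5_000_000   # 5 MS/s per channel
bits_per_sample = 14

rate_bps = channels * samples_per_sec * bits_per_sample
print(f"{rate_bps / 1e9:.2f} Gbit/s")  # 8.96 Gbit/s
```

Note that this is the raw payload rate; Ethernet, IP and UDP headers add a few percent of overhead on top of it.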
The transfer rate measured over a TCP connection is a lower limit for the UDP packet transfer rate, as TCP acknowledgements and the resending of lost packets cause traffic overhead compared to UDP communication.

Special notes: switch off every firewall and packet-filter application! It is important to install the 10 Gb card into a PCI-E slot running in at least 8x mode.

3 Test hardware

At this stage the Adimtech C&C card is still in the development phase and not available for tests. An alternative way to perform these tests is to use two PCs with 10G Ethernet cards and connect them directly with optical fibres. The details of the test hardware are shown below:

PC1
RAM: 8 GB DDR3
CPU: Intel Core i5-2400 3.1 GHz
MB: ASUS P8P67 Deluxe
OS: Fedora 17, Windows 7 Professional
NIC: Myricom Myri-10G-PCIE-8A-R (PCI-E x8)

PC2
RAM: 8 GB DDR3
CPU: Intel Core i5-2400 3.2 GHz
MB: ASUS P8P67 Deluxe
OS: Ubuntu 10.04, Windows 7 Professional
NIC: Myricom Myri-10G-PCIE-8A-R (PCI-E x8)

4 Test results

The measured data flow depends strongly on the hardware and software environment. Both Linux and Windows operating systems were tested. The free traffic analyser Iperf was used on both operating systems. This program is capable of measuring maximum TCP and UDP bandwidth performance; it allows the tuning of various parameters and UDP characteristics, and reports bandwidth, delay jitter and datagram loss.

4.1 Linux OS

For optimal performance the network buffer parameters should be increased before using the 10G network. Add the following lines to /etc/sysctl.conf:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 250000

and execute the command

sysctl -p /etc/sysctl.conf

The first test runs the server and the client on the same PC to test the PCI-E bandwidth.

PC1: iperf -c 127.0.0.1 -P 1 -i 1 -p 5001 -f M -t 10
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 0.16 MByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 50903 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer      Bandwidth
[ 3]  0.0- 1.0 sec   5086 MBytes   5086 MBytes/sec
[ 3]  1.0- 2.0 sec   5145 MBytes   5145 MBytes/sec
[ 3]  2.0- 3.0 sec   5144 MBytes   5144 MBytes/sec
[ 3]  3.0- 4.0 sec   5127 MBytes   5127 MBytes/sec
[ 3]  4.0- 5.0 sec   5124 MBytes   5124 MBytes/sec
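The sysctl settings above only raise the kernel-wide limits; the receiving application must still request a large buffer on its own socket, or incoming datagrams will be dropped during bursts. A minimal sketch of such a request (the port number is arbitrary, not the camera's actual stream port):

```python
import socket

BUF_SIZE = 16 * 1024 * 1024  # 16 MB, matching net.core.rmem_max above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel for a 16 MB receive buffer; the grant is capped by rmem_max.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
sock.bind(("0.0.0.0", 5001))

# Read back what the kernel actually granted (Linux reports twice the request).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"granted receive buffer: {granted} bytes")
sock.close()
```

If the granted value stays small despite the setsockopt call, the rmem_max limit in /etc/sysctl.conf has not been applied.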
The TCP test was successful without changing the default settings of Iperf. With a UDP connection the bandwidth was around 8 Gbit/s with default settings, so it was necessary to increase the UDP buffer size to 16 MB. To maximise the throughput of the protocol, the biggest available packet size was set. With these parameters the desired bandwidth was reached using the UDP protocol.

4.2 Windows OS

The first test runs the server and the client on the same PC to test the PCI-E bandwidth.

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
local 127.0.0.1 port 5001 connected with 127.0.0.1 port 1149
[ ID] Interval       Transfer      Bandwidth
      0.0-10.0 sec   7.23 GBytes   6.21 Gbits/sec

C:\Users\Pfolab\Desktop>iperf -c 127.0.0.1 -P 1 -i 1 -p 5001 -f M -t 10
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 0.01 MByte (default)
------------------------------------------------------------
local 127.0.0.1 port 1149 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer      Bandwidth
      0.0- 1.0 sec   726 MBytes    726 MBytes/sec
      1.0- 2.0 sec   745 MBytes    745 MBytes/sec
      2.0- 3.0 sec   718 MBytes    718 MBytes/sec
      3.0- 4.0 sec   743 MBytes    743 MBytes/sec
      4.0- 5.0 sec   750 MBytes    750 MBytes/sec
      5.0- 6.0 sec   730 MBytes    730 MBytes/sec
      6.0- 7.0 sec   747 MBytes    747 MBytes/sec
      7.0- 8.0 sec   745 MBytes    745 MBytes/sec
      8.0- 9.0 sec   749 MBytes    749 MBytes/sec
      9.0-10.0 sec   753 MBytes    753 MBytes/sec
      0.0-10.0 sec   7404 MBytes   740 MBytes/sec

TCP test

PC1: Server
iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
local 220.127.116.11 port 5001 connected with 18.104.22.168 port 1153
[ ID] Interval       Transfer      Bandwidth
      0.0-10.0 sec   6.70 GBytes   5.76 Gbits/sec
PC2: Client
iperf -c 22.214.171.124 -P 1 -i 1 -p 5001 -w 1280K -f M -t 10
------------------------------------------------------------
Client connecting to 126.96.36.199, TCP port 5001
TCP window size: 1.25 MByte
------------------------------------------------------------
local 188.8.131.52 port 1153 connected with 184.108.40.206 port 5001
[ ID] Interval       Transfer      Bandwidth
      0.0- 1.0 sec   686 MBytes    686 MBytes/sec
      1.0- 2.0 sec   684 MBytes    684 MBytes/sec
      2.0- 3.0 sec   685 MBytes    685 MBytes/sec
      3.0- 4.0 sec   685 MBytes    685 MBytes/sec
      4.0- 5.0 sec   688 MBytes    688 MBytes/sec
      5.0- 6.0 sec   685 MBytes    685 MBytes/sec
      6.0- 7.0 sec   684 MBytes    684 MBytes/sec
      7.0- 8.0 sec   686 MBytes    686 MBytes/sec
      8.0- 9.0 sec   684 MBytes    684 MBytes/sec
      9.0-10.0 sec   699 MBytes    699 MBytes/sec
      0.0-10.0 sec   6865 MBytes   685 MBytes/sec

The UDP test shows results similar to the TCP connection. According to the manufacturer of the 10 Gb network cards, Microsoft's desktop operating systems (Windows XP, Vista and 7) are not ready to serve 10 Gb network devices; only the server operating systems (Windows Server 2003, 2008 and 2008 R2) can benefit from 10G Ethernet devices. Measurements on a Windows server OS are planned for later.

5 Conclusion

The measured data flow depends strongly on the hardware and software environment. For any application development a 64-bit OS is strongly suggested, as the memory usage can be very high. The 8x PCI-E bus was capable of serving the 10 Gb cards, so it will not limit the performance. As the test results show, only the Linux desktop operating system is ready to handle 10 Gb networks; a further test should be done with a Microsoft server operating system to assess its capabilities. According to the results, the first version of the DAQ software is suggested to be developed for a 64-bit Linux OS.
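The conclusion rests on the gap between the measured Windows figures and the camera's requirement. Converting Iperf's MBytes/sec output (with -f M, 1 MByte = 1024^2 bytes) to Gbit/s makes the shortfall explicit:

```python
# Windows 7 loopback average from the Iperf run above.
mbytes_per_sec = 740                           # iperf -f M reports MiB/s
gbit_per_sec = mbytes_per_sec * 1024**2 * 8 / 1e9
print(f"measured: {gbit_per_sec:.2f} Gbit/s, required: 8.96 Gbit/s")
# measured: 6.21 Gbit/s -- well below the 8.96 Gbit/s camera stream
```

The same conversion applied to the Linux UDP result (above 8.96 Gbit/s after buffer tuning) is what justifies the choice of Linux for the DAQ software.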