In general terms, overhead means anything extra that should not be there. So what is overhead in networking, and how does it affect overall network performance? It is worth understanding something that most of us deal with in everyday life, knowingly or unknowingly.
Ethernet, the technology we use most to transfer data over networks, is the main focus of this paper's overhead measurements. We all know the bandwidth we get from an ISP is what we paid for, but are we able to use the full bandwidth? No, we cannot, but why? This paper gives a general overview of some networking terms along with details about network overhead. It also answers the questions raised above through analysis and experiments. This paper may help us understand some critical factors that must be accounted for during network design in order to achieve optimal network performance.
Overhead is an important term in networking for design, implementation, and performance. Understanding overhead properly is basic to understanding the methodology employed by various technologies to get information from one place to another, and the cost involved. According to PC Magazine, overhead is the amount of processing or transmitting time used by system software, a database manager, or network protocols that transmit additional codes in order to control and manage data transfer over the network (PC Magazine).
Keep in mind that when assessing the performance of networks there is always a difference between theoretical speed ratings and real-world throughput. In a well-designed network that difference should be relatively small, though still significant; otherwise it can be extremely large and far from negligible from the perspective of network performance. As a networking student, my job in the real world should involve measuring network performance by calculating overhead precisely and extensively.
This paper covers the basics of overhead in TCP/IP-based networks by briefly presenting a theoretical and analytical performance analysis of a real-world scenario conducted in a lab environment. It describes some networking terminology, including network performance requirements and performance impact factors, in both theoretical and real-world settings. The paper is broken down into five sections with a few subsections.
The first section presents some critical issues that establish why overhead is a concern in network design. Section two briefly describes networking terminology that may help the reader better understand the terms and analysis provided in this paper. Section three examines the relationship between overhead and network application types and their behavior, with details and simple analysis. Section four describes the conducted experiment: its design procedure, experimental results, and data analysis.
WHY IS OVERHEAD AN IMPORTANT FACTOR IN NETWORK DESIGN?
Overhead is simply the difference between what a network or communication method is supposed to be able to do and what it actually does. In a connection-oriented TCP/IP-based packet network, during the course of an IP packet's journey from the transmitting end to the receiving end, IP packets are encapsulated into and de-encapsulated out of framing headers and trailers that define how the packet will make its way to the next hop in the path.
So every network has some degree of normal overhead needed to establish and maintain connections, which guarantees that we will never be able to use all of a connection's bandwidth for data transmission. For example, on a 10 Mbps Ethernet connection the line may be able to transmit 10 million bits every second, but not all of those bits are data! Some of those bits are used for addressing and control purposes, because we cannot just throw raw data onto the network.
Also, many of those bits are used for general overhead activities: SYN and ACK exchanges, collisions on transmission, error checking, retransmission, and so on. Beyond those, several other issues greatly impact network performance, such as hardware and software; overhead exists at every layer, from the application and operating system down to the hardware configuration. Among the most visible and familiar issues are the ability of the hardware to process the data and the bandwidth limitations that exist along the chain of data transmission.
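The framing cost described above can be sketched with a back-of-the-envelope calculation. This is a rough sketch using commonly cited sizes for classic Ethernet with IPv4/TCP (the specific byte counts are assumptions, not figures from this paper's experiment):

```python
# Rough sketch: per-frame overhead on classic Ethernet (assumed sizes).
# A frame carrying a TCP/IP payload spends bytes on the preamble, headers,
# trailer, and inter-frame gap before any application data moves.

PREAMBLE = 8        # preamble + start-of-frame delimiter
ETH_HEADER = 14     # destination MAC, source MAC, EtherType
IP_HEADER = 20      # IPv4 header without options
TCP_HEADER = 20     # TCP header without options
FCS = 4             # frame check sequence (CRC) trailer
IFG = 12            # inter-frame gap, expressed in byte times

def efficiency(payload_bytes):
    """Fraction of wire time that carries application data."""
    on_wire = (PREAMBLE + ETH_HEADER + IP_HEADER + TCP_HEADER
               + FCS + IFG + payload_bytes)
    return payload_bytes / on_wire

# Large frames amortize the fixed overhead; tiny frames mostly carry headers.
print(f"1460-byte payload: {efficiency(1460):.1%} efficient")
print(f"   1-byte payload: {efficiency(1):.1%} efficient")
```

The point of the sketch is the shape of the result, not the exact numbers: the fixed per-frame cost means small payloads spend almost all of their wire time on overhead.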
Bandwidth limitations cause network throughput issues because the entire network can only run as fast as its slowest link. These bottlenecks reduce network performance. Another important factor is asymmetry, in which Internet access offers higher bandwidth in one direction than the other. This was developed around common user behavior: people download more from the Internet than they upload to it. From a network design perspective it is important to know the speed rating for both directions. As a practical example, I used the web-based network speed test application from www.speedtest.org and conducted five different tests from my home router to five different servers located in five different states.
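The "slowest link" observation above is easy to state as a one-line calculation. The link names and speeds below are purely hypothetical, chosen only to illustrate the bottleneck idea:

```python
# Simple illustration: end-to-end throughput is capped by the slowest link
# in the path (hypothetical link speeds, in Mbps).
path = {
    "home router -> modem": 100,
    "modem -> ISP": 25,          # the bottleneck in this made-up path
    "ISP backbone": 10_000,
}
bottleneck = min(path.values())
print(f"Best-case end-to-end throughput: {bottleneck} Mbps")
```

No matter how fast the backbone is, the 25 Mbps hop caps what the end user can ever see, before any protocol overhead is even counted.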
The bar graph in screen shot-1 displays download speed in blue and upload speed in yellow. Screen shot-1: Download and upload speed comparisons. Screen shot-2, on the next page, displays the distance between my home router and the connected servers located in five different states. This screen shot also illustrates why and how overhead affects networks.
If we analyze the data in that image, we would expect the server located at the greatest distance to have the highest latency and the lowest download and upload speeds. But that is not what the highlighted part of the image shows, right? The server located in PA is 100 miles closer than the server located in NY, yet the PA server shows higher latency and lower download and upload speeds. Bandwidth and latency comparison in terms of distance: what is causing the speed to drop? Why does it behave completely opposite to the theory?
SOME COMMON NETWORK TERMINOLOGY
Metrics are used to measure aspects of network and protocol performance. The values for such metrics in various scenarios indicate the level of performance of a network application. This section defines terms and metrics used industry-wide for measuring network application performance. These terms and metrics are used throughout this paper.
Bandwidth is simply a measure of how fast data is transferred on our network. Think of water flowing through a pipe: the wider the inside of the pipe, the more water can get through it. Bandwidth is a measure of the diameter of this pipe; it represents the overall capacity of the connection and is measured in bits per second (bps). Bandwidth can be stated in terms of actual or theoretical throughput. For example, traditional Ethernet networks can theoretically support 10 Mbps or more, but the full theoretical throughput cannot be achieved due to overhead in the computer hardware and operating system.
For easier understanding, I used a simple calculation after manually configuring the NIC speed to 10 Mbps in full-duplex mode to reduce collisions. From screen shot-3 on the next page, we can see that the computer measured data in bytes rather than bits. To estimate the time it takes to transfer 110 KB of data over the network, first we added 20% overhead, i.e., 10 bits per byte instead of 8 bits per byte, then converted bytes into bits (10 × 110 KB = 1,100 Kbits). My NIC can transmit 10 Mbps, or 10,000 Kbps (assuming only data transfer is occurring), so it will take 1,100 Kbits / 10,000 Kbps = 0.11 seconds to transfer the document.
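The back-of-the-envelope estimate above can be reproduced directly in code, using the same assumption the text makes (a flat 20% overhead, modeled as 10 bits on the wire per 8-bit byte of data):

```python
# Reproducing the paper's transfer-time estimate for a 110 KB file
# over a 10 Mbps link, with an assumed 20% protocol overhead.

FILE_KB = 110            # file size in kilobytes
BITS_PER_BYTE = 10       # 8 data bits + ~20% overhead, per the text
LINK_KBPS = 10_000       # 10 Mbps NIC expressed in kilobits per second

wire_kbits = FILE_KB * BITS_PER_BYTE   # 1,100 Kbits on the wire
transfer_s = wire_kbits / LINK_KBPS    # 0.11 seconds
print(f"{FILE_KB} KB takes about {transfer_s:.2f} s at {LINK_KBPS} Kbps")
```

Changing `BITS_PER_BYTE` back to 8 shows how much of the transfer time is pure overhead under this simple model.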
According to About.com, the term latency refers to any of several kinds of delays typically incurred in the processing of network data. A low-latency network connection such as Ethernet generally experiences smaller delays than a high-latency connection such as satellite Internet (Mitchell). Excessive latency creates bottlenecks that prevent data from filling the network pipe, thus decreasing effective bandwidth.
ROUND TRIP TIME (RTT)
RTT is the time in milliseconds for a request to make a trip from a source host to a destination host and back again. Lower values indicate better performance. Forward and return path times are not necessarily equal. Ping is a common tool for measuring round-trip time. RTT values are affected by network infrastructure, distance between nodes, network conditions, and packet size. Packet size, congestion, and payload compressibility impact RTT when measured on slow links, such as dial-up connections. Other factors also affect RTT, including forward error correction and data compression, which introduce buffers and queues that increase RTT and decrease performance.
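A crude RTT estimate can be taken without ICMP ping at all, by timing how long a TCP connection takes to establish. This is a sketch of my own, not part of the paper's experiment: the handshake time approximates one round trip plus connection-setup cost, so it slightly overstates the true RTT.

```python
# Rough RTT probe: time the TCP three-way handshake to a host.
# This is an approximation, not a true ICMP ping.
import socket
import time

def tcp_rtt_ms(host, port=80, timeout=2.0):
    """Return the TCP connect time to (host, port) in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000

# Example (requires network access to the hypothetical target):
# print(f"RTT to example.com: {tcp_rtt_ms('example.com'):.1f} ms")
```

Repeating the probe several times and taking the minimum gives a more stable figure, since individual samples are inflated by transient queuing delays.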
Network throughput refers to the volume of data that can flow through a network. It is constrained by factors such as the network protocols used, the capabilities of routers and switches, and the type of cabling, such as Ethernet or fiber optic. Throughput in wireless networks is further constrained by the capabilities of NICs on client systems. This section describes the application types most commonly used for network data transmission.
It also explains TCP/IP's relationship with overhead. There are two fundamental types of network applications: transactional and streaming, also called interactive and batch-processing applications, respectively. According to the Windows Development Center, transactional applications are stop-and-go applications. They usually perform request/reply operations, often ordered. Examples of transactional applications include synchronous remote procedure calls (RPC), as well as some HTTP and Domain Name System (DNS) implementations (Recognizing Slow Applications).
Streaming applications move data. To use a motoring parallel, streaming applications adhere to a pedal-to-the-metal data transmission philosophy, usually with little concern for data ordering. Examples of streaming applications include network backup and the File Transfer Protocol (FTP). Transactional applications are affected by the overhead required for connection establishment and termination. For example, each time a connection is established on an Ethernet network, three packets of approximately 60 bytes each must be sent, and approximately one RTT is required for the exchange.
When a connection is terminated, four packets are exchanged. This happens for every connection, so an application that opens and closes connections frequently generates this overhead on each occurrence. Another source of overhead is TCP/IP itself, which has characteristics that enable the protocol to operate as its standardized implementation requirements dictate. A TCP/IP optimization called the Nagle algorithm can also limit data transfer speed on a connection. The Nagle algorithm is designed to reduce protocol overhead for applications that send small amounts of data, such as Telnet, which sends a single character at a time.
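Applications that need many small writes to go out immediately commonly trade the Nagle algorithm's header savings for lower latency by setting the standard `TCP_NODELAY` socket option. A minimal sketch:

```python
# Sketch: disabling the Nagle algorithm on a TCP socket.
# Nagle batches small writes into fewer segments (less header overhead,
# more latency); latency-sensitive transactional applications often
# turn it off with the TCP_NODELAY option.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

# Verify the option took effect on this socket.
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```

This is exactly the trade-off the text describes: with Nagle on, Telnet-style one-character writes are coalesced to amortize the roughly 40 bytes of TCP/IP headers each tiny segment would otherwise carry; with it off, each write is sent as soon as possible.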
After defining overhead, this paper has explained how overhead relates to the network and how it affects network performance. Throughout the paper, examples were used to demonstrate the existence of network overhead and to explain why it is an important factor in network performance.
In short, several applications and protocols were examined to demonstrate their relationship with overhead, and the relevant OSI layer context was explained. Important networking terms such as bandwidth and latency were defined and demonstrated to establish their relationship with overhead. At the end of the paper, a brief but detailed analysis provides the most important information about Ethernet networks using a real-world setup and analytical tools. The results of this experiment demonstrate that overhead is a vital factor in network design, implementation, and performance.
The experiment also helps us understand the different methodologies employed by various technologies for data transmission over the network. Its details reveal the core factors, as well as the general factors, that need to be considered in order to design a network properly.
- Mitchell, B. (n.d.). Network Bandwidth and Latency. Retrieved April 2012, from About.com: http://compnetworking.about.com/od/speedtests/a/network_latency.htm
- Network Switching Tutorial. (n.d.). Retrieved April 2012, from www.technick.net: http://www.technick.net/public/code/cp_dpage.php?aiocp_dp=guide_networking_switching
- PC Magazine. (n.d.). Overhead. Retrieved from http://www.pcmag.com/encyclopedia_term/0,1233,t=overhead&i=48685,00.asp
- Theoretical Speed vs. Practical Throughput. (n.d.). Retrieved April 2012, from www.appleinsider.com: http://www.appleinsider.com/articles/08/03/28/exploring_time_capsule_theoretical_speed_vs_practical_throughput.html