Bandwidth Management and Traffic Optimization

Bandwidth is a commodity in today's corporate world. Users demand ever more bandwidth while managers try to keep IT budgets under control. A few examples:

  • Web browsing that is not business-critical but can be bandwidth-intensive
  • Corporate web content that is bandwidth-intensive and must be pushed out to potential customers
  • Critical applications that need to run over the corporate VPN

The traditional answers to bandwidth shortages have been either to buy more bandwidth or to put up with slow speeds. With the first, users get the bandwidth they want, but at significant cost and financial burden. The second keeps the accountants satisfied, but customers are left with the feeling of oversubscribed telecommunication circuits. Either way, one side loses out.

Bandwidth management is the process of measuring and controlling communication (traffic, or packets) on a network link so that the link is not filled to or beyond capacity, which would cause congestion and poor performance. In other words, bandwidth management is the means of allocating bandwidth to the applications that matter on a network. Without it, a single application or user can consume all of the available bandwidth.

Typically, bandwidth management works by sorting outbound network traffic into classes by service and application type. Traffic is then scheduled according to the minimum and maximum bandwidth configured for each traffic class.
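
To make the idea concrete, the following Python sketch divides a link among traffic classes, each with a configured minimum guarantee and maximum cap. It is an illustration only; the class names, rates and the two-pass allocation are assumptions, not taken from any particular product.

    # Minimal sketch of class-based bandwidth allocation (illustrative only).
    # Each class first receives its configured minimum (or its demand, if
    # smaller); leftover capacity is then shared out, but no class may
    # exceed its configured maximum.

    LINK_CAPACITY_KBPS = 10_000  # hypothetical 10 Mbit/s link

    # class name -> (minimum guarantee, maximum cap) in kbit/s (hypothetical)
    CLASSES = {
        "voip": (1_000,  2_000),
        "vpn":  (3_000, 10_000),
        "web":  (1_000,  8_000),
        "bulk": (  500,  4_000),
    }

    def allocate(demand_kbps):
        """Split the link among classes given their current demand."""
        alloc = {}
        # Pass 1: satisfy each class's minimum (or its demand, if smaller).
        for name, (lo, hi) in CLASSES.items():
            alloc[name] = min(demand_kbps.get(name, 0), lo)
        remaining = LINK_CAPACITY_KBPS - sum(alloc.values())
        # Pass 2: hand out remaining capacity up to each class's maximum.
        for name, (lo, hi) in CLASSES.items():
            want = min(demand_kbps.get(name, 0), hi) - alloc[name]
            grant = min(max(want, 0), max(remaining, 0))
            alloc[name] += grant
            remaining -= grant
        return alloc

    print(allocate({"voip": 1_500, "vpn": 6_000, "web": 5_000, "bulk": 9_000}))

With the hypothetical demands above, VoIP and VPN traffic receive everything they ask for, web traffic gets part of the leftover capacity, and the bulk class is held to its minimum once the link is full.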

The need for Bandwidth Management

Bandwidth management is especially useful for corporate networks that rely on intranets. Intranets are used for information sharing and web navigation, both of which place a high demand on bandwidth. Simply adding connections does not address the problem, because adequate availability is still not assured. Nearly every network link is shared by more than one user, which means the available bandwidth must be divided among all users and applications.

Using bandwidth management to allocate bandwidth to users during peak periods can prevent congestion on the network. Temporary congestion can be relieved this way; if congestion is continuous, however, upgrading the affected links to a higher capacity is the better course.

Unified Bandwidth Management

Unified Bandwidth Management (UBM) is a term coined for a new trend in network management. UBM appliances consolidate multiple network management and control functions, including network load balancing with built-in DNS, network redundancy, packet/traffic shaping, and tunnel reliability and optimization. Some appliances also include network monitoring functions for tracking server status and reporting on network utilization. Traditionally these functions were handled by multiple appliances or software-based systems; UBM devices combine them and have emerged as the next step in network management and control solutions.

Increasingly, organizations turn to UBM to address growing demands, including rising bandwidth requirements and an expanding remote user base.

How can organizations use UBM?

Acceleration of critical applications

UBM accelerates the applications that drive the business, or those that are most commonly used. This involves partitioning the link to guarantee bandwidth for mission-critical applications.

Control end-user bandwidth access

UBM uses its monitoring system to reduce speeds for non-critical users and to free up bandwidth for the applications that need it most. In practice this means applying rate limits to non-critical traffic.
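
A minimal sketch of the kind of rate limit such a system might apply, written in Python with a simple token bucket. The rates, burst size and the notion of a "critical" flag are assumptions for illustration, not the behaviour of any specific UBM product.

    import time

    class TokenBucket:
        """Simple token-bucket rate limiter (illustrative sketch)."""

        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s   # sustained rate
            self.burst = burst_bytes       # maximum burst size
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            """Return True if a packet of this size may be sent now."""
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False

    # Hypothetical policy: non-critical users are capped at 256 kB/s.
    limiters = {}

    def shape(user, packet_bytes, critical):
        if critical:
            return True          # critical traffic is not rate-limited here
        bucket = limiters.setdefault(user, TokenBucket(256 * 1024, 64 * 1024))
        return bucket.allow(packet_bytes)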

Inexpensive bandwidth multiplier

An effective increase in usable bandwidth is another benefit, achieved with very little impact on the network infrastructure itself. Bandwidth is reallocated seamlessly, without introducing new protocols or coordinating with the ISP.

Network redundancy

Unified Bandwidth Management also combines network redundancy with bandwidth management. The Edge platform automatically provides redundancy across each link, ensuring continuous WAN connectivity in case of a network outage, in addition to the increased speed delivered by multiple WAN links.
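
The sketch below is not the Edge platform's behaviour but a generic Python illustration of the underlying idea: probe each WAN link and fail over to the next healthy one. The link addresses, probe host and probing method are assumptions.

    import socket

    # Hypothetical WAN links, each identified by the local address assigned
    # to it (addresses made up for illustration).
    WAN_LINKS = ["198.51.100.2", "203.0.113.7"]

    def link_is_up(source_ip, probe_host="example.com", probe_port=80, timeout=2.0):
        """Probe a link by opening a TCP connection bound to its address."""
        try:
            sock = socket.create_connection((probe_host, probe_port),
                                            timeout=timeout,
                                            source_address=(source_ip, 0))
            sock.close()
            return True
        except OSError:
            return False

    def pick_active_link(preferred=WAN_LINKS):
        """Return the first healthy link; fail over to the next on outage."""
        for link in preferred:
            if link_is_up(link):
                return link
        return None  # total WAN outage

    print("active WAN link:", pick_active_link())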

Advantages & Disadvantages

The advantages of using UBMs are

  • It provides a single-point solution for load balancing, tunnel optimization and traffic-shaping needs, which makes management easier for IT personnel.
  • Many UBM vendors sell their products with unlimited user licenses, which can be a major saving when deciding which product to purchase.

The disadvantage of using UBM

  • The UBM appliance itself can fail. For this reason, most UBM appliances include built-in high availability (HA) features that provide instant failover to a secondary or backup unit if the primary unit stops working; the need for this extra hardware is counted as the disadvantage.

The Evolution

Since the earliest days of communications technology, data has been delivered over shared networks. The public switched telephone network (PSTN) was built with more access points than actual switching capacity. Operators designed their networks around peak usage and developed the mathematics (e.g. the Erlang distribution) to dimension them for peak periods.
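
One well-known result from that body of work is the Erlang B formula, which gives the probability that a call is blocked when a given offered load shares a fixed number of circuits. The short Python calculation below uses the standard recurrence; the traffic figures in the example are hypothetical.

    def erlang_b(offered_erlangs, circuits):
        """Blocking probability via the standard Erlang B recurrence:
        B(E, 0) = 1,  B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1))."""
        b = 1.0
        for k in range(1, circuits + 1):
            b = offered_erlangs * b / (k + offered_erlangs * b)
        return b

    # Hypothetical exchange: 90 erlangs of peak traffic offered to 100 trunks.
    print(f"blocking probability: {erlang_b(90, 100):.4f}")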

The PSTN handled demand through call admission control: a call was admitted to the network only if end-to-end capacity existed to handle it. While this model was perfectly acceptable for circuit-switched voice calls, it does not hold up for modern networks that must carry today's mix of voice and data.

Data access followed the evolution

Data access brought its own complexities to the network. Access to data or network resources was no longer driven purely by human actions such as placing a phone call, and the number of voice and data destinations and paths grew enormously.

Managing the data network became harder as applications converged onto it. Networks experienced the strain of triple-play: converged voice, data and video delivered over one pipe to a huge number of destinations.

New applications with different quality and timeliness requirements added yet another layer of complexity. The admission-control model was no longer sufficient, and the end-to-end network changed drastically as mobile IP and other applications competed for the same limited bandwidth.

Quality of Service (QoS)

Quality of Service is a defined measure of performance in a data communications system; an example is ensuring that real-time voice and video are delivered without disruptions such as annoying blips. A traffic contract is agreed between the customer and the network provider that guarantees a minimum bandwidth and a maximum delay, measured in milliseconds.

Because dedicated channels were set up between parties, the plain old telephone system (POTS) delivered the highest quality of service for many years. When data is broken into packets that travel through routers in a LAN or WAN, QoS mechanisms are used to give real-time data such as voice over IP (VoIP) priority over non-real-time data such as file downloads. Another choice in packet switching is to overbuild the network so that it can carry all the traffic fed into it.

Note: packet switching refers to a network technology that divides a message into small packets for transmission. Each packet in a packet-switched network carries a destination address, so the packets of one message do not all have to travel the same path; the destination computer reassembles them in the correct order. Packet switching makes the best use of the bandwidth available in a network and keeps latency (the delay between a request to read or write data at a given location and the start of the actual transfer) low.
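
To make the prioritization concrete, here is a minimal Python sketch of a strict-priority scheduler that always forwards queued VoIP packets before streaming packets, and streaming before bulk downloads. The class names and packet labels are assumptions for illustration.

    from collections import deque

    # Queues ordered from highest to lowest priority (illustrative classes).
    PRIORITY_ORDER = ["voip", "streaming", "bulk"]
    queues = {name: deque() for name in PRIORITY_ORDER}

    def enqueue(traffic_class, packet):
        queues[traffic_class].append(packet)

    def dequeue():
        """Always serve the highest-priority non-empty queue first."""
        for name in PRIORITY_ORDER:
            if queues[name]:
                return name, queues[name].popleft()
        return None

    enqueue("bulk", "iso-chunk-1")
    enqueue("voip", "rtp-frame-17")
    enqueue("streaming", "video-segment-3")
    print(dequeue())   # ('voip', 'rtp-frame-17') is sent before anything else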

As traffic conditions change, packets can be dynamically routed through different paths in the network and may even arrive out of order. Network protocols such as IP and IPX were designed for packet-based networks, and today almost everything is built around IP. A variety of competing methods exist to provide QoS in IP networks. ATM, one of the first packet technologies, was designed with classes of service built in.

QoS vs. CoS

Quality of service refers to the mechanisms in network software that determine which packets are prioritized, whereas class of service refers to feature sets, or groups of services, assigned to users according to company policy. If a feature set includes priority transmission of data, that class of service ends up being carried out by quality-of-service functions within the routers and switches the traffic passes through.

Traffic management

In the early days of consumer data access, traffic optimization was not a high priority. Congestion did occur, but it was limited by the number of ports and the switching capacity of the PSTN.

Over time, as dial-up access matured, subscribers increasingly tied up PSTN lines with their modems, causing contention for dial-up equipment and phone switches. Content moved from proprietary forums to the web, and broadband emerged as a new way to increase capacity and lower costs for providers.

Soon, tech-savvy users with fast computer hardware learned how to make digital copies of music and rip it from compact discs. Music sharing then entered the digital age with so-called peer-to-peer (P2P) technology, which powered free music-sharing services like Napster. Other file-sharing networks such as WinMX, Kazaa and Gnutella appeared and became popular, and subscriber bandwidth consumption soared.

Broadband service providers added access capacity to meet subscriber growth and implemented access limits on the TCP port numbers used by these bandwidth-intensive applications.

A continuing process

Millions of new subscribers around the globe connect to the Internet, drawn by popular applications such as P2P file sharing, online gaming, digital media like YouTube and voice-over-IP (VoIP) services like Skype, all of which drive bandwidth consumption higher.

As a growing number of applications, each with its own characteristics and delivery requirements, competed for the available bandwidth, packets were dropped and quality of service (QoS) suffered. As a result, a small number of users could cause quality problems for a very wide range of applications.

With quality of service under threat, service providers invested heavily in intelligent network tools to gain a better understanding of application traffic and subscribers. Network intelligence was the first step in balancing competing network demands and establishing reasonable network management practices.

Models of traffic optimization

As broadband adoption continued to grow globally, service providers began to leverage their policy-management infrastructure to improve operational efficiency in areas such as network security. Traffic optimization remained static as long as the service provider provisioned enough capacity for consumers to access the content they desired.

The rise of mobile data, however, meant that costly and far scarcer access resources were being shared by a large and varied user base. Service providers responded by adding user-based management technologies to ensure fairness and provide a stable quality of experience (QoE) for all users, which is essential for running the business and competing in the market.

Traffic Optimization models currently in use

Application-based

This type of traffic optimization uses the properties of each network protocol to provide the minimum bandwidth that assures acceptable quality. Bulk file-transfer applications are given the lowest priority because they are non-interactive and long-lived. For example, a one-way, bulk, non-interactive application such as a file download gets low priority, one-way streaming media such as YouTube comes next, and an interactive application such as VoIP gets the highest priority.

Prioritization matters most when the network becomes heavily congested, because that is when applications degrade if they are not prioritized. Application-based optimization delivers excellent overall quality and subscriber satisfaction.
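
A small Python sketch of the classification step described above, mapping observed application properties to a priority tier (file download to low, one-way streaming to medium, VoIP to high). The property names are assumptions chosen for the example.

    # Illustrative mapping from application properties to a priority tier.
    def classify(interactive, realtime, bulk_transfer):
        if interactive and realtime:
            return "high"     # e.g. VoIP
        if realtime:
            return "medium"   # e.g. one-way streaming media
        if bulk_transfer:
            return "low"      # e.g. file download
        return "medium"       # default for everything else

    print(classify(interactive=False, realtime=False, bulk_transfer=True))   # low
    print(classify(interactive=False, realtime=True,  bulk_transfer=False))  # medium
    print(classify(interactive=True,  realtime=True,  bulk_transfer=False))  # high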

User-based

This model of traffic optimization measures usage over short time periods and gives the service provider a powerful tool for ensuring dependable quality on an individual-subscriber basis. A strictly user-based model, however, can be unfair to many users, because their traffic is treated indiscriminately regardless of the application in use. A better solution is to combine the application- and user-based models, letting users keep their overall bandwidth behaviour while controlling which applications are affected during congestion.

Application- and user-based

The application- and user-based method gives both the service provider and the end-user a say in bandwidth access. The service provider enforces the user-to-user allocation, and the end-user controls how their own traffic behaves within that allocation. For example, one user might prioritize their VPN access over HTTP, while another selects online gaming as their top priority. During network congestion, the application- and user-based model ensures that one end-user's prioritized application does not impact another's.
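
As a sketch only, the following Python snippet shows the two-level idea: the provider fixes an equal per-user allocation, and each user's own application weights decide how that allocation is split during congestion. The user names, applications, weights and capacities are hypothetical.

    # Two-level sharing sketch: the provider splits capacity equally between
    # users; each user's own weights split their share between applications.
    LINK_CAPACITY_KBPS = 20_000

    USER_APP_WEIGHTS = {
        "alice": {"vpn": 3, "http": 1},     # Alice favours her VPN
        "bob":   {"gaming": 4, "p2p": 1},   # Bob favours online gaming
    }

    def allocate(capacity_kbps=LINK_CAPACITY_KBPS):
        per_user = capacity_kbps / len(USER_APP_WEIGHTS)    # provider's level
        plan = {}
        for user, weights in USER_APP_WEIGHTS.items():      # user's level
            total_weight = sum(weights.values())
            plan[user] = {app: per_user * w / total_weight
                          for app, w in weights.items()}
        return plan

    for user, apps in allocate().items():
        print(user, {app: f"{kbps:.0f} kbps" for app, kbps in apps.items()})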

This traffic optimization model increases subscriber satisfaction by offering a personalized service and giving end-users more control over their priorities. It might be implemented as an allotment of QoS points or presented as a web page, assigning a particular weighting per application or per application class. No change to billing plans is required, which makes the service realistic given today's technology and the level of consumer education.

This model is the most favourable because it provides network-neutral, consumer-transparent bandwidth sharing.

Future trends

Internet traffic optimization has come a long way from the early years of dial-up access, in demand as well as in complexity. End-user controls enforce fairness between users while giving subscribers the ability to prioritize their own applications as they see fit, effectively removing any bias the service provider might impose on particular applications.

Once traffic optimization reaches the stage where the needs of the operator and the needs of the end-user are effectively balanced, it will evolve once again. The new model may resemble an economic free market that guarantees fairness by aligning every party's interests. The key factor in determining the best possible network solution in all situations is transparency where quality of experience is concerned.