Edge provider

From CAFEDU.COM


The English-language reference material being annotated comes from TechTarget definitions

---


In communication services, an edge provider is a website, web service, web application, online content hosting or online content delivery service that customers connect to over the internet. Edge providers, which include Google, Amazon, Netflix and Facebook, use the customer's internet service provider (ISP) to deliver content.

In the United States, edge providers are regulated by the Federal Trade Commission (FTC). The choice of the word edge is intended to differentiate content, application and web service providers, who operate at the edge of a network, from the companies that provide the internet's core infrastructure.

  • Customer data management and protection of personally identifiable information (PII) is an important aspect of the debate that surrounds the concept of Net Neutrality. If ISPs are categorized as edge providers or contract carriers instead of as common carriers, they are allowed to share and/or sell customer data as long as the customer has opted in.
  • Another important aspect of the Net Neutrality debate is whether or not broadband carriers can block or degrade service for certain edge providers or charge higher fees for prioritizing an edge provider's content delivery. Critics have argued that paid prioritization places unnecessary financial stress on edge providers and allows ISPs to become powerful gatekeepers who can control and manipulate the free market.



Traffic engineering the service provider network

Traffic engineering distributes bandwidth load across network links. Learn about the evolution of traffic engineering and its role in networks transitioning from Layer 2 to IP technology. Then dive into MPLS traffic engineering and all the benefits it provides for network engineers and designers, as well as MPLS TE myths and half-truths.

Traffic engineering's role in next-generation networks

Traditional service provider networks provided Layer 2 point-to-point virtual circuits with contractually predefined bandwidth. Regardless of the technology used to implement the service (X.25, Frame Relay or ATM), the traffic engineering (optimal distribution of load across all available network links) was inherent in the process.

In most cases, the calculation of the optimum routing of virtual circuits was done off-line by a network management platform; advanced networks (offering Frame Relay or ATM switched virtual circuits) also offered real-time on-demand establishment of virtual circuits. However, the process was always the same:

  • The free network capacity was examined.
  • The end-to-end hop-by-hop path throughout the network that satisfied the contractual requirements (and, if needed, met other criteria) was computed.
  • A virtual circuit was established along the computed path.
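The three steps above can be sketched in Python. This is only an illustration of the idea (prune links without enough free capacity, compute the cheapest remaining path, then reserve along it); the topology, names and bandwidth figures are invented:

```python
import heapq

def compute_circuit(links, src, dst, demand):
    """Steps 1-2: keep only links with enough free capacity,
    then find the cheapest end-to-end path with Dijkstra."""
    adj = {}
    for (u, v), (cost, cap) in links.items():
        if cap >= demand:                      # step 1: capacity check
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in adj.get(node, ()):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    if dst not in prev:
        return None                            # no feasible path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def establish_circuit(links, path, demand):
    """Step 3: reserve the demand on every link along the path."""
    for u, v in zip(path, path[1:]):
        key = (u, v) if (u, v) in links else (v, u)
        cost, cap = links[key]
        links[key] = (cost, cap - demand)

# Hypothetical topology: link -> (cost, free capacity in Mbit/s)
links = {("A", "B"): (1, 50), ("B", "C"): (1, 50),
         ("A", "D"): (2, 200), ("D", "C"): (2, 200)}

# A 100 Mbit/s circuit cannot use the cheap A-B-C path (only 50 free),
# so it is routed over A-D-C instead.
print(compute_circuit(links, "A", "C", 100))   # -> ['A', 'D', 'C']
```

The same prune-then-shortest-path idea reappears later in MPLS TE constraint-based path computation.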

Internet and most IP-based services, including IP-based virtual private networks (VPNs) implemented with MPLS VPN, IPsec or Layer 2 transport protocol (L2TP), follow a completely different service model:

  • The traffic contract specifies ingress and egress bandwidth for each site, not site-to-site traffic requirements.
  • Every IP packet is routed through the network independently, and every router in the path makes independent next-hop decisions.
  • Once merged, all packets toward the same destination take the same path (whereas multiple virtual circuits toward the same site could traverse different links).

Simplified to the extreme, the two paradigms could be expressed as follows:

  • Layer 2 switched networks assume that the bandwidth is expensive and try to optimize its usage, resulting in complex circuit setup mechanisms and expensive switching methods.
  • IP networks assume that the bandwidth is "free" and focus on low-cost, high-speed switching of a high volume of traffic.

The significant difference between the cost per switched megabit of a Layer 2 network (for example, ATM) and a routed (IP) network has forced nearly all service providers to build next-generation networks exclusively on IP. Even in modern fiber-optic networks, however, bandwidth is not totally free, and there are always scenarios where you could use the free resources of an underutilized link to ease the pressure on an overloaded path. Effectively, you would need traffic engineering capabilities in routed IP networks, but they are simply not available in the traditional hop-by-hop, destination-only routing model that most IP networks use.

Various approaches (including creative designs, as well as new technologies) have been tried to bring the traffic engineering capabilities to IP-based networks. We can group them roughly into these categories:

  • The network core uses Layer 2 switched technology (ATM or Frame Relay) that has inherent traffic engineering capabilities. Virtual circuits are then established between edge routers as needed.
  • IP routing tricks are used to modify the operation of IP routing protocols, resulting in adjustments to the path the packets are taking through the network.
  • Deployment of IP-based virtual circuit technologies, including IP-over-IP tunnels and MPLS traffic engineering.

The Layer 2 network core design was used extensively when the service providers were introducing IP as an additional service into their WAN networks. Many large service providers have already dropped this approach because it does not result in the cost reduction or increase in switching speed that pure IP-based networks bring.


MPLS TE technology overview

Excerpted from the Cisco Press book, QoS for IP/MPLS Networks, Chapter 2

MPLS TE Technology Overview presents a review of Multiprotocol Label Switching Traffic Engineering (MPLS TE) and describes the basic operation of the technology. This description includes details of TE information distribution, path computation, and signaling of TE LSPs. Subsequent sections present how Differentiated Services (DiffServ)-Aware traffic engineering (DS-TE) helps integrate the implementation of DiffServ and MPLS TE. This chapter closes with a review of the fast reroute capabilities in MPLS TE.

The IP routing tricks try to shift the traffic load to underutilized links by artificially lowering their cost, thus making them look more attractive to routing protocols like OSPF or IS-IS. Fine-tuning the link costs in a complex network to achieve good traffic distribution is almost impossible, so this approach works only in niche situations. Significantly better results can be achieved with Border Gateway Protocol (BGP) thanks to a rich set of attributes it can carry with every IP route. Note that BGP was originally designed to support various routing policies, so you could implement rudimentary traffic engineering as yet another routing policy.
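A toy calculation shows why cost tuning is such a blunt instrument: a shortest-path protocol moves all traffic between two points at once when a link cost changes. The link names and cost values below are hypothetical:

```python
# OSPF/IS-IS pick the path with the lowest total cost; changing one
# link's cost moves *all* traffic between the affected endpoints at
# once, which is why fine-grained load distribution is so hard.
costs = {("A", "B"): 10, ("B", "C"): 10, ("A", "D"): 10, ("D", "C"): 30}

def path_cost(path):
    return sum(costs[(u, v)] for u, v in zip(path, path[1:]))

top, alt = ["A", "B", "C"], ["A", "D", "C"]
print(path_cost(top), path_cost(alt))   # 20 vs 40: everything uses A-B-C

costs[("A", "B")] = 50                  # artificially raise the busy link
print(path_cost(top), path_cost(alt))   # 60 vs 40: everything flips to A-D-C
```

There is no cost value for A-B that would split the load between the two paths (short of equal-cost ties), which is the core limitation of this approach.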

Virtual circuits implemented with IP-over-IP tunnels (using a variety of technologies) are approximately as complex as routing protocol cost-tuning and so are better avoided (although they could still represent a valuable temporary fix). MPLS traffic engineering (MPLS TE), on the other hand, is a complete implementation of traffic engineering technology rivaling the features available in advanced ATM or Frame Relay networks. For example:

  • The MPLS TE network tracks available resources on each link using extensions to IP routing protocols (only OSPF and IS-IS are supported, as MPLS TE needs full visibility of network topology, which is not available with any other routing protocol).
  • Whenever a new tunnel (the MPLS TE terminology for virtual circuit) needs to be established, the head-end router computes the end-to-end path through the network based on the reported state of available resources.
  • The tunnel establishment request is signaled hop-by-hop from the tunnel head-end to the tunnel tail router, reserving resources on every hop.
  • After the tunnel is established, the new path is seamlessly integrated with the routing protocols running in the network.

The support for MPLS TE is available in high-end and midlevel routers from multiple vendors. It's therefore highly advisable that you consider the requirements of MPLS TE (OSPF or IS-IS, for example) in your network design. If you implement the basic infrastructure needed by MPLS TE during the network deployment, you'll have it ready to use when you need to shift the traffic to cope with unexpected increases in bandwidth usage or delayed deployment of higher-speed links.


MPLS traffic engineering essentials

MPLS (Multiprotocol Label Switching) is the end result of the efforts to integrate Layer 3 switching, better known as routing, with Layer 2 WAN backbones, primarily ATM. Even though the IP+ATM paradigm is mostly gone today because of the drastic shift to IP-only networks in the last few years, MPLS retains a number of useful features from Layer 2 technologies. One of the most notable is the ability to send packets across the network through a virtual circuit, called a Label Switched Path (LSP) in MPLS terminology.

NOTE: While the Layer 2 virtual circuits are almost always bidirectional (although the traffic contracts in each direction can be different), the LSPs are always unidirectional. If you need bidirectional connectivity between a pair of routers, you have to establish two LSPs.

The LSPs in MPLS networks are usually established based on the contents of IP routing tables in core routers. However, there is nothing that would prevent LSPs being established and used through other means, provided that:

  • All the routers along the path agree on a common signaling protocol.
  • The router where the LSP starts (head-end router) and the router where the LSP ends (tail-end router) agree on what's traveling across the LSP.

NOTE: The other routers along the LSP do not inspect the packets traversing the LSP and are thus oblivious to their content; they just need to understand the signaling protocol that is used to establish the LSP.

With the necessary infrastructure in place, it was only a matter of time before someone would get the idea to use LSPs to implement MPLS-based traffic engineering -- and the first implementation in Cisco IOS closely followed the introduction of base MPLS (which at that time was called tag switching). The MPLS traffic engineering technology has evolved and matured significantly since then, but the concepts have not changed much since its introduction:

  • The network operator configures an MPLS traffic engineering path on the head-end router. (In Cisco's and Juniper's devices, the configuration mechanism involves a tunnel interface that represents the unidirectional MPLS TE LSP.)
  • The head-end router computes the best hop-by-hop path across the network, based on resource availability advertised by other routers. Extensions to link-state routing protocols (OSPF or IS-IS) are used to advertise resource availability.

NOTE: The first MPLS TE implementations supported only static hop-by-hop definitions. These can still be used in situations where you need a very tight hop-by-hop control over the path the MPLS TE LSP will take or in networks using a routing protocol that does not have MPLS TE extensions.

  • The head-end router requests LSP establishment using a dedicated signaling protocol. As is often the case, two protocols were designed to provide the same functionality, with Cisco and Juniper implementing RSVP-TE (RSVP extensions for traffic engineering) and Nortel/Nokia favoring CR-LDP (constraint-based routing using label distribution protocol).
  • The routers along the path accept (or reject) the MPLS TE LSP establishment request and set up the necessary internal MPLS switching infrastructure.
  • When all the routers in the path accept the LSP signaling request, the MPLS TE LSP is operational.
  • The head-end router can use MPLS TE LSP to handle special data (initial implementations only supported static routing into MPLS traffic engineering tunnels) or seamlessly integrate the new path into the link-state routing protocol.
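The hop-by-hop signaling step can be sketched in Python: each router along the head-end's computed path either reserves the requested bandwidth or rejects the request, in which case the partial setup is torn down. Router names and bandwidth figures below are invented for illustration:

```python
# Hypothetical per-link available bandwidth (Mbit/s) seen by each hop.
available = {("PE1", "P1"): 300, ("P1", "P2"): 150, ("P2", "PE2"): 300}

def signal_tunnel(path, bandwidth):
    """Hop-by-hop admission along the head-end's computed path:
    every hop either reserves the bandwidth or rejects the request,
    in which case the reservations made so far are released."""
    reserved = []
    for link in zip(path, path[1:]):
        if available[link] < bandwidth:
            for done in reserved:          # roll back the partial setup
                available[done] += bandwidth
            return False
        available[link] -= bandwidth
        reserved.append(link)
    return True                            # all hops accepted; LSP is up

path = ["PE1", "P1", "P2", "PE2"]
print(signal_tunnel(path, 100))   # True: every hop accepts
print(signal_tunnel(path, 100))   # False: P1-P2 has only 50 Mbit/s left
```

In the real protocols (RSVP-TE or CR-LDP) the request and the reservation travel in separate message passes, but the admit-or-reject logic per hop is the same idea.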

The tight integration of MPLS traffic engineering with the IP routing protocols provides an important advantage over the traditional Layer 2 WAN networks. In the Layer 2 backbones, the operator had to establish all the virtual circuits across the backbone (using a network management platform or by configuring switched virtual circuits on edge devices), whereas MPLS TE can automatically augment and enhance the mesh of LSPs already established based on the network topology discovered by IP routing protocols. You can thus use MPLS traffic engineering as a short-term measure to relieve temporary network congestion or as a network core optimization tool without involving the edge routers.

In recent years, MPLS traffic engineering technology (and its implementation) has grown well beyond features offered by traditional WAN networks. For example:

  • Fast reroute provides temporary bypass of network failure (be it link or node failure) comparable to SONET/SDH reroute capabilities.
  • Re-optimization allows the head-end routers to utilize resources that became available after the LSP was established.
  • Make-before-break signaling enables the head-end router to provision the optimized LSP before tearing down the already established LSP.

NOTE: Thanks to RSVP-TE functionality, the reservations on the path segments common to old and new LSP are not counted twice.

  • Automatic bandwidth adjustments measure the actual traffic sent across an MPLS TE LSP and adjust its reservations to match the actual usage.
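The shared-reservation behavior behind make-before-break can be sketched as follows; links common to the old and new LSP are admitted without additional bandwidth, so the reservation is not counted twice. All link names and figures are hypothetical:

```python
# Hypothetical available bandwidth per link, after the old LSP has
# already reserved its bandwidth (hence 0 free on its links).
available = {("A", "B"): 0, ("B", "C"): 0, ("B", "D"): 100, ("D", "C"): 100}

def make_before_break(old_links, new_links, bandwidth):
    """Admit the new LSP while the old one still exists: links shared
    with the old LSP need no extra bandwidth, so the reservation
    there is not counted twice."""
    shared = set(old_links) & set(new_links)
    for link in new_links:
        if link not in shared and available[link] < bandwidth:
            return False                  # new path does not fit
    for link in new_links:
        if link not in shared:
            available[link] -= bandwidth
    # Traffic is switched over, then the old LSP is torn down and its
    # reservation on the now-unused links is released.
    for link in old_links:
        if link not in shared:
            available[link] += bandwidth
    return True

old = [("A", "B"), ("B", "C")]
new = [("A", "B"), ("B", "D"), ("D", "C")]   # A-B is shared with the old LSP
print(make_before_break(old, new, 100))      # True, despite A-B showing 0 free
```

Without the shared-link exception, the new LSP would be rejected on A-B even though tearing down the old LSP would immediately free that bandwidth.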

10 MPLS traffic engineering myths and half-truths

As with any complex technology, network engineers, designers and consultants tend to misunderstand some nuances of MPLS traffic engineering, resulting in myths and half-truths that are propagated throughout the industry. Here I will address some of the more common ones. The analysis is based on MPLS TE technology as described in various Internet Engineering Task Force (IETF) documents, as well as the current implementation available in Cisco IOS releases 12.4T and 12.2S.

1. Myth: MPLS TE is a quality-of-service feature. While MPLS TE can be used to shift traffic from overloaded paths to alternate paths with free bandwidth, it contains no inherent quality-of-service (QoS) features like guaranteed bandwidth, policing or shaping. The quality-of-service features have to be designed and deployed separately on top of the MPLS TE infrastructure. The deployment of MPLS TE in a network does not (by itself) improve the quality of its services.

2. Half-truth: MPLS TE improves network convergence. The MPLS Fast Reroute functionality provides a temporary fix to a link or node failure by shifting the MPLS TE-encapsulated traffic to a preconfigured bypass (no rerouting is provided for regular IP traffic). The convergence and subsequent network topology re-optimization is still performed by the IP routing protocols.

3. Myth: MPLS TE has to be deployed throughout the network. You can use MPLS TE in tactical situations, for example, between a pair of routers to shift the traffic away from a congested link or to provide a fast reroute protection of a critical link in your network.

4. Half-truth: MPLS TE can solve the network congestion issues. MPLS TE does not create new bandwidth; it only allows you to use the existing bandwidth more efficiently. You can use the MPLS TE tunnels to shift the traffic from the lowest-cost path computed by IP routing protocols to an alternate less-utilized path, temporarily relieving the congested link. But that action might cause the congestion of the alternate path, resulting in a domino effect throughout the network.

5. Myth: Bandwidth reserved by an MPLS TE tunnel will be available to the tunneled traffic. Although the MPLS TE technology uses extensions to the Resource Reservation Protocol (RSVP), which was originally designed to provide end-to-end QoS in IP networks, the MPLS TE RSVP reservations serve solely as an accounting mechanism in the MPLS TE module. This prevents link oversubscription by MPLS TE paths. MPLS TE reservations do not result in any QoS actions on the intermediate nodes. Without manual QoS configuration on the intermediate nodes, MPLS TE traffic is treated no differently from regular IP or MPLS traffic.

6. Myth: To use MPLS TE, you have to deploy MPLS in your network. MPLS TE can work without network-wide MPLS deployment. Traffic can be sent across MPLS TE tunnels without a label distribution protocol (LDP or TDP). Note: If you're running MPLS-based Virtual Private Networks (VPNs), you have to run LDP over an MPLS TE tunnel unless it terminates at the edge of your network on a Provider Edge (PE) router.

7. Half-truth: MPLS TE only works with OSPF and IS-IS routing protocols. MPLS TE paths can be configured manually (specifying all hops in the path) and independently of the IP routing protocol deployed in the network. However, if you want to have automatic path calculations and automatic rerouting of IP traffic onto MPLS TE paths, you have to use OSPF or IS-IS.

8. Half-truth: If you use MPLS TE Fast Reroute, the quality of service will not degrade following a network failure. MPLS TE Fast Reroute shifts MPLS TE tunnels established across a failed link or node onto preconfigured backup tunnels. The overall quality of service will not degrade only if:

  • These tunnels have adequate bandwidth;
  • There is enough free capacity on the backup paths;
  • The quality-of-service mechanisms guarantee the bandwidth to the backup tunnels.

In all other cases, either the rerouted traffic or the traffic traversing the backup path prior to node or link failure will encounter degraded quality-of-service.

9. Half-truth: You can use MPLS TE only within a single OSPF area. Inter-area MPLS traffic engineering is available, but it has severe limitations:

  • The MPLS TE path cannot be computed automatically; you have to manually specify at least the Area Border Routers (ABRs) the MPLS TE path crosses.
  • Automatic mapping of IP traffic onto MPLS TE paths (autoroute) is not available, as the router establishing the MPLS TE path does not know the exact topology of other OSPF areas.
  • Inter-area MPLS TE paths cannot be re-optimized after they have been established.

Note: The same is true for IS-IS. Truly dynamic MPLS TE tunnels can be established within a single IS-IS level; they can cross the level boundary only if you manually configure the transition points.

10. No longer true: You can't differentiate customer traffic based on Class-of-Service if you use MPLS TE. The technology itself never had this limitation, but for a long time Cisco IOS did not support multiple parallel tunnels carrying different traffic classes.

About the Author
Ivan Pepelnjak, CCIE No. 1354, is a 25-year veteran of the networking industry. He has more than 10 years of experience in designing, installing, troubleshooting and operating large service provider and enterprise WAN and LAN networks and is currently chief technology advisor at NIL Data Communications, focusing on advanced IP-based networks and web technologies. His books include MPLS and VPN Architectures and EIGRP Network Design. You can read his blog here: http://ioshints.blogspot.com/index.html.


How a cloud provider and a colocation center use virtual switching to integrate physical and virtual networks

Editor's note: In the first part of this series on integrating physical and virtual networks, we examine the role of virtual switching in networking across environments. In part two, we highlight two examples of virtual switching in action.

Not every company is ready to move to full SDN or network virtualization, but there are plenty of measures to take along the way to be sure the virtual and physical worlds are communicating.

Cloud provider Iland, which is primarily a Cisco switch and router shop, takes advantage of VMware’s integration of Cisco Discovery Protocol (CDP) messaging into its virtual switches.

When a network team member adds network components, creates a VLAN on a physical switch, or works with MAC addresses, the CDP Messaging System integration makes these things clear, said Iland’s Giardina. “When we bring up a VM, whether we need to make sure it follows an IP address policy or a port security policy or a VLAN policy, this is all transparent to the hardware side,” he said.

Engineers trained on Cisco hardware can easily apply what they know to the virtualization stack and they can use this communication to apply virtual network components and services to network segments.



“In the past, we had to deal with multiple firewalls and multiple routers for each customer. VMware enables us to spin up iterations of its virtual firewall called the vShield Edge (a part of vCloud Networking and Security) and still have transparency at the network layer to administer everything. And now we don’t have to provision that extra hardware,” Giardina said. This creates savings in time, CAPEX, person hours and training. “We can virtualize everything and the only cost is the monthly recurring cost to run the existing gear,” Giardina said.


Rackforce uses Cisco Nexus 1000v

For Rackforce, a provider of data center services, Cisco’s Nexus 1000v virtual switch met a couple of challenges to integrating the virtual edge.

First, all of Rackforce’s equipment is dual-homed, using multiple upstream switch fabrics. Rackforce uses IBM blade centers and Cisco UCS chassis with dual-homed switching, using fabric A and fabric B. VMware did not support two fabrics in an active/active mode when Rackforce was looking for a vswitch solution. “The only way to do that was using the Cisco Nexus 1000v with MAC pinning,” said Denis Skrinnikoff, director of networking at Rackforce, a Cisco customer. This created an active/active port channel to different fabrics without having to rely on LACP or vPC, the protocols typically used for multi-chassis link aggregation but not supported by Cisco UCS and the IBM blade switches.

The second challenge for Rackforce was policy enforcement. “Using the Cisco Nexus 1000v, we identify and observe the traffic to each VM. I can use SNMP from the virtual switch and integrate my existing monitoring tools to see each VM and the amount of traffic it is using, and to look at the flows and where the traffic is going,” said Skrinnikoff. This enables end-to-end QoS and policy enforcement. “I already know Cisco, and I know NX-OS, and I know how to create the policy map. I can use these skill sets to enforce policies in the virtual world.”

With the Cisco Nexus 1000v, an engineer can integrate existing provisioning engines, script the network deployments, and have a single consistent network configuration from the virtual to the physical, Skrinnikoff explains.

Rackforce’s existing virtual networking topology uses Layer 2 isolation in which VLANs segment traffic in isolated, secure environments for each tenant’s traffic. “We have hundreds to thousands of VLANs running to each of our cloud infrastructures. We broke it out into multiple clouds. We are in the process of deploying a VXLAN overlay using vCloud Director,” said Skrinnikoff. This will ease scaling for Rackforce’s virtual network.

“VXLAN is simple to integrate, easy to implement, and is the most widely supported by the switch vendors we use,” said Skrinnikoff. The Cisco Nexus 1000v supports VXLAN.

About the author
David Geer writes about security and enterprise technology for international trade and business publications.