Asynchronous transfer mode

25 Mbit/s ATM network card with PCI interface and twisted-pair connection.

Asynchronous Transfer Mode (ATM) is, according to the now-defunct ATM Forum, "a telecommunications concept defined by ANSI and ITU standards for the transport of a complete range of user traffic, including voice, data, and video signals". ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and was designed to integrate telecommunications networks. It was also designed for networks that must handle both traditional high-throughput data traffic (for example, file transfers) and real-time, low-latency content such as voice and video. The reference model for ATM approximately maps to the three lowest layers of the ISO-OSI reference model: network layer, data link layer, and physical layer.

ATM provides functionality similar to both circuit-switched and packet-switched networks: it uses asynchronous time-division multiplexing and encodes data into small, fixed-size packets (ISO-OSI frames) called cells. This differs from approaches such as Internet Protocol or Ethernet, which use variable-sized packets and frames. ATM uses a connection-oriented model in which a virtual circuit must be established between the two endpoints before the actual data exchange begins. These virtual circuits may be "permanent", that is, dedicated connections that are usually preconfigured by the service provider, or "switched", that is, set up on a per-call basis using signaling and disconnected when the call ends.

ATM is a core protocol used over the SONET/SDH backbone of the public switched telephone network and the Integrated Services Digital Network (ISDN), but its use is declining in favor of next-generation networks, in which communication is based on the IP protocol.

Layer 2 - frames and cells

In the data link layer of the ISO-OSI reference model (layer 2), the basic transfer units are generically called frames. In ATM these frames are of a fixed length (53 octets or bytes) and are specifically called "cells".

Cell Size

If a voice signal is reduced to packets and is forced to share a link with bursty data traffic (traffic with some large data packets), then no matter how small the voice packets are made, they will always run into full-size data packets. Under normal queuing conditions, the cells might experience maximal queuing delays. To avoid this problem, all ATM packets, or "cells", are the same small size. In addition, the fixed cell structure means that ATM can be readily switched by hardware without the inherent delays introduced by software-switched and routed frames.

The ATM designers therefore used small data cells to reduce jitter (delay variation, in this case) in the multiplexing of data streams. Reducing jitter (and also end-to-end round-trip delays) is particularly important when carrying voice traffic, because the conversion of digitized speech back into an analog audio signal is an inherently real-time process, and to do a good job the codec that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or to guess; and if the data arrives late, it is useless, because the time period in which it should have been converted into a signal has already passed.

At the time ATM was designed, 155 Mbit/s Synchronous Digital Hierarchy (SDH), with a 135 Mbit/s payload, was considered a fast optical network link, and many plesiochronous digital hierarchy (PDH) links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the US and from 2 to 34 Mbit/s in Europe.

At 155 Mbit/s, a typical full-length 1,500-byte (12,000-bit) data packet, large enough to hold a maximum-size IP packet for Ethernet, would take 77.42 µs to transmit. On a lower-speed link, such as a 1.544 Mbit/s T1 line, the same packet would take up to 7.8 milliseconds.
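To make this arithmetic explicit, the short Python sketch below reproduces these serialization-delay figures; the function name and the choice of line rates are illustrative only.

```python
def serialization_delay_us(packet_bytes: int, link_bits_per_s: float) -> float:
    """Time in microseconds to clock packet_bytes onto a link at the given line rate."""
    return packet_bytes * 8 / link_bits_per_s * 1e6

# 1,500-byte frame on a 155 Mbit/s SDH link:
print(serialization_delay_us(1500, 155e6))    # ~77.4 us
# The same frame on a 1.544 Mbit/s T1 link:
print(serialization_delay_us(1500, 1.544e6))  # ~7772 us, i.e. ~7.8 ms
# A single 53-byte ATM cell on the T1 link, for comparison:
print(serialization_delay_us(53, 1.544e6))    # ~275 us
```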

A queuing delay induced by several such data packets might be several times the figure of 7.8 ms, in addition to any packet generation delay in the shorter voice packet. This was clearly unacceptable for voice traffic, which needs low jitter in the data stream fed into the codec if it is to produce good-quality sound. A packetized voice system can produce this low jitter in several ways:

  • Using a playout buffer between the network and the codec, one large enough to tide the codec over almost all of the jitter in the data. This allows the jitter to be smoothed out, but the delay introduced by passing through the buffer requires echo cancellers even in local networks; this was considered too expensive at the time. It also increased the delay across the channel and made conversation difficult over high-delay channels.
  • Using a system that inherently provides low jitter (and minimal overall delay) to the traffic that needs it.
  • Operating on a 1:1 user basis (i.e. a dedicated pipe).

ATM was designed for a low-jitter network interface; however, "cells" were introduced into the design to provide short queuing delays while continuing to support datagram traffic. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each so that they could be reassembled later. The choice of 48 bytes was political rather than technical.

When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was considered a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) simplifies voice applications with respect to echo cancellation. Most of the European parties eventually came around to the American arguments, but France and a few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as the compromise between the two sides. 5-byte headers were chosen because 10% of the payload was thought to be the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets, which reduced worst-case cell contention jitter by a factor of almost 30, reducing the need for echo cancellers.
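As a rough illustration of that "factor of almost 30", the worst-case wait behind another transmission scales with the length of the unit being transmitted; the Python below simply compares a full-size frame with a single 53-byte cell on a T1 link (numbers are illustrative).

```python
def worst_case_wait_us(unit_bytes: int, link_bits_per_s: float) -> float:
    """Worst-case time a cell waits behind one unit that has just started transmitting."""
    return unit_bytes * 8 / link_bits_per_s * 1e6

t1 = 1.544e6
frame_wait = worst_case_wait_us(1500, t1)  # stuck behind a full-size packet
cell_wait = worst_case_wait_us(53, t1)     # stuck behind a single ATM cell
print(frame_wait / cell_wait)              # ~28.3, i.e. a factor of nearly 30
```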

Structure of an ATM cell

An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was chosen as described above.

ATM defines two different cell formats: user-network interface (UNI) and network-to-network interface (NNI). Most ATM links use the UNI cell format.

Diagram of an ATM UNI cell (each row is one byte; bit 7 is the most significant bit):

  GFC (4 bits) | VPI (4 bits)
  VPI (4 bits) | VCI (4 bits)
  VCI (8 bits)
  VCI (4 bits) | PT (3 bits) | CLP (1 bit)
  HEC (8 bits)
  Payload and padding if necessary (48 bytes)

Diagram of an ATM NNI cell (each row is one byte; bit 7 is the most significant bit):

  VPI (8 bits)
  VPI (4 bits) | VCI (4 bits)
  VCI (8 bits)
  VCI (4 bits) | PT (3 bits) | CLP (1 bit)
  HEC (8 bits)
  Payload and padding if necessary (48 bytes)

GFC = Generic Flow Control (4 bits): originally added to support the connection of ATM networks to shared access networks such as a Distributed Queue Dual Bus (DQDB) ring. The GFC field was designed to give the User-Network Interface (UNI) 4 bits in which to negotiate multiplexing and flow control among the cells of various ATM connections. However, the use and exact values of the GFC field have never been standardized, and the field is always set to 0000.
VPI = Virtual Path Identifier (8 bits UNI, or 12 bits NNI)
VCI = Virtual Channel Identifier (16 bits)
PT = Payload Type (3 bits):
PT bit 3 (msbit): network management cell; if 0, it is a user data cell and the following apply:
PT bit 2: explicit forward congestion indication (EFCI); 1 = network congestion experienced
PT bit 1 (lsbit): ATM user-to-user (AAU) bit; used by AAL5 to indicate packet boundaries
CLP = Cell Loss Priority (1 bit)
HEC = Header Error Control (8-bit CRC, polynomial = x^8 + x^2 + x + 1)
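As an illustration of how these fields pack into the 5-byte header, here is a hedged Python sketch; the function names are invented for this example, the HEC is computed as the 8-bit CRC named above over the first four header bytes, and the final XOR with 0x55 reflects the coset addition commonly described for ITU-T I.432 (an assumption on my part rather than something stated in this article).

```python
def crc8_atm(data: bytes) -> int:
    """CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), as used for the ATM HEC."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def pack_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """Build a 5-byte UNI cell header, computing the HEC over the first 4 bytes."""
    word = ((gfc & 0xF) << 36 | (vpi & 0xFF) << 28 | (vci & 0xFFFF) << 12
            | (pt & 0x7) << 9 | (clp & 0x1) << 8)
    first4 = word.to_bytes(5, "big")[:4]
    hec = crc8_atm(first4) ^ 0x55          # coset 01010101 per I.432 (assumption)
    return first4 + bytes([hec])

def unpack_uni_header(header: bytes) -> dict:
    """Split a 5-byte UNI header back into its fields (does not verify the HEC)."""
    word = int.from_bytes(header[:4], "big")
    return {
        "gfc": word >> 28 & 0xF,
        "vpi": word >> 20 & 0xFF,
        "vci": word >> 4 & 0xFFFF,
        "pt": word >> 1 & 0x7,
        "clp": word & 0x1,
        "hec": header[4],
    }

cell_header = pack_uni_header(gfc=0, vpi=1, vci=100, pt=0, clp=0)
print(unpack_uni_header(cell_header))
```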

ATM uses the PT field to designate various special kinds of cells for operations, administration and management (OAM) purposes, and to delineate packet boundaries in some ATM adaptation layer (AAL) schemes. If the most significant bit (MSB) of the PT field is 0, it is a user data cell, and the other two bits are used to indicate network congestion and as a general-purpose header bit available to ATM adaptation layers. If the MSB is 1, it is a management cell, and the other two bits indicate the type (network management segment, end-to-end network management, resource management, or reserved for future use).

Several ATM link protocols use the HEC field to drive a CRC-based framing algorithm, which allows ATM cells to be located without any overhead beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and to detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found.
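The sketch below illustrates one plausible form of such an HEC-driven cell delineation state machine (HUNT/PRESYNC/SYNC). The ALPHA and DELTA thresholds shown are the values commonly cited for ITU-T I.432, and the class layout is invented for this example rather than taken from any specific implementation.

```python
HUNT, PRESYNC, SYNC = "HUNT", "PRESYNC", "SYNC"
ALPHA, DELTA = 7, 6   # consecutive bad / good HECs needed to lose / confirm alignment

class CellDelineator:
    def __init__(self):
        self.state = HUNT
        self.count = 0

    def on_header(self, hec_ok: bool) -> str:
        """Advance the state machine for one candidate 5-byte header."""
        if self.state == HUNT:
            if hec_ok:                       # first correct HEC: tentative alignment
                self.state, self.count = PRESYNC, 1
        elif self.state == PRESYNC:
            if hec_ok:
                self.count += 1
                if self.count >= DELTA:      # enough confirmations: in sync
                    self.state, self.count = SYNC, 0
            else:                            # false alignment, go back to hunting
                self.state, self.count = HUNT, 0
        else:  # SYNC
            if hec_ok:
                self.count = 0
            else:
                self.count += 1
                if self.count >= ALPHA:      # too many bad headers: alignment lost
                    self.state, self.count = HUNT, 0
        return self.state
```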

A UNI cell reserves the GFC field for a local flow control/submultiplexing system between users. This was intended to allow several terminals to share a single network connection, in the same way that two Integrated Services Digital Network (ISDN) telephones can share a single basic rate ISDN connection. All four GFC bits must be zero by default.

The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is reallocated to the VPI field, extending the VPI to 12 bits. Thus, a single ATM NNI interconnection is capable of addressing almost 2^12 VPs of up to almost 2^16 VCs each (in practice some of the VP and VC numbers are reserved).

Cells in practice

ATM supports different types of services via AALs. Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation; synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bit rate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell: instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis.

Since ATM was designed, networks have become much faster. A 1,500-byte (12,000-bit) full-size Ethernet frame takes only 1.2 µs to transmit on a 10 Gbit/s network, reducing the need for small cells to limit jitter due to contention. Some consider that this makes a case for replacing ATM with Ethernet in the network backbone. However, it should be noted that increased link speeds by themselves do not alleviate jitter due to queuing. Additionally, the hardware for implementing the service adaptation for IP packets is expensive at very high speeds. Specifically, at speeds of OC-3 and above, the cost of segmentation and reassembly (SAR) hardware makes ATM less competitive for IP than Packet over SONET (POS); because of its fixed 48-byte cell payload, ATM is not suitable as a data link layer directly underlying IP (without the need for SAR at the data link level), since the OSI data link layer on which IP operates must provide a maximum transmission unit (MTU) of at least 576 bytes. SAR performance limits mean that the fastest IP router ATM interfaces are STM16 to STM64, while as of 2004 POS could already operate at OC-192 (STM64), with higher speeds expected in the future.

On slower or congested links (622 Mbit/s and below), ATM does make sense, and for this reason most asymmetric digital subscriber line (ADSL) systems use ATM as an intermediate layer between the physical link layer and a Layer 2 protocol such as PPP or Ethernet.

At these lower speeds, ATM provides a useful ability to carry multiple logical circuits on a single physical or virtual medium, although other techniques exist, such as Multi-link PPP and Ethernet VLANs, which are optional in VDSL implementations. DSL can be used as an access method for an ATM network, allowing a DSL termination point in a telephone central office to connect to many Internet service providers across a wide-area ATM network. In the United States, at least, this has allowed DSL providers to offer DSL access to the customers of many Internet service providers. Since one DSL termination point can support multiple ISPs, the economic feasibility of DSL is substantially improved.

Reasons for virtual circuits

ATM operates as a channel-based transport layer, using virtual circuits (VCs). This is encompassed in the concepts of virtual paths (VPs) and virtual channels. Every ATM cell has an 8-bit or 12-bit Virtual Path Identifier (VPI) and a 16-bit Virtual Channel Identifier (VCI) pair defined in its header.

The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. The length of the VPI varies according to whether the cell is sent on the user-network interface (at the edge of the network) or on the network-network interface (inside the network).

As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where a given packet might arrive at its destination by a different route than the others). ATM switches use the VPI/VCI fields to identify the virtual channel link (VCL) of the next network that a cell needs to transit on its way to its final destination. The function of the VCI is similar to that of the data link connection identifier (DLCI) in Frame Relay and the logical channel number and logical channel group number in X.25.
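A minimal sketch of this label swapping, assuming a per-switch forwarding table keyed on the incoming port and VPI/VCI pair; the table contents and names below are purely illustrative.

```python
ForwardingEntry = tuple[int, int, int]          # (out_port, out_vpi, out_vci)

forwarding_table: dict[tuple[int, int, int], ForwardingEntry] = {
    # (in_port, in_vpi, in_vci): (out_port, out_vpi, out_vci)
    (1, 0, 100): (3, 2, 57),
    (2, 5, 33): (1, 0, 300),
}

def switch_cell(in_port: int, vpi: int, vci: int) -> ForwardingEntry:
    """Look up the outgoing port and the rewritten labels for one cell."""
    try:
        return forwarding_table[(in_port, vpi, vci)]
    except KeyError:
        raise ValueError("no virtual channel link configured for this cell") from None

print(switch_cell(1, 0, 100))   # -> (3, 2, 57): same circuit, new local labels
```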

Another advantage of the use of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, n×64 channels, IP) to share a connection. The VPI is useful for reducing the size of the switching table for virtual circuits that share common paths [citation needed].

Using cells and virtual circuits for traffic engineering

Another key ATM concept is the traffic contract. When an ATM circuit is configured, each switch on the circuit is informed of the traffic class of the connection.

ATM traffic contracts are part of the mechanism by which "quality of service" (QoS) is guaranteed. There are four basic types (and several variants) that each have a set of parameters that describe the connection.

  1. CBR - Constant Bit Rate: a Peak Cell Rate (PCR) is specified, which is constant.
  2. VBR - Variable Bit Rate: an average or Sustainable Cell Rate (SCR) is specified, which can peak at a certain level, the PCR, for a maximum interval before becoming problematic.
  3. ABR - Available Bit Rate: a guaranteed minimum rate is specified.
  4. UBR - Unspecified Bit Rate: traffic is allocated whatever transmission capacity remains.

VBR has real-time and non-real-time variants, and serves well for "bursty" traffic. The non-real-time variant is sometimes abbreviated vbr-nrt.

Most traffic classes also introduce the concept of cell delay variation tolerance (CDVT), which defines the "clumping" of cells in time.

Traffic policing

To maintain network performance, networks may apply traffic policing to virtual circuits to limit them to their traffic contracts at the entry points to the network, that is, the user-network interfaces (UNIs) and network-to-network interfaces (NNIs): usage/network parameter control (UPC and NPC). The reference model given by the ITU-T and the ATM Forum for UPC and NPC is the generic cell rate algorithm (GCRA), which is a version of the leaky bucket algorithm. CBR traffic will normally be policed to a PCR and CDVt alone, whereas VBR traffic will normally be policed using a dual leaky bucket controller to a PCR and CDVt as well as an SCR and maximum burst size (MBS). The MBS will normally be the packet (SAR-SDU) size for the VBR VC, expressed in cells.
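The sketch below shows the GCRA in its virtual-scheduling form, policing one cell stream against an increment T (the reciprocal of the contracted cell rate) and a limit tau (the tolerance, e.g. CDVt); parameter values, names and the example numbers are illustrative. Policing VBR with a dual leaky bucket would simply run two such instances, one at PCR/CDVt and one at SCR/MBS.

```python
class GCRA:
    """Generic cell rate algorithm, virtual-scheduling formulation."""

    def __init__(self, increment: float, limit: float):
        self.T = increment    # expected inter-cell spacing (seconds per cell)
        self.tau = limit      # allowed tolerance (seconds)
        self.tat = 0.0        # theoretical arrival time of the next conforming cell

    def conforming(self, arrival: float) -> bool:
        """Return True if a cell arriving at time `arrival` conforms to the contract."""
        if arrival < self.tat - self.tau:
            return False      # too early: non-conforming, may be dropped or tagged (CLP=1)
        self.tat = max(arrival, self.tat) + self.T
        return True

# Police a PCR of 1000 cells/s with 0.5 ms tolerance (illustrative numbers):
policer = GCRA(increment=1e-3, limit=5e-4)
for t in (0.0, 0.0002, 0.0011, 0.0012):
    print(t, policer.conforming(t))   # the cells at 0.0002 s and 0.0012 s are non-conforming
```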

Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic (since discarding a single cell will invalidate the whole packet). As a result, schemes such as partial packet discard (PPD) and early packet discard (EPD) have been created that discard a whole series of cells until the next packet starts. This reduces the number of useless cells in the network, saving bandwidth for full packets. EPD and PPD work with AAL5 connections because they use the end-of-packet marker: the ATM user-to-ATM user (AUU) indication bit in the payload type field of the header, which is set in the last cell of a SAR-SDU.
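As a rough illustration of early packet discard, the sketch below drops every cell of a newly arriving AAL5 packet when the queue is already above a threshold, and resumes accepting cells only after the end-of-packet cell (AUU bit set) has gone by. The queue model, threshold and names are invented for this example.

```python
class EPDQueue:
    def __init__(self, threshold: int, capacity: int):
        self.threshold = threshold
        self.capacity = capacity
        self.queue: list[bytes] = []
        self.in_packet = False      # between the first and last cell of an AAL5 packet
        self.discarding = False     # True while dropping the rest of a packet

    def on_cell(self, cell: bytes, end_of_packet: bool) -> bool:
        """Offer one cell to the queue; return True if it was accepted."""
        if not self.in_packet:                       # first cell of a new AAL5 packet
            self.discarding = len(self.queue) > self.threshold
            self.in_packet = True
        accepted = not self.discarding and len(self.queue) < self.capacity
        if accepted:
            self.queue.append(cell)
        if end_of_packet:                            # AUU bit set on the last cell
            self.in_packet = False
            self.discarding = False
        return accepted
```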

Traffic shaping

Traffic shaping usually takes place in the network interface card (NIC) in the user equipment, and attempts to ensure that the cell flow on a VC will meet its traffic contract, i.e. that cells will not be dropped or reduced in priority at the UNI. Since the reference model given for traffic policing in the network is the GCRA, this algorithm is normally used for shaping as well, and single and dual leaky bucket implementations may be used as appropriate.

Types of virtual circuits and paths

ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits, or PVCs) or paths (permanent virtual paths, or PVPs) require that the circuit be composed of a series of segments, one for each pair of interfaces through which it passes.

PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs, or SPVPs) and PVCs (soft PVCs, or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service "contract") and the two endpoints.

ATM networks create and remove switched virtual circuits (SVCs) on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches is interconnected using ATM. SVCs were also used in attempts to replace local area networks with ATM.

Virtual Circuit Routing

Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network-to-Network Interface or Private Network Node Interface (PNNI) protocol. PNNI uses the same shortest-path-first algorithm used by OSPF and IS-IS to route IP packets in order to share topology information between switches and select a route through a network. PNNI also includes a very powerful summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm that determines the availability of sufficient bandwidth on a proposed route through a network to satisfy the service requirements of a VC or VP.
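A hedged sketch of the admission decision itself: a connection is accepted only if every link on the proposed route has enough spare bandwidth for the requested rate, which is then reserved. PNNI's actual CAC is considerably richer than this; the function, link names and numbers below are purely illustrative.

```python
def admit_call(route: list[str], requested_cells_per_s: float,
               available: dict[str, float]) -> bool:
    """Return True and reserve bandwidth if every link on the route can carry the new VC."""
    if any(available[link] < requested_cells_per_s for link in route):
        return False                                 # not enough bandwidth somewhere: reject
    for link in route:
        available[link] -= requested_cells_per_s     # reserve on every link of the route
    return True

links = {"A-B": 100_000.0, "B-C": 40_000.0}          # spare capacity in cells/s
print(admit_call(["A-B", "B-C"], 30_000.0, links))   # True: admitted and reserved
print(admit_call(["A-B", "B-C"], 30_000.0, links))   # False: B-C has only 10,000 cells/s left
```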

Call admission and connection establishment

A network must establish a connection before two parties can send cells to each other. In ATM this is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively at the endpoints, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signalling, in which the requesting party specifies the address of the receiving party, the type of service requested, and whatever traffic parameters apply to the selected service. The network then performs "call admission" to confirm that the requested resources are available and that a route exists for the connection.

Reference model

ATM specifies the following three layers:

  • ATM Adaptation Layer (AAL)
  • ATM layer, which corresponds to the OSI data link layer
  • Physical layer, equivalent to the OSI physical layer

Deployment

ATM became popular with telephone companies and many computer manufacturers in the 1990s. However, even by the end of that decade, the better price/performance of Internet Protocol-based products was competing with ATM technology for integrating real-time and bursty network traffic. Companies such as FORE Systems focused on ATM products, while other large vendors such as Cisco Systems provided ATM as an option. After the burst of the dot-com bubble, some still predicted that "ATM is going to dominate". However, in 2005 the ATM Forum, which had been the trade organization promoting the technology, merged with groups promoting other technologies and eventually became the Broadband Forum.

Wireless ATM or Mobile ATM

Wireless ATM, or mobile ATM, consists of an ATM core network with a wireless access network. ATM cells are transmitted from base stations to mobile terminals. Mobility functions are performed at an ATM switch in the core network, known as a "crossover switch", which is similar to the MSC (mobile switching center) of GSM networks. The advantage of wireless ATM is its high bandwidth and high data-transfer rate at layer 2. In the early 1990s, Bell Labs and NEC were active in this field. Andy Hopper of the Cambridge University Computer Laboratory also worked in this area. A Wireless ATM Forum was formed to standardize the technology behind wireless ATM networks. The forum was supported by several telecommunications companies, including NEC, Fujitsu and AT&T. The aim of mobile ATM was to provide high-speed multimedia communications technology capable of delivering mobile broadband communications beyond those of GSM and WLANs.
