Ethernet and Fast Ethernet equipment. Fast Ethernet technology

Fast Ethernet is the IEEE 802.3u specification, officially adopted on October 26, 1995. It defines the link layer protocol standard for networks operating over both copper and fiber optic cable at 100 Mbps. The new specification is a successor to the IEEE 802.3 Ethernet standard and uses the same frame format, the same CSMA/CD media access mechanism, and the same star topology. The evolution touched several elements of the physical layer configuration that affect throughput: the types of cable used, the length of segments, and the number of hubs.

Physical layer

The Fast Ethernet standard defines three types of 100 Mbps Ethernet signaling media.

· 100Base-TX - two twisted pairs of wires. Transmission follows the ANSI (American National Standards Institute) standard for data transmission over twisted-pair physical media. The twisted-pair data cable can be shielded or unshielded. Uses the 4B/5B data encoding algorithm and the MLT-3 physical encoding method.

· 100Base-FX - two strands of fiber optic cable. Transmission likewise follows the ANSI standard for data transmission over fiber optic media. Uses the 4B/5B data encoding algorithm and the NRZI physical encoding method.

· 100Base-T4 is a specification developed by the IEEE 802.3u committee. According to it, data is transmitted over four twisted pairs of telephone cable, known as Category 3 UTP cable. It uses the 8B/6T data encoding algorithm with ternary line signaling.

Multimode cable

This type of fiber optic cable uses a fiber with a core diameter of 50 or 62.5 micrometers and a cladding diameter of 125 micrometers. Such a cable is called multimode optical cable with 50/125 (62.5/125) micrometer fiber. To transmit a light signal over multimode cable, an LED transceiver with a wavelength of 850 (820) nanometers is used. If a multimode cable connects two switch ports operating in full duplex mode, it can be up to 2000 meters long.

Single mode cable

Single-mode fiber optic cable has a smaller core diameter, about 10 micrometers, than multimode fiber, and a laser transceiver is used for transmission over single-mode cable; together this ensures efficient transmission over long distances. The wavelength of the transmitted light signal, 1300 nanometers, is close to the core diameter; this wavelength is known as the zero-dispersion wavelength. In single-mode cable, dispersion and signal loss are very small, which allows light signals to be transmitted over longer distances than with multimode fiber.


38. Gigabit Ethernet technology, general characteristics, specification of the physical environment, basic concepts.
3.7.1. General characteristics of the standard

Quite soon after Fast Ethernet products appeared on the market, network integrators and administrators felt certain limitations when building corporate networks. In many cases, servers connected over 100 Mbps links overloaded network backbones that also ran at 100 Mbps - FDDI and Fast Ethernet backbones. There was a need for the next level in the hierarchy of speeds. In 1995 only ATM switches could provide a higher speed, but in the absence at that time of convenient means of migrating this technology to local networks (although the LAN Emulation - LANE specification was adopted in early 1995, its practical implementation lay ahead), almost no one dared to introduce them into a local network. In addition, ATM technology was very expensive.

Therefore, the next step taken by the IEEE seemed logical: 5 months after the final adoption of the Fast Ethernet standard in June 1995, the IEEE High-Speed Research Group was instructed to consider the possibility of developing an Ethernet standard with an even higher bit rate.

In the summer of 1996, the creation of the 802.3z group was announced, tasked with developing a protocol as close as possible to Ethernet but with a bit rate of 1000 Mbps. As with Fast Ethernet, the news was received with great enthusiasm by Ethernet proponents.



The main reason for the enthusiasm was the prospect of the same smooth transition of network backbones to Gigabit Ethernet, similar to how overloaded Ethernet segments at the lower levels of the network hierarchy had been moved to Fast Ethernet. In addition, experience in transmitting data at gigabit speeds already existed, both in wide area networks (SDH technology) and in local networks - Fiber Channel technology, which is mainly used to connect high-speed peripherals to large computers and transmits data over fiber optic cable at close to gigabit speed using the 8B/10B redundant code.

The first version of the standard was considered in January 1997, and the final 802.3z standard was adopted on June 29, 1998 at a meeting of the IEEE 802.3 committee. Work on the implementation of Gigabit Ethernet over twisted-pair category 5 was transferred to a special committee 802.3ab, which has already considered several options for the draft of this standard, and since July 1998 the project has become quite stable. Final adoption of the 802.3ab standard is expected in September 1999.

Without waiting for the adoption of the standard, some companies released the first Gigabit Ethernet equipment on fiber optic cable by the summer of 1997.

The main idea of the developers of the Gigabit Ethernet standard is to preserve the ideas of classic Ethernet technology as much as possible while achieving a bit rate of 1000 Mbps.

Although some technical innovations following the general direction of network technology development are natural to expect when a new technology is created, it is important to note that Gigabit Ethernet, like its slower counterparts, does not support at the protocol level:

  • quality of service;
  • redundant links;
  • testing the operability of nodes and equipment (in the latter case, except for port-to-port communication tests, as is done in Ethernet 10Base-T and 10Base-F and in Fast Ethernet).

All three named properties are considered very promising and useful in modern networks, and especially in networks of the near future. Why do the authors of Gigabit Ethernet refuse them?

The main idea of the developers of Gigabit Ethernet technology is that there are and will be many networks in which the high speed of the backbone and the ability to assign priorities to packets in switches will be quite sufficient to ensure the quality of transport service for all network clients. Only in those rare cases when the backbone is heavily loaded and the quality of service requirements are very strict is it necessary to use ATM technology, which, thanks to its high technical complexity, really does guarantee quality of service for all major types of traffic.


39. Structured cabling systems used in network technologies.
A Structured Cabling System (SCS) is a set of switching elements (cables, connectors, patch panels, and cabinets), together with a methodology for using them jointly, which makes it possible to create regular, easily expandable communication structures in computer networks.

A structured cabling system is a kind of "constructor" with whose help the network designer builds the configuration he needs from standard cables connected by standard connectors and switched on standard patch panels. If necessary, the connection configuration can easily be changed: a computer, segment, or switch can be added, unneeded equipment removed, and connections between computers and hubs changed.

When building a structured cabling system, it is assumed that every workplace in the enterprise must be equipped with sockets for connecting a phone and a computer, even if this is not currently required. That is, a good structured cabling system is built with redundancy. This can save money in the future, since new devices can be connected by re-patching cables that are already laid.

A typical hierarchical structure of a structured cabling system includes:

  • horizontal subsystems (within the floor);
  • vertical subsystems (inside the building);
  • campus subsystem (within the same territory with several buildings).

The horizontal subsystem connects the floor's cross-connect cabinet to the user sockets; subsystems of this type correspond to the floors of the building. The vertical subsystem connects the cross-connect cabinets of each floor with the central equipment room of the building. The next step in the hierarchy is the campus subsystem, which connects several buildings to the main equipment room of the entire campus. This part of the cable system is commonly referred to as the backbone.

Using a structured cabling system instead of chaotic cabling offers many benefits to a business.

· Versatility. A structured cabling system, with a well-thought-out organization, can become a single environment for transmitting computer data in a local area network, organizing a local telephone network, transmission of video information and even transmission of signals from fire safety sensors or security systems. This allows you to automate many processes of control, monitoring and management of business services and life support systems of the enterprise.

· Extended service life. The obsolescence period of a well-structured cabling system can be 10-15 years.

· Reduced cost of adding new users and relocating them. It is known that the cost of a cable system is significant and is determined mainly not by the cost of the cable but by the cost of laying it. Therefore it is more advantageous to lay the cable once, possibly with a large margin in length, than to lay it several times as the network grows. With this approach, all work on adding or moving a user comes down to connecting the computer to an existing outlet.

· Possibility of easy network expansion. The Structured Cabling System is modular and therefore easy to expand. For example, you can add a new subnet to a backbone without affecting existing subnets. You can replace a single subnet cable type independently of the rest of the network. A structured cabling system is the basis for dividing a network into manageable logical segments, since it is itself already divided into physical segments.

· Providing more efficient service. A structured cabling system makes maintenance and troubleshooting easier than with a bus cabling system. In the bus organization of the cable system, the failure of one of the devices or connecting elements leads to a hard-to-localize failure of the entire network. In structured cabling systems, the failure of one segment does not affect the others, since the segments are interconnected using hubs. Hubs diagnose and localize the faulty section.

· Reliability. A structured cabling system has increased reliability, since the manufacturer of such a system guarantees not only the quality of its individual components, but also their compatibility.


40. Hubs and network adapters, principles, use, basic concepts.
Hubs, together with network adapters and the cable system, represent the minimum equipment with which a local network can be created. Such a network forms a single shared medium.

A network adapter (Network Interface Card, NIC), together with its driver, implements the second, data link layer of the open systems model in the end node of the network - the computer. More precisely, in a network operating system the adapter/driver pair performs only the functions of the physical and MAC layers, while the LLC layer is usually implemented by an operating system module common to all drivers and network adapters. This is how it should be according to the IEEE 802 protocol stack model. For example, in Windows NT the LLC layer is implemented in the NDIS module, which is common to all network adapter drivers regardless of which technology the driver supports.

The network adapter, together with the driver, performs two operations: transmitting and receiving a frame.

In adapters for client computers, a significant part of the work is shifted to the driver, which makes the adapter simpler and cheaper. The disadvantage of this approach is the high load on the computer's central processor, which performs the routine work of transferring frames from the computer's RAM to the network instead of executing user application tasks.

The network adapter must be configured before being installed on the computer. When configuring an adapter, you typically specify the IRQ number the adapter is using, the DMA channel number (if the adapter supports DMA mode), and the base address of the I/O ports.

Almost all modern local network technologies define a device that has several equivalent names - concentrator, hub, repeater. Depending on the field of application of this device, the composition of its functions and its design change significantly. Only the main function remains unchanged - repeating frames either on all ports (as defined in the Ethernet standard) or only on certain ports, according to the algorithm defined by the corresponding standard.

A hub usually has several ports to which the end nodes of the network - computers - are connected using separate physical cable segments. The hub combines individual physical network segments into a single shared medium, access to which follows one of the local network protocols considered - Ethernet, Token Ring, and so on. Since the logic of access to a shared medium depends significantly on the technology, separate hubs are produced for each type of technology - Ethernet, Token Ring, FDDI, and 100VG-AnyLAN. For a particular protocol a specialized name for this device is sometimes used that reflects its functions more precisely or is used by tradition; for example, the name MSAU is typical for Token Ring concentrators.

Each hub performs some basic function defined in the corresponding protocol of the technology it supports. Although this feature is defined in some detail in the technology standard, when implementing it, hubs from different manufacturers may differ in details such as the number of ports, support for multiple cable types, and so on.

In addition to the main function, the hub can perform a number of additional functions that are either not defined in the standard at all or are optional. For example, a Token Ring hub can perform the function of shutting down misbehaving ports and switching to a backup ring, although such capabilities are not described in the standard. The hub proved to be a convenient device for performing additional functions that make it easier to control and operate the network.


41. Use of bridges and switches, principles, features, examples, limitations
Structuring with Bridges and Switches

The network can be divided into logical segments using two types of devices - bridges and/or switches (switching hubs).

A bridge and a switch are functional twins. Both of these devices forward frames based on the same algorithms. Bridges and switches use two types of algorithms: the transparent bridge algorithm, described in the IEEE 802.1D standard, or the source routing bridge algorithm from IBM for Token Ring networks. These standards were developed long before the first switch appeared, which is why they use the term "bridge". When the first industrial switch model for Ethernet technology appeared, it performed the same IEEE 802.1D frame forwarding algorithm that had been refined over the years by LAN and WAN bridges.

The main difference between a switch and a bridge is that a bridge processes frames sequentially, while a switch processes frames in parallel. This is because bridges appeared at a time when the network was divided into a small number of segments and inter-segment traffic was small (it obeyed the 80/20 rule).

Today, bridges are still used, but only on rather slow wide area links between two remote local networks. Such bridges are called remote bridges, and their operation algorithm is no different from that of the 802.1D or Source Routing standards.

Transparent bridges can, in addition to transmitting frames within the same technology, translate local network protocols, such as Ethernet to Token Ring, FDDI to Ethernet, etc. This property of transparent bridges is described in the IEEE 802.1H standard.

In what follows we will use the modern term "switch" for a device that forwards frames according to the bridge algorithm and works in a local network. When describing the 802.1D and Source Routing algorithms themselves in the next section, we will traditionally call the device a bridge, as it is actually called in those standards.


42. Switches for local networks, protocols, operating modes, examples.
Consider a switch in which each of the eight 10Base-T ports is served by its own Ethernet Packet Processor (EPP). In addition, the switch has a system module that coordinates the operation of all EPP processors. The system module maintains the common address table of the switch and provides switch management via the SNMP protocol. Frames are passed between ports through a switching matrix similar to those found in telephone exchanges or multiprocessor computers, connecting multiple processors to multiple memory modules.

The switching matrix works on the principle of circuit switching. For 8 ports, the matrix can provide 8 simultaneous internal channels when the ports operate in half-duplex mode and 16 in full-duplex mode, when the transmitter and receiver of each port operate independently of each other.

When a frame arrives at a port, the EPP processor buffers the first few bytes of the frame in order to read the destination address. After the destination address has been received, the processor immediately decides where to transfer the frame, without waiting for the remaining bytes of the frame to arrive.

If the frame needs to be transferred to another port, the processor turns to the switching matrix and tries to establish a path in it connecting its port with the port through which the route to the destination address lies. The switching matrix can do this only if the destination port is free at that moment, that is, not connected to another port. If the port is busy, then, as in any circuit-switched device, the matrix refuses the connection. In that case the frame is fully buffered by the input port processor, which then waits for the output port to become free and for the switching matrix to form the required path. Once the path is established, the buffered frame bytes are sent through it and received by the output port processor. As soon as the output port processor gains access to the attached Ethernet segment using the CSMA/CD algorithm, the frame bytes immediately begin to be transmitted into the network. This method of frame transmission without full buffering is called "on-the-fly" or "cut-through" switching.

The main reason for the improvement in network performance when a switch is used is the parallel processing of several frames. This effect is illustrated in Fig. 4.26. The figure shows an ideal situation in terms of performance: four of the eight ports transmit data at the maximum Ethernet speed of 10 Mbps, and they transmit this data to the remaining four ports of the switch without conflicts - the data flows between network nodes are distributed so that each port receiving frames has its own output port. If the switch manages to process the incoming traffic even at the maximum rate of frame arrival at the input ports, then the overall throughput of the switch in this example is 4 x 10 = 40 Mbps, and, generalizing the example to N ports, (N/2) x 10 Mbps. The switch is said to provide each station or segment connected to its ports with dedicated protocol bandwidth.

If two stations, for example the stations connected to ports 3 and 4, need to write data at the same time to the same server connected to port 8, the switch cannot allocate a 10 Mbps data stream to each station, since port 8 cannot transmit data at 20 Mbps. The stations' frames will wait in the internal queues of input ports 3 and 4 until port 8 becomes free to send the next frame. A good solution for such a distribution of data flows would obviously be to connect the server to a higher-speed port, such as Fast Ethernet. (Switches that can handle incoming traffic at the maximum rate on all ports are called non-blocking switch models.)
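To make the arithmetic above concrete, here is a small illustrative calculation (a sketch, not part of the original text) of the best-case aggregate throughput under the stated assumption that half of the ports send to the other half without conflicts:

```python
# Illustrative best-case throughput of a non-blocking switch: N/2 simultaneous,
# non-conflicting port-to-port streams, each at the full port speed.

def aggregate_throughput_mbps(n_ports: int, port_speed_mbps: float = 10.0) -> float:
    return (n_ports // 2) * port_speed_mbps

if __name__ == "__main__":
    print(aggregate_throughput_mbps(8))    # 4 x 10 = 40 Mbps, as in the example above
    print(aggregate_throughput_mbps(24))   # 12 x 10 = 120 Mbps
```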


43. The algorithm of the transparent bridge.
Transparent bridges are invisible to the network adapters of end nodes, since they build a special address table on their own, on the basis of which they decide whether an incoming frame should be forwarded to another segment or not. When transparent bridges are used, network adapters operate exactly as they would in their absence, that is, they take no additional action for a frame to pass through the bridge. The transparent bridge algorithm is independent of the LAN technology in which the bridge is installed, so transparent bridges for Ethernet work exactly like transparent bridges for FDDI.

A transparent bridge builds its address table based on passive observation of the traffic circulating in the segments connected to its ports. In this case, the bridge takes into account the addresses of the sources of data frames arriving at the ports of the bridge. Based on the frame source address, the bridge concludes that this node belongs to one or another network segment.

Consider the process of automatically building the bridge's address table and its use, using the example of the simple network shown in Fig. 4.18.

Fig. 4.18. The principle of operation of a transparent bridge

A bridge connects two logical segments. Segment 1 consists of computers connected with one piece of coaxial cable to port 1 of the bridge, and segment 2 consists of computers connected with another piece of coaxial cable to port 2 of the bridge.

Each bridge port acts as the end node of its segment, with one exception: the bridge port does not have its own MAC address. The bridge port operates in so-called promiscuous packet capture mode, in which all packets arriving at the port are stored in buffer memory. In this mode the bridge monitors all traffic transmitted on the segments attached to it and uses the packets passing through it to learn the composition of the network. Since every packet is buffered, the bridge does not need a port address.

Initially, the bridge knows nothing about which computers with which MAC addresses are connected to each of its ports. Therefore the bridge simply forwards any captured and buffered frame to all its ports except the one from which the frame was received. In our example the bridge has only two ports, so it sends frames from port 1 to port 2 and vice versa. When the bridge is about to transfer a frame from segment to segment, for example from segment 1 to segment 2, it again tries to access segment 2 as an end node according to the rules of the access algorithm, in this example the CSMA/CD algorithm.

Simultaneously with transmitting the frame to all ports, the bridge learns the source address of the frame and makes a new entry about its port membership in its address table, which is also called the filtering or forwarding table.

Once the bridge has passed the learning stage, it can work more efficiently. When it receives a frame sent, for example, from computer 1 to computer 3, it scans the address table for an entry matching destination address 3. Since such an entry exists, the bridge performs the second stage of table analysis: it checks whether the computers with the source address (in our case address 1) and the destination address (address 3) are in the same segment. Since in our example they are in different segments, the bridge performs the forwarding operation - it transmits the frame to the other port, having first gained access to the other segment.

If the destination address is unknown, the bridge transmits the frame to all its ports except the source port, as in the initial stage of the learning process.
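The learning and forwarding logic described above can be summarized in a short sketch; the class and method names below are invented for illustration, and the sketch simplifies the real 802.1D procedure (no aging of table entries, no spanning tree):

```python
# A minimal sketch of the transparent bridge logic: learn the source address,
# then filter, forward, or flood depending on the address table.

class TransparentBridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2]
        self.table = {}             # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learning: remember which segment the source address lives on.
        self.table[src_mac] = in_port

        out_port = self.table.get(dst_mac)
        if out_port is None:
            # Destination unknown: flood to all ports except the source port.
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:
            # Source and destination are on the same segment: filter (drop).
            return []
        # Known destination on another segment: forward to that port only.
        return [out_port]

bridge = TransparentBridge(ports=[1, 2])
print(bridge.handle_frame("00:aa:00:00:00:01", "00:aa:00:00:00:03", in_port=1))  # flood -> [2]
bridge.handle_frame("00:aa:00:00:00:03", "00:aa:00:00:00:01", in_port=2)          # learns address 3
print(bridge.handle_frame("00:aa:00:00:00:01", "00:aa:00:00:00:03", in_port=1))  # forward -> [2]
```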


44. Bridges with source routing.
Source routing bridges are used to connect Token Ring and FDDI rings, although transparent bridges can also be used for the same purpose. Source routing (SR) is based on the sending station placing in the frame sent to another ring all the address information about the intermediate bridges and rings the frame must pass through before reaching the ring to which the destination station is connected.

Consider the principles of operation of source routing bridges (hereinafter SR bridges) using the example of the network shown in Fig. 4.21. The network consists of three rings connected by three bridges. Rings and bridges have identifiers that are used to specify routes. SR bridges do not build an address table; when forwarding frames they use the information contained in the corresponding fields of the data frame.

Fig. 4.21. Source routing bridges

Upon receiving a packet, the SR bridge only needs to look at the routing information field (the Routing Information Field, RIF, of a Token Ring or FDDI frame) to see whether it contains the bridge's identifier. If it is present there, accompanied by the identifier of a ring connected to this bridge, the bridge copies the incoming frame onto the specified ring; otherwise the frame is not copied to another ring. In either case, the original copy of the frame returns along the original ring to the sending station, and if the frame was transmitted to another ring, bit A (address recognized) and bit C (frame copied) of the frame status field are set to 1 to tell the sending station that the frame was received by the destination station (in this case, passed by the bridge to another ring).

Since routing information in a frame is needed only for frame transmission between stations connected to different rings, the presence of the RIF field is indicated by setting the individual/group address (I/G) bit of the source address to 1 (the bit is not needed for its usual purpose there, since a source address is always individual).

The RIF field has a control subfield consisting of three parts.

  • The frame type defines the type of the RIF field. Different types of RIF fields are used to find a route and to send a frame along an already known route.
  • The maximum frame length field is used by a bridge connecting rings with different MTU values. With this field the bridge notifies the station of the maximum possible frame length (that is, the minimum MTU value along the whole route).
  • The RIF field length is needed because the number of route descriptors specifying the identifiers of the rings and bridges crossed is not known in advance.

For the source routing algorithm to work, two additional frame types are used - the single-route broadcast explorer frame, SRBF (single-route broadcast frame), and the all-routes broadcast explorer frame, ARBF (all-route broadcast frame).

All SR bridges must be manually configured by the administrator to forward ARBF frames on all ports except the source port of the frame, and for SRBF frames, some bridge ports must be blocked to prevent loops in the network.

Advantages and Disadvantages of Source-Routed Bridges

45. Switches: technical implementation, functions, characteristics that affect their operation.
Features of the technical implementation of switches. Many first-generation switches were similar to routers: they were based on a general-purpose central processing unit connected to the interface ports via an internal high-speed bus. The main disadvantage of such switches was their low speed: a general-purpose processor could not cope with the large volume of specialized operations needed to forward frames between interface modules. In addition to the processor chips, for successful non-blocking operation the switch also needs a fast node that passes frames between the port processor chips. Currently switches use one of three schemes as the basis for this exchange node:

  • switching matrix;
  • shared multi-input memory;
  • common bus.

Fast Ethernet

Fast Ethernet is the IEEE 802.3u specification, officially adopted on October 26, 1995. It defines the link layer protocol standard for networks operating over both copper and fiber optic cable at 100 Mbps. The new specification is a successor to the IEEE 802.3 Ethernet standard and uses the same frame format, the same CSMA/CD media access mechanism, and the same star topology. The evolution touched several elements of the physical layer configuration that affect throughput: the types of cable used, the length of segments, and the number of hubs.

Fast Ethernet structure

To better understand the operation and interaction of the Fast Ethernet elements, let us turn to Figure 1.

Figure 1. Fast Ethernet system

Logical Link Control (LLC) sublayer

In the IEEE 802.3u specification the functions of the data link layer are divided into two sublayers: logical link control (LLC) and media access control (MAC), which will be discussed below. The LLC, whose functions are defined by the IEEE 802.2 standard, provides the interface to higher layer protocols (for example, IP or IPX) and offers several communication services:

  • Unacknowledged connectionless service. A simple service that provides no data flow control or error control and does not guarantee correct delivery of data.
  • Connection-oriented service. A fully reliable service that guarantees correct delivery of data by establishing a connection with the receiving system before data transmission begins and by using error control and data flow control mechanisms.
  • Acknowledged connectionless service. A service of medium complexity that uses acknowledgment messages to guarantee delivery but does not establish a connection before data is transmitted.

On the transmitting system, the data passed down from the network layer protocol is first encapsulated by the LLC sublayer. The standard calls the resulting unit a protocol data unit (PDU). When the PDU is passed down to the MAC sublayer, where it is again framed with a header and trailer, it can technically be called a frame from that point on. For an Ethernet packet this means that an 802.3 frame contains a three-byte LLC header in addition to the network layer data. Thus the maximum allowable data length in each packet is reduced from 1500 to 1497 bytes.

The LLC header consists of three fields: a destination service access point (DSAP), a source service access point (SSAP), and a control field.

In some cases LLC frames play a minor role in network communications. For example, in a network using TCP/IP alongside other protocols, the only function of LLC may be to allow 802.3 frames to carry a SNAP header which, like the Ethertype field, indicates the network layer protocol to which the frame should be delivered. In this case all LLC PDUs use the unnumbered information format. However, other higher-level protocols require a more advanced service from LLC. For example, NetBIOS sessions and several NetWare protocols make wider use of connection-oriented LLC services.

SNAP header

The receiving system needs to determine which of the network layer protocols should receive the incoming data. For this, 802.3 packets within the LLC PDU use another protocol called the Sub-Network Access Protocol (SNAP).

The SNAP header is 5 bytes long and is located immediately after the LLC header in the data field of an 802.3 frame, as shown in the figure. The header contains two fields.

Organization code. The organization or manufacturer identifier is a 3-byte field that takes the same value as the first 3 bytes of the sender's MAC address in the 802.3 header.

Local code. The local code is a 2-byte field that is functionally equivalent to the Ethertype field in the Ethernet II header.
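As an illustration of the header layout just described, the following sketch assembles the 3-byte LLC header and the 5-byte SNAP header; the specific field values (DSAP/SSAP of 0xAA and control 0x03 for SNAP encapsulation, EtherType 0x0800 for IP) are the commonly used ones and are given here only as an example:

```python
# A hedged sketch of the LLC (3 bytes) and SNAP (5 bytes) headers.
import struct

def build_llc_snap(oui: bytes, ethertype: int) -> bytes:
    dsap, ssap, control = 0xAA, 0xAA, 0x03          # SNAP encapsulation values
    llc = struct.pack("!BBB", dsap, ssap, control)  # 3-byte LLC header
    snap = oui + struct.pack("!H", ethertype)       # 3-byte OUI + 2-byte local code
    return llc + snap                               # 8 bytes in total

header = build_llc_snap(oui=b"\x00\x00\x00", ethertype=0x0800)
print(len(header), header.hex())   # -> 8 aaaa030000000800
```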

Reconciliation sublayer

As stated earlier, Fast Ethernet is an evolutionary standard. The MAC, designed for the AUI interface, had to be adapted to the MII interface used in Fast Ethernet; that is what this sublayer is for.

Media Access Control (MAC)

Each node in a Fast Ethernet network has a media access controller (Media Access Controller, MAC). The MAC is of key importance in Fast Ethernet and has three purposes:

The most important of the three MAC purposes is the first one. For any network technology that uses a shared medium, the media access rules that determine when a node may transmit are its main characteristic. Several IEEE committees are involved in developing media access rules. The 802.3 committee, often called the Ethernet committee, defines standards for LANs that use rules known as CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

CSMA/CD defines the media access rules for both Ethernet and Fast Ethernet. It is in this area that the two technologies coincide completely.

Since all nodes in Fast Ethernet share the same medium, they can only transmit when it is their turn. This queue is defined by CSMA/CD rules.

CSMA/CD

The Fast Ethernet MAC controller listens for a carrier before transmitting. A carrier exists only while another node is transmitting. The PHY layer detects the presence of a carrier and generates a message for the MAC. The presence of a carrier indicates that the medium is busy and the listening node (or nodes) must yield to the transmitting one.

A MAC that has a frame to transmit must wait a certain minimum time after the end of the previous frame before transmitting it. This time is called the interpacket gap (IPG) and lasts 0.96 microseconds - one tenth of the interpacket gap of ordinary 10 Mbps Ethernet (the IPG is the only time interval that is always defined in microseconds rather than in bit times); see Figure 2.


Figure 2. Interpacket gap

After the end of packet 1, all LAN nodes must wait the IPG time before they may transmit. The intervals between packets 1 and 2 and between packets 2 and 3 in Figure 2 are the IPG time. After packet 3 finished transmitting, no node had data to send, so the interval between packets 3 and 4 is longer than the IPG.

All hosts on the network must abide by these rules. Even if a node has many frames to transmit and the node is the only one transmitting, it must wait at least the IPG time after each packet has been forwarded.

This is part of the CSMA rules for accessing the Fast Ethernet medium. In short, many nodes have access to the medium and use the carrier to control its busyness.

Early experimental networks applied exactly these rules, and such networks worked very well. However, using only CSMA created a problem. Often two nodes, each having a packet to transmit and each having waited the IPG time, began transmitting simultaneously, which corrupted the data on both sides. Such a situation is called a collision (or conflict).

To overcome this obstacle, early protocols used a fairly simple mechanism. Packets were divided into two categories: commands and responses. Each command sent by a node required a response. If no response was received within a certain time (the timeout period) after the command was transmitted, the original command was issued again. This could happen several times (up to the timeout limit) before the transmitting node registered the error.

This scheme could work fine, but only up to a certain point. The occurrence of collisions led to a sharp drop in performance (usually measured in bytes per second), because nodes often sat idle waiting for responses to commands that never reached their destination. Network congestion and growth in the number of nodes are directly related to an increase in the number of collisions and, consequently, to a decrease in network performance.

Early network designers quickly found a solution to this problem: each node must detect the loss of a transmitted packet by detecting the collision itself (rather than waiting for a response that never follows). This means that packets lost to a collision are retransmitted immediately, without waiting for the timeout to expire. If a node has transmitted the last bit of a packet without a collision, the packet has been transmitted successfully.

The carrier sense method combines well with the collision detection function. Collisions still occur, but this does not affect network performance, since the nodes quickly get rid of them. The DIX group, having developed the CSMA/CD medium access rules for Ethernet, formalized them as a simple algorithm - Figure 3.


Figure 3. CSMA/CD algorithm
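The following sketch restates the CSMA/CD procedure of Figure 3 in code form; the medium object and its methods are hypothetical, and the timing details (slot time, jam signal, backoff limits) are simplified:

```python
# A simplified sketch of CSMA/CD: carrier sense, interpacket gap, transmit,
# and truncated binary exponential backoff after a collision.
import random
import time

IPG_SECONDS = 0.96e-6        # interpacket gap for 100 Mbps Fast Ethernet
MAX_ATTEMPTS = 16            # give up after too many collisions

def csma_cd_send(medium, frame):
    for attempt in range(MAX_ATTEMPTS):
        while medium.carrier_present():      # 1. carrier sense: wait until the medium is idle
            pass
        time.sleep(IPG_SECONDS)              # 2. wait the interpacket gap
        if medium.transmit(frame):           # 3. transmit; True means no collision occurred
            return True
        medium.send_jam()                    # 4. collision: reinforce it with a jam signal
        k = min(attempt + 1, 10)             # 5. truncated binary exponential backoff
        delay_slots = random.randint(0, 2 ** k - 1)
        time.sleep(delay_slots * 512 / 100e6)  # slot time = 512 bit times at 100 Mbps
    return False                             # too many collisions: report an error
```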

Physical layer device (PHY)

Because Fast Ethernet can use different types of cable, each medium requires its own signal conversion. Conversion is also required for efficient data transmission: to make the transmitted code resistant to interference and to the possible loss or distortion of its individual elements (bauds), and to ensure efficient synchronization of the clock generators on the transmitting and receiving sides.

Physical Coding Sublayer (PCS)

Encodes/decodes data coming from/to the MAC layer using the 4B/5B or 8B/6T algorithms.
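As an illustration, the sketch below encodes bytes using what is commonly published as the standard 4B/5B data table; the nibble transmission order is simplified and should not be taken as the exact order defined by the standard:

```python
# A sketch of 4B/5B encoding at the PCS sublayer: each data nibble becomes a
# 5-bit code group chosen so the line never carries long runs of zeros.

FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    """Encode each byte as two 5-bit code groups (low nibble first here,
    which is a simplification of the real transmission order)."""
    bits = []
    for byte in data:
        bits.append(FOUR_B_FIVE_B[byte & 0x0F])
        bits.append(FOUR_B_FIVE_B[byte >> 4])
    return "".join(bits)

print(encode_4b5b(b"\x5A"))   # 25% overhead: 8 data bits become 10 line bits
```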

Physical medium attachment and physical medium dependent sublayers (PMA and PMD)

The PMA and PMD sublayers provide the connection between the PCS sublayer and the MDI interface, forming the signal in accordance with the physical encoding method used: MLT-3 or NRZI.

Auto Negotiation Sublayer (AUTONEG)

The auto-negotiation sublayer allows two communicating ports to automatically select the most efficient mode of operation: full duplex or half duplex, 10 or 100 Mbps.

Physical layer

The Fast Ethernet standard defines three types of 100 Mbps Ethernet signaling media.

  • 100Base-TX - two twisted pairs of wires. Transmission follows the ANSI (American National Standards Institute) standard for data transmission over twisted-pair physical media. The twisted-pair data cable can be shielded or unshielded. Uses the 4B/5B data encoding algorithm and the MLT-3 physical encoding method.
  • 100Base-FX - two strands of fiber optic cable. Transmission likewise follows the ANSI standard for data transmission over fiber optic media. Uses the 4B/5B data encoding algorithm and the NRZI physical encoding method.

The 100Base-TX and 100Base-FX specifications are also known as 100Base-X

  • 100Base-T4 is a specification developed by the IEEE 802.3u committee. According to it, data is transmitted over four twisted pairs of telephone cable, known as Category 3 UTP cable. It uses the 8B/6T data encoding algorithm with ternary line signaling.

Additionally, the Fast Ethernet standard includes recommendations for using Type 1 shielded twisted pair cable - the cable traditionally used in Token Ring networks. Support for STP cabling, and guidelines for using it, give customers with existing STP wiring a path to Fast Ethernet.

The Fast Ethernet specification also includes an auto-negotiation mechanism that allows a host port to automatically adjust to a data rate of 10 or 100 Mbps. This mechanism is based on the exchange of a number of packets with the port of the hub or switch.

Medium 100Base-TX

The 100Base-TX transmission medium uses two twisted pairs, one pair for transmitting data and the other for receiving it. Since the ANSI TP-PMD specification describes both shielded and unshielded twisted-pair cables, the 100Base-TX specification supports both unshielded twisted pair and Type 1 and 7 shielded twisted pair.

MDI (Medium Dependent Interface) connector

The 100Base-TX medium dependent interface can be one of two types. For unshielded twisted pair cable, the MDI connector is an eight-pin Category 5 RJ-45 connector. The same connector is used in 10Base-T, providing backward compatibility with existing Category 5 cabling. For shielded twisted pair cable, the MDI connector is the IBM Type 1 STP connector, a shielded DB9 connector usually used in Token Ring networks.

Category 5(e) UTP cable

The UTP 100Base-TX media interface uses two pairs of wires. To minimize crosstalk and possible signal distortion, the remaining four wires should not be used to carry any signals. The transmit and receive signals of each pair are polarized: one wire carries the positive (+) signal and the other the negative (-) signal. The colour coding of the cable wires and the connector pin numbers for a 100Base-TX network are given in Table 1. Although the 100Base-TX PHY layer was developed after the adoption of the ANSI TP-PMD standard, the RJ-45 pin numbers were changed to match the wiring scheme already used in the 10Base-T standard. The ANSI TP-PMD standard uses pins 7 and 9 to receive data, while the 100Base-TX and 10Base-T standards use pins 3 and 6. This wiring allows 100Base-TX adapters to be used in place of 10Base-T adapters and connected to the same Category 5 cables without rewiring. In the RJ-45 connector the pairs of wires used are connected to pins 1, 2 and 3, 6. To connect the wires correctly, follow their colour coding.

Table 1. MDI connector pin assignment for 100Base-TX UTP cable

Pin 1 - TX+ (transmit +)
Pin 2 - TX- (transmit -)
Pin 3 - RX+ (receive +)
Pin 6 - RX- (receive -)
Pins 4, 5, 7, 8 - not used

Nodes communicate with each other by exchanging frames. In Fast Ethernet the frame is the basic unit of exchange over the network: any information transmitted between nodes is placed in the data field of one or more frames. Forwarding frames from one node to another is possible only if there is a way to identify all nodes in the network unambiguously. Therefore every node in a LAN has an address called its MAC address. This address is unique: no two nodes in a local network can have the same MAC address. Moreover, in no LAN technology (with the exception of ARCNet) can two nodes anywhere in the world have the same MAC address. Any frame contains at least three basic pieces of information: the destination address, the source address, and the data. Some frames have other fields as well, but only the three listed are mandatory. Figure 4 shows the Fast Ethernet frame structure.

Figure 4. Fast Ethernet frame structure

  • destination address - the address of the node receiving the data;
  • source address - the address of the node that sent the data;
  • length/type (L/T, Length/Type) - contains the length of the data field or the type of the transmitted data;
  • frame check sequence (FCS) - used to verify that the frame received by the receiving node is correct.

The minimum frame size is 64 octets, or 512 bits (the terms octet and byte are synonyms). The maximum frame size is 1518 octets, or 12144 bits.

Frame addressing

Each node in a Fast Ethernet network has a unique number called its MAC address or node address. This number consists of 48 bits (6 bytes), is assigned to the network interface when the device is manufactured, and is programmed during initialization. Therefore the network interfaces of all LANs, with the exception of ARCNet, which uses 8-bit addresses assigned by the network administrator, have a built-in unique MAC address, different from all other MAC addresses on Earth and assigned by the manufacturer in agreement with the IEEE.

To facilitate the management of network interfaces, the IEEE proposed to divide the 48-bit address field into four parts, as shown in Figure 5. The first two bits of the address (bits 0 and 1) are address type flags. The value of the flags determines how the address part (bits 2 - 47) is interpreted.


Figure 5. MAC address format

The I/G bit is called the individual/group address flag and shows whether the address is individual or group. An individual address is assigned to only one interface (or node) in the network. Addresses in which the I/G bit is set to 0 are MAC addresses or node addresses. If the I/G bit is set to 1, the address is a group address and is usually called a multicast address or functional address. A multicast address may be assigned to one or more LAN network interfaces. Frames sent to a multicast address are received or copied by all LAN network interfaces that own it. Multicast addresses allow a frame to be sent to a subset of the nodes in a local network. If the I/G bit is set to 1, bits 46 through 0 are treated as a multicast address and not as the U/L, OUI, and OUA fields of a regular address. The U/L bit is called the universal/local control flag and determines how the network interface address was assigned. If both the I/G and U/L bits are set to 0, the address is the unique 48-bit identifier described earlier.

OUI (organizationally unique identifier). The IEEE assigns one or more OUIs to each manufacturer of network adapters and interfaces. Each manufacturer is responsible for correctly assigning the OUA (organizationally unique address) that each device it creates must have.

When the U/L bit is set, the address is locally administered. This means it was not set by the manufacturer of the network interface. Any organization can create its own MAC address for a network interface by setting the U/L bit to 1 and bits 2 through 47 to some chosen value. When a network interface receives a frame, it first decodes the destination address. If the I/G bit is set in the address, the MAC layer receives the frame only if the destination address is in a list held by the node. This technique allows one node to send a frame to many nodes.

There is a special multicast address called broadcast address. In a 48-bit IEEE broadcast address, all bits are set to 1. If a frame is sent with a destination broadcast address, then all nodes on the network will receive and process it.
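A small illustrative parser for the address bits just described (the sample addresses are arbitrary):

```python
# Parse the I/G and U/L flags, the OUI, and the broadcast check of a MAC address.

def parse_mac(mac: str) -> dict:
    octets = bytes(int(x, 16) for x in mac.split(":"))
    if len(octets) != 6:
        raise ValueError("MAC address must be 48 bits (6 octets)")
    first = octets[0]
    return {
        "group": bool(first & 0x01),                  # I/G bit: 0 = individual, 1 = group
        "locally_administered": bool(first & 0x02),   # U/L bit
        "broadcast": octets == b"\xff" * 6,           # all 48 bits set to 1
        "oui": octets[:3].hex(":"),                   # manufacturer identifier (OUI)
    }

print(parse_mac("01:00:5e:00:00:fb"))   # a multicast (group) address
print(parse_mac("ff:ff:ff:ff:ff:ff"))   # the broadcast address
```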

Length/Type field

The L/T (Length/Type) field is used for two different purposes:

  • to determine the length of the frame's data field, excluding any padding bytes;
  • to indicate the type of data in the data field.

The value of the L/T field between 0 and 1500 is the length of the frame data field; a higher value indicates the protocol type.

In general the L/T field is a historical legacy of the IEEE standardization of Ethernet, which created a number of compatibility problems with equipment released before 1983. Ethernet and Fast Ethernet hardware itself never uses the L/T field; it serves only for coordination with the software that processes the frames (that is, with the protocols). The only truly standard use of the L/T field is as a length field - the 802.3 specification does not even mention its possible use as a data type field. The standard states: "Frames with a length field value greater than that defined in clause 4.4.2 may be ignored, discarded, or used privately. The use of these frames is outside the scope of this standard."

Summarizing, the L/T field is the primary mechanism for determining the frame type. Fast Ethernet and Ethernet frames in which the L/T value specifies the length (L/T value of at most 1500) are called 802.3 frames; frames in which the same field specifies the data type (L/T value > 1500) are called Ethernet II or DIX frames.

Data field

The data field contains the information that one node sends to another. Unlike other fields, which store very specific information, the data field can contain almost any information, as long as its size is at least 46 and at most 1500 bytes. How the content of the data field is formatted and interpreted is determined by the protocols.

If data shorter than 46 bytes needs to be sent, the LLC layer appends bytes of arbitrary value, called padding (pad data), so that the length of the field becomes 46 bytes.

If the frame is of type 802.3, the L/T field indicates the amount of valid data. For example, if a 12-byte message is sent, the L/T field stores the value 12 and the data field contains 34 additional padding bytes. The addition of padding bytes is initiated by the Fast Ethernet LLC layer and is usually implemented in hardware.

The MAC layer does not set the contents of the L/T field - that is done by software. The field is almost always filled in by the network interface driver.
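The two conventions just described - padding of short data fields and the dual interpretation of the L/T value - can be sketched as follows (the padding byte value of zero is an assumption; the standard only requires that the field be filled):

```python
# Padding of short data fields to 46 bytes and the dual meaning of L/T.

MIN_DATA, MAX_DATA = 46, 1500

def pad_data(data: bytes) -> tuple[int, bytes]:
    """Return the L/T value (real data length) and the padded data field."""
    if len(data) > MAX_DATA:
        raise ValueError("data does not fit into one frame")
    padded = data + b"\x00" * max(0, MIN_DATA - len(data))
    return len(data), padded

def frame_kind(l_t: int) -> str:
    return "802.3 (L/T is a length)" if l_t <= 1500 else "Ethernet II / DIX (L/T is a type)"

length, field = pad_data(b"twelve bytes")    # 12 useful bytes, as in the example above
print(length, len(field))                    # 12 46  -> 34 padding bytes were added
print(frame_kind(length))                    # 802.3 (L/T is a length)
print(frame_kind(0x0800))                    # Ethernet II / DIX (L/T is a type)
```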

Frame checksum

The frame check sequence (FCS) allows the receiver to verify that the received frame is not damaged. When the transmitted frame is formed at the MAC layer, a special mathematical formula, the CRC (Cyclic Redundancy Check), is used to calculate a 32-bit value. The resulting value is placed in the FCS field of the frame. The values of all frame bytes, starting with the first byte of the destination address and ending with the last byte of the data field, are fed to the input of the MAC element that calculates the CRC. The FCS field is the primary and most important error detection mechanism in Fast Ethernet.
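For illustration, the 32-bit CRC used by Ethernet can be computed with the standard zlib module over the covered bytes; the byte ordering of the transmitted FCS is glossed over here:

```python
# A minimal illustration of the FCS calculation: the CRC covers everything from
# the destination address through the end of the data field, but not itself.
import zlib

def compute_fcs(dst: bytes, src: bytes, length_type: bytes, data: bytes) -> int:
    covered = dst + src + length_type + data
    return zlib.crc32(covered) & 0xFFFFFFFF

fcs = compute_fcs(
    dst=bytes.fromhex("ffffffffffff"),
    src=bytes.fromhex("00aa00000001"),
    length_type=(46).to_bytes(2, "big"),
    data=b"twelve bytes".ljust(46, b"\x00"),   # padded data field
)
print(hex(fcs))
```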

DSAP and SSAP field values

DSAP/SSAP value    Description
0x02               Indiv LLC Sublayer Mgt
0x03               Group LLC Sublayer Mgt
0x04               SNA Path Control
0x06               Reserved (DoD IP)
0xFE               ISO CLNS IS 8473

The 8B6T encoding algorithm converts an eight-bit data octet (8B) into a six-digit ternary symbol group (6T). The 6T code groups are designed to be transmitted in parallel over three twisted pairs of the cable, so the effective data rate on each twisted pair is one third of 100 Mbps, that is, 33.33 Mbps. The ternary symbol rate on each twisted pair is 6/8 of 33.33 Mbps, which corresponds to a clock frequency of 25 MHz. This is the frequency at which the MII interface timer operates. Unlike binary signals, which have two levels, the ternary signals transmitted over each pair can have three levels.
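The rate arithmetic from the previous paragraph, spelled out:

```python
# 100Base-T4 rate calculation: three pairs carry data in parallel, and every
# 8 data bits become 6 ternary symbols on the wire.
total_rate = 100.0                          # Mbps
per_pair_data_rate = total_rate / 3         # 33.33... Mbps per pair
symbol_rate = per_pair_data_rate * 6 / 8    # 25.0 Mbaud, matching the 25 MHz clock
print(per_pair_data_rate, symbol_rate)
```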

Character encoding table (columns: Line code, Symbol)

MLT-3 (Multi-Level Transmission - 3) is somewhat similar to the NRZ code, but unlike NRZ it has three signal levels.

A logical one corresponds to a transition from one signal level to another, with the level changing sequentially, taking the previous transition into account. When a logical zero is transmitted, the signal does not change.

This code, like NRZ, requires prior encoding of the data.
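A minimal sketch of the MLT-3 rule just described (levels are represented as -1, 0 and +1; the starting level is assumed to be 0):

```python
# MLT-3: a "1" moves the signal to the next level in the repeating sequence
# 0, +1, 0, -1; a "0" leaves the level unchanged.

def mlt3_encode(bits: str) -> list[int]:
    cycle = [0, +1, 0, -1]           # the order in which levels are visited
    state = 0                        # index into the cycle
    levels = []
    for bit in bits:
        if bit == "1":
            state = (state + 1) % 4  # transition only on a logical one
        levels.append(cycle[state])
    return levels

print(mlt3_encode("1101000111"))
# [1, 0, 0, -1, -1, -1, -1, 0, 1, 0]
```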

Compiled from:

  1. Liam Quinn, Richard Russell, "Fast Ethernet";
  2. K. Zakler "Computer networks";
  3. V.G. and N.A. Olifer "Computer networks";

Ethernet and Fast Ethernet adapters

Adapter Specifications

Ethernet and Fast Ethernet network adapters (NIC, Network Interface Card) can be connected to a computer through one of the standard interfaces:

  • bus ISA (Industry Standard Architecture);
  • PCI bus (Peripheral Component Interconnect);
  • PC Card bus (aka PCMCIA);

Adapters designed for the ISA system bus were until recently the main type of adapter. The number of companies producing such adapters was large, which is why devices of this type were the cheapest. ISA adapters come in 8-bit and 16-bit versions; 8-bit adapters are cheaper, 16-bit adapters are faster. However, exchange over the ISA bus cannot be very fast (at most 16 MB/s, in practice no more than 8 MB/s, and for 8-bit adapters up to 2 MB/s). Therefore Fast Ethernet adapters, which require high exchange rates for efficient operation, are practically not produced for this system bus. The ISA bus is becoming a thing of the past.

The PCI bus has now practically supplanted the ISA bus and is becoming the main expansion bus for computers. It provides 32-bit and 64-bit data exchange and has a high throughput (theoretically up to 264 MB/s), which fully meets the requirements not only of Fast Ethernet but also of the faster Gigabit Ethernet. It is also important that the PCI bus is used not only in IBM PC computers but also in PowerMac computers. In addition, it supports Plug-and-Play automatic hardware configuration. Apparently, in the near future most network adapters will be designed for the PCI bus. The disadvantage of PCI compared to the ISA bus is that the number of expansion slots in a computer is usually small (typically 3), but network adapters are usually among the first devices connected to PCI.

The PC Card bus (formerly called PCMCIA) is so far used only in notebook-class portable computers, in which the internal PCI bus is usually not brought out. The PC Card interface provides a simple way to connect miniature expansion cards to a computer, and the exchange rate with these cards is quite high. However, more and more portable computers come with built-in network adapters, since network access is becoming an integral part of the standard set of functions. These built-in adapters are again connected to the computer's internal PCI bus.

When choosing a network adapter designed for a particular bus, you must first make sure that the computer being connected to the network has free expansion slots for that bus. You should also assess how difficult it is to install the adapter and the prospects for continued production of boards of this type; the latter may matter if the adapter fails.

Finally, one still occasionally encounters network adapters that connect to a computer through the parallel (printer) LPT port. The main advantage of this approach is that the computer case does not have to be opened to connect the adapter. In addition, such adapters do not occupy the computer's system resources, such as interrupt and DMA channels or memory and I/O addresses. However, the speed of information exchange between them and the computer is much lower than when the system bus is used, and they require more processor time to exchange data with the network, thereby slowing down the computer.

Recently there are more and more computers in which the network adapter is built into the system board. The advantages of this approach are obvious: the user does not have to buy a network adapter and install it in the computer - it is enough to plug the network cable into the computer's external connector. The disadvantage is that the user cannot choose an adapter with better characteristics.

Other important characteristics of network adapters include:

  • the adapter configuration method;
  • the size of the buffer memory installed on the board and the modes of exchange with it;
  • the ability to install a read-only memory chip for remote boot (BootROM) on the board;
  • the ability to connect the adapter to different types of transmission media (twisted pair, thin and thick coaxial cable, fiber optic cable);
  • the network transmission speed used by the adapter and whether it can be switched;
  • the adapter's ability to use full-duplex exchange mode;
  • the compatibility of the adapter (more precisely, of the adapter driver) with the network software used.

Configuration of the adapter by the user was used mainly for adapters designed for the ISA bus. Configuration involves setting the computer system resources to be used (I/O addresses, interrupt and direct memory access channels, buffer memory and remote boot memory addresses). Configuration can be carried out by setting switches (jumpers) to the required position or by using the DOS configuration program supplied with the adapter (Jumperless, Software configuration). When such a program is started, the user is prompted to set the hardware configuration using a simple menu of adapter parameters. The same program allows the adapter to be self-tested. The selected parameters are stored in the adapter's non-volatile memory. In any case, when choosing parameters it is necessary to avoid conflicts with the computer's system devices and with other expansion cards.

The adapter can also be configured automatically in Plug-and-Play mode when the computer is powered on. Modern adapters usually support this mode, so they can be easily installed by the user.

In the simplest adapters, exchange with the adapter's internal buffer memory (Adapter RAM) is carried out through the address space of the I/O devices, so no additional memory address configuration is required. If the buffer memory operates in shared memory mode, its base address must be specified; it is assigned to the computer's upper memory area.

The ComputerPress test laboratory tested 10/100 Mbit/s Fast Ethernet network cards for the PCI bus intended for use in workstations. Cards with a bandwidth of 10/100 Mbit/s, currently the most common, were chosen because, firstly, they can be used in Ethernet, Fast Ethernet and mixed networks and, secondly, the promising Gigabit Ethernet technology (throughput up to 1000 Mbps) is still most often used to connect powerful servers to the network equipment of the network core. The quality of the passive network equipment (cables, sockets, etc.) used in the network is extremely important. It is well known that while Category 3 twisted-pair cable is sufficient for Ethernet networks, Category 5 is already required for Fast Ethernet. Signal dispersion and poor noise immunity can significantly reduce network throughput.

The purpose of the testing was to determine, first of all, the effective performance index (Performance/Efficiency Index Ratio, hereinafter the P/E index), and only then the absolute value of throughput. The P/E index is calculated as the ratio of the network card's throughput in Mbps to the CPU utilization in percent. This index is the industry standard for determining the performance of network adapters. It was introduced to take into account the use of CPU resources by network cards: some manufacturers of network adapters try to achieve maximum performance by using more processor cycles to perform network operations. Minimal CPU usage and relatively high throughput are essential for mission-critical business, real-time and multimedia applications.

Cards that are currently most often used for workstations in corporate and local networks were tested:

  1. D-Link DFE-538TX
  2. SMC EtherPower II 10/100 9432TX/MP
  3. 3Com Fast EtherLink XL 3C905B-TX-NM
  4. Compex RL 100ATX
  5. Intel EtherExpress PRO/100+ Management
  6. CNet PRO-120
  7. NetGear FA 310TX
  8. Allied Telesyn AT 2500TX
  9. Surecom EP-320X-R

The main characteristics of the tested network adapters are given in Table 1. Let us explain some of the terms used in the table. Automatic connection speed detection means that the adapter itself determines the maximum possible operating speed. In addition, if auto speed detection is supported, no additional configuration is required when switching from Ethernet to Fast Ethernet and vice versa; that is, the system administrator does not need to reconfigure the adapter or reload drivers.

Support for the Bus Master mode allows you to transfer data directly between the network card and the computer's memory. This frees up the CPU to perform other tasks. This property has become the de facto standard. No wonder all known network cards support the Bus Master mode.

Remote power-on (Wake on LAN) allows the PC to be turned on over the network, making it possible to service the PC during non-working hours. For this purpose, three-pin connectors on the system board and the network adapter are connected with a special cable (included in the delivery). In addition, special control software is required. Wake on LAN technology was developed by the Intel-IBM alliance.

Full duplex mode allows you to transfer data simultaneously in both directions, half duplex - only in one direction. Thus, the maximum possible throughput in full duplex mode is 200 Mbps.

The DMI (Desktop Management Interface) interface allows you to obtain information about the configuration and resources of a PC using network management software.

Support for the WfM (Wired for Management) specification ensures that the network adapter interacts with network management and administration software.

To boot a computer's OS remotely over the network, network adapters are equipped with special BootROM memory. This allows diskless workstations to be used effectively on the network. Most of the tested cards had only a socket for installing a BootROM; the BootROM chip itself is usually a separately ordered option.

Support for ACPI (Advanced Configuration and Power Interface) reduces power consumption. ACPI is a new technology underlying the power management system; it relies on both hardware and software tools. In essence, Wake on LAN is an integral part of ACPI.

Proprietary performance enhancement tools allow you to increase the efficiency of the network card. The most famous of these are 3Com's Parallel Tasking II and Intel's Adaptive Technology. These tools are usually patented.

Support for major operating systems is provided by almost all adapters. The main operating systems include: Windows, Windows NT, NetWare, Linux, SCO UNIX, LAN Manager and others.

The level of service support is assessed by the availability of documentation and driver diskettes and by the ability to download the latest driver versions from the company's website. Packaging also plays an important role. From this point of view, the best, in our opinion, are the D-Link, Allied Telesyn and Surecom network adapters. But overall, the level of support turned out to be satisfactory for all the cards.

Typically, the warranty covers the entire lifetime of the network adapter (lifetime warranty); sometimes it is limited to 1-3 years.

Test Methodology

All tests used the latest network card drivers downloaded from the respective manufacturers' Internet servers. Where a network card driver allowed any settings and optimizations, the default settings were used (except for the Intel network adapter). Note that the cards and corresponding drivers from 3Com and Intel offer the richest sets of additional features and functions.

Performance was measured using Novell's Perform3 utility. The principle of the utility is that a small file is copied from the workstation to a shared network drive on the server, after which it remains in the server's file cache and is repeatedly read from there during a specified period of time. This achieves memory-network-memory interaction and eliminates the impact of delays associated with disk operations. The utility's parameters include the initial file size, the final file size, the resizing step and the test time. The Novell Perform3 utility reports performance values for different file sizes, as well as the average and maximum performance (in KB/s). The following parameters were used to configure the utility:

  • Initial file size - 4095 bytes
  • Final file size - 65 535 bytes
  • File increment - 8192 bytes

The testing time with each file was set to twenty seconds.

Each experiment used a pair of identical network cards, one running in the server and the other in the workstation. This may not seem in line with common practice, since servers typically use specialized network adapters with a number of additional features. But it is exactly this way - with the same network cards installed both in the server and in the workstations - that testing is carried out by all the well-known test laboratories in the world (KeyLabs, Tolly Group, etc.). The results are somewhat lower, but the experiment is clean, since only the network cards under analysis are working in all the computers.

Compaq DeskPro EN Client Configuration:

  • processor Pentium II 450 MHz
  • 512 KB cache
  • RAM 128 MB
  • hard drive 10 GB
  • operating system Microsoft Windows NT Server 4.0 with Service Pack 6a
  • TCP/IP protocol.

Compaq DeskPro EP Server Configuration:

  • Celeron processor 400 MHz
  • RAM 64 MB
  • hard drive 4.3 GB
  • operating system Microsoft Windows NT Workstation 4.0 with Service Pack 6a
  • TCP/IP protocol.

Testing was done with the computers connected directly with a UTP Category 5 crossover cable. During these tests the cards operated in 100Base-TX Full Duplex mode. In this mode the throughput is somewhat higher because part of the service information (for example, receipt acknowledgements) is transmitted simultaneously with the useful information whose volume is being measured. Under these conditions quite high throughput values could be recorded; for example, the 3Com Fast EtherLink XL 3C905B-TX-NM adapter averaged 79.23 Mbps.

Processor utilization was measured on the server using the Windows NT Performance Monitor utility; the data was written to a log file. The Perform3 utility was run on the client so as not to affect the server's processor load. An Intel Celeron, whose performance is significantly lower than that of the Pentium II and III processors, was used as the server's processor deliberately: since the processor load is determined with a fairly large absolute error, larger absolute values give a smaller relative error.

After each test, the Perform3 utility puts the results of its work into a text file as a dataset of the following form:

65535 bytes. 10491.49 KBps. 10491.49 Aggregate KBps.
57343 bytes. 10844.03 KBps. 10844.03 Aggregate KBps.
49151 bytes. 10737.95 KBps. 10737.95 Aggregate KBps.
40959 bytes. 10603.04 KBps. 10603.04 Aggregate KBps.
32767 bytes. 10497.73 KBps. 10497.73 Aggregate KBps.
24575 bytes. 10220.29 KBps. 10220.29 Aggregate KBps.
16383 bytes. 9573.00 KBps. 9573.00 Aggregate KBps.
8191 bytes. 8195.50 KBps. 8195.50 Aggregate KBps.
10844.03 Maximum KBps. 10145.38 Average KBps.

The file size is shown along with the corresponding throughput for the selected client and for all clients (in this case there is only one client), as well as the maximum and average throughput over the entire test. The average values obtained for each test were converted from KB/s to Mbit/s using the formula
Mbit/s = (KB/s x 8) / 1024,
and the P/E index was calculated as the ratio of throughput to processor utilization in percent. Subsequently, the average value of the P/E index was calculated based on the results of three measurements.
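
As an illustration, here is a minimal sketch in Python of the conversion and P/E calculation described above; the function names are ours, and the numeric inputs are example values, not measurements:

    # Convert a Perform3 average from KB/s to Mbit/s and compute the P/E index.
    def kbps_to_mbps(kilobytes_per_sec):
        # Mbit/s = (KB/s x 8) / 1024
        return kilobytes_per_sec * 8 / 1024

    def pe_index(throughput_mbps, cpu_utilization_percent):
        # P/E index = throughput in Mbps divided by CPU utilization in percent
        return throughput_mbps / cpu_utilization_percent

    average_kbps = 10145.38      # the "Average KBps" line of a Perform3 report
    cpu_load = 30.0              # hypothetical CPU utilization, percent
    mbps = kbps_to_mbps(average_kbps)
    print(round(mbps, 2), round(pe_index(mbps, cpu_load), 2))   # 79.26 and 2.64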

When using the Perform3 utility on Windows NT Workstation, the following problem arose: in addition to being written to the network drive, the file was also written to the local file cache, from which it was subsequently read very quickly. The results were impressive but unrealistic, since no data was actually transmitted over the network. In order for applications to treat shared network drives as ordinary local drives, the operating system uses a special network component, the redirector, which redirects I/O requests over the network. Under normal operating conditions, when a file is written to a shared network drive, the redirector uses the Windows NT caching algorithm; that is why a write to the server is also written to the local file cache of the client machine. For the testing, however, caching must be carried out only on the server. To prevent caching on the client computer, settings in the Windows NT registry were changed to disable caching by the redirector. Here is how it was done:

  1. Path to Registry:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Rdr\Parameters

    Parameter name:

    UseWriteBehind enables write-behind optimization for files being written

    Type: REG_DWORD

    Value: 0 (default: 1)

  2. Path to Registry:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Lanmanworkstation\parameters

    Parameter name:

    UtilizeNTCaching specifies whether the redirector will use the Windows NT cache manager to cache file contents.

    Type: REG_DWORD

    Value: 0 (default: 1)
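
The same two changes can be scripted; below is a minimal sketch using Python's standard winreg module (Windows only, administrator rights required; the helper name is ours). It illustrates the procedure described above rather than a recommended tool, and the registry should be backed up before experimenting.

    # Apply the two redirector settings described above (Windows NT family only).
    import winreg

    def set_dword(path, name, value):
        # Open the key under HKEY_LOCAL_MACHINE and write a REG_DWORD value.
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

    set_dword(r"SYSTEM\CurrentControlSet\Services\Rdr\Parameters",
              "UseWriteBehind", 0)
    set_dword(r"SYSTEM\CurrentControlSet\Services\Lanmanworkstation\parameters",
              "UtilizeNTCaching", 0)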

Intel EtherExpress PRO/100+ Management Network Adapter

This card's throughput and CPU usage were almost the same as those of the 3Com card. The window for setting this card's parameters is shown below.

The new Intel 82559 controller installed on this card provides very high performance, especially on Fast Ethernet networks.

The technology that Intel uses in its Intel EtherExpress PRO/100+ card is called Adaptive Technology. The essence of the method is to change the time intervals between Ethernet packets automatically depending on the network load. As network traffic increases, the spacing between individual Ethernet packets is dynamically increased, which reduces collisions and increases throughput. Under a light network load, when the probability of collisions is small, the time intervals between packets are reduced, which also increases performance. The benefits of this method should be most pronounced in large, collision-prone Ethernet segments, that is, where hubs rather than switches dominate the network topology.
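
To make the idea more concrete, here is a toy model of a load-adaptive inter-packet gap; this is a sketch of our own for illustration only, and Intel's actual Adaptive Technology logic is proprietary and certainly differs.

    # Toy model: adjust the inter-packet gap (IPG, in bit times) from a measured collision rate.
    MIN_IPG = 96          # the standard Ethernet minimum gap, in bit times
    MAX_IPG = 4 * 96      # an arbitrary upper bound for this sketch

    def next_ipg(current_ipg, collision_rate):
        # Widen the gap when collisions are frequent, shrink it when the medium is quiet.
        if collision_rate > 0.05:
            return min(current_ipg * 1.5, MAX_IPG)
        return max(current_ipg * 0.9, MIN_IPG)

    ipg = MIN_IPG
    for rate in (0.0, 0.02, 0.10, 0.20, 0.01):   # hypothetical per-interval collision rates
        ipg = next_ipg(ipg, rate)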

Another Intel technology, called Priority Packet, allows the traffic passing through the network card to be regulated according to the priorities of individual packets. This makes it possible to increase the data transfer rate for mission-critical applications.

Support for VLANs (IEEE 802.1Q standard) is provided.

There are only two indicators on the board: activity/link and 100 Mbps speed.

www.intel.com

SMC EtherPower II 10/100 Network Adapter SMC9432TX/MP

The architecture of this card uses two promising technologies: SMC SimulTasking and Programmable InterPacket Gap. The first is similar to 3Com's Parallel Tasking technology. Comparing the test results for the cards of these two manufacturers allows conclusions to be drawn about how efficiently these technologies are implemented. We also note that this network card showed the third-best result both in terms of performance and P/E index, ahead of all cards except those from 3Com and Intel.

There are four LED indicators on the card: speed 100, transmission, link, duplex.

Company's main website: www.smc.com

Ethernet, for all its success, was never elegant. Network cards have only a rudimentary notion of intelligence: they send the packet first and only then check whether anyone else was transmitting data at the same time. Someone has compared Ethernet to a society in which people can communicate with each other only when everyone shouts at once.

Like its predecessor, Fast Ethernet uses the CSMA/CD method (Carrier Sense Multiple Access with Collision Detection). Behind this long and obscure abbreviation hides a very simple technology. When an Ethernet board needs to send a message, it first waits for silence, then sends the packet while simultaneously listening to check whether someone else sent a message at the same time. If that happened, neither packet reaches its destination. If there was no collision and the board needs to continue transmitting data, it still waits a few microseconds before trying to send the next portion; this is done so that other boards can also work and no one can monopolize the channel. In the event of a collision, both devices fall silent for a short, randomly generated span of time and then make a new attempt to transfer the data.
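
The procedure described in words above can be written down compactly. The following Python fragment is a simplified model of one station's transmit attempt, assuming an abstract medium object with is_busy(), transmit(), collision_detected() and wait() methods; the backoff rule shown is the usual truncated binary exponential backoff.

    import random

    SLOT_TIME_US = 5.12   # Fast Ethernet slot time in microseconds (51.2 us at 10 Mbps)

    def backoff_delay(attempt):
        # After a collision, wait a random number of slot times (truncated exponential backoff).
        k = min(attempt, 10)
        return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

    def send_frame(medium, frame, max_attempts=16):
        for attempt in range(1, max_attempts + 1):
            while medium.is_busy():          # carrier sense: wait for silence
                pass
            medium.transmit(frame)           # send while listening for a collision
            if not medium.collision_detected():
                return True                  # the frame went out without a collision
            medium.wait(backoff_delay(attempt))   # fall silent for a random interval, then retry
        return False                         # give up after too many collisions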

Because of collisions, neither Ethernet nor Fast Ethernet can ever reach its maximum performance of 10 or 100 Mbps. As soon as network traffic starts to grow, the time gaps between individual packets shrink and the number of collisions increases. Real Ethernet performance cannot exceed 70% of its potential throughput, and it may be even lower if the line is seriously overloaded.

Ethernet uses a packet size of 1516 bytes, which was perfectly adequate when it was created. Today this is considered a disadvantage when Ethernet is used to connect servers, since servers and communication lines tend to exchange a large number of small packets, which overloads the network. In addition, Fast Ethernet imposes a limit on the distance between connected devices - no more than 100 meters - and this forces extra care in designing such networks.

At first, Ethernet was designed on the basis of a bus topology, in which all devices are connected to a common cable, thin or thick. The use of twisted pair changed the protocol only partially. With a coaxial cable, a collision was detected at once by all stations. With twisted pair, a "jam" signal is used: as soon as a station detects a collision, it sends the signal to the hub, and the hub in turn sends the "jam" to all devices connected to it.

In order to reduce congestion, Ethernet networks are broken down into segments connected by bridges and routers. This allows only the necessary traffic to be transferred between segments. A message sent between two stations in the same segment is not transferred to another segment and cannot cause overload there.

Today, switched Ethernet is used to build central backbones that pool the servers. Ethernet switches can be regarded as high-speed multiport bridges that are able to determine on their own which of their ports a packet is addressed to. The switch looks at packet headers and compiles a table defining where a subscriber with a given physical address is located. This makes it possible to limit the propagation of a packet and reduce the chance of overflow by sending it only to the right port. Only broadcast packets are sent to all ports.
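
The forwarding logic just described fits in a few lines. This is a generic sketch of address learning and forwarding (our own simplification, not the firmware of any particular switch):

    # Minimal sketch of MAC address learning and forwarding in an Ethernet switch.
    mac_table = {}   # physical (MAC) address -> port number

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        mac_table[src_mac] = in_port               # learn which port the sender is on
        if dst_mac == "ff:ff:ff:ff:ff:ff":         # broadcasts go to every other port
            return [p for p in all_ports if p != in_port]
        if dst_mac in mac_table:                   # known destination: forward to one port only
            return [mac_table[dst_mac]]
        return [p for p in all_ports if p != in_port]   # unknown destination: flood

    # Example: once A (port 1) has talked to B (port 2), traffic back to A is no longer flooded.
    handle_frame("aa", "bb", 1, [1, 2, 3])   # returns [2, 3]
    handle_frame("bb", "aa", 2, [1, 2, 3])   # returns [1]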

100BaseT - the big brother of 10BaseT

The idea of Fast Ethernet technology was born in 1992. In August of the following year a group of manufacturers merged into the Fast Ethernet Alliance (FEA). The goal of the FEA was to obtain formal approval of Fast Ethernet from committee 802.3 of the Institute of Electrical and Electronics Engineers (IEEE), since it is this committee that deals with Ethernet standards. Luck accompanied the new technology and its supporting alliance: in June 1995 all formal procedures were completed, and the Fast Ethernet technology was named 802.3u.

Thanks to the IEEE, Fast Ethernet is called 100BaseT. The explanation is simple: 100BaseT is an extension of the 10BaseT standard with the bandwidth raised from 10 Mbps to 100 Mbps. The 100BaseT standard includes the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol for handling multiple access, which is also used in 10BaseT. In addition, Fast Ethernet can operate over cables of several types, including twisted pair. Both of these properties of the new standard are very important for potential customers, and it is thanks to them that 100BaseT turns out to be a successful way to migrate networks based on 10BaseT.

The chief business case for 100BaseT is that Fast Ethernet is based on legacy technology. Since Fast Ethernet uses the same message transfer protocol as older versions of Ethernet, and the cable systems of these standards are compatible, moving from 10BaseT to 100BaseT requires a smaller capital investment than installing other types of high-speed networks. Moreover, since 100BaseT is a continuation of the old Ethernet standard, all the tools and procedures for analyzing network performance, as well as all the software running on old Ethernet networks, should keep working under this standard. Hence the 100BaseT environment will be familiar to network administrators with Ethernet experience, which means that staff training will take less time and cost significantly less.

PRESERVING THE PROTOCOL

Perhaps the greatest practical benefit of the new technology came from the decision to leave the message passing protocol unchanged. The message passing protocol, in our case CSMA/CD, defines the way in which data is transmitted over the network from one node to another through the cable system. In the ISO/OSI model, the CSMA/CD protocol is part of the media access control (MAC) layer. This layer defines the format in which information is transmitted over the network and the way in which a network device obtains access to the network (or control of the network) for data transmission.

The name CSMA/CD can be divided into two parts: Carrier Sense Multiple Access and Collision Detection. The first part describes how a node with a network adapter determines the moment when it should send a message. In accordance with the CSMA protocol, the node first "listens" to the network to determine whether any other message is being transmitted at the moment. If a carrier tone is heard, it means the network is currently busy with another message; the node switches to waiting mode and stays in it until the network is released. When silence falls on the network, the node starts transmitting. In fact, the data is sent to all nodes of the network or segment, but is accepted only by the node to which it is addressed.

Collision Detection, the second part of the name, serves to resolve situations in which two or more nodes try to send messages at the same time. According to the CSMA protocol, each node ready for transmission must first listen to the network to determine whether it is free. But if two nodes are listening at the same time, both will decide that the network is free and will start sending their packets simultaneously. In this situation the transmitted data are superimposed on each other (network engineers call this a collision), and neither of the messages reaches its destination. Collision Detection requires that the node keep listening to the network after transmitting a packet as well. If a collision is detected, the node repeats the transmission after a randomly chosen period of time and then checks again whether a collision has occurred.

THREE TYPES OF FAST ETHERNET

Along with the preservation of the CSMA/CD protocol, another important decision was to design 100BaseT in such a way that it could use cables of different types - both those used in older versions of Ethernet and newer ones. The standard defines three modifications to support different types of Fast Ethernet cable: 100BaseTX, 100BaseT4 and 100BaseFX. The 100BaseTX and 100BaseT4 modifications are designed for twisted pair, while 100BaseFX was designed for optical cable.

The 100BaseTX standard requires the use of two pairs of UTP or STP wires. One pair is used for transmission, the other for reception. Two main cabling standards meet these requirements: EIA/TIA-568 Category 5 UTP and IBM's Type 1 STP. What makes 100BaseTX attractive is the provision of full duplex mode when working with network servers, as well as the use of only two of the four pairs of an eight-core cable - the other two pairs remain free and can later be used to expand the capabilities of the network.

However, if you are going to work with 100BaseTX over Category 5 wiring, you should be aware of its shortcomings. This cable is more expensive than other eight-core cables (for example, Category 3). In addition, working with it requires punchdown blocks, connectors and patch panels that meet the requirements of Category 5. It should be added that to support full duplex mode, full duplex switches must be installed.

The 100BaseT4 standard has more lenient cable requirements. The reason is that 100BaseT4 uses all four pairs of an eight-core cable: one for transmission, one for reception, and the remaining two for both transmission and reception. Thus, in 100BaseT4 both reception and transmission of data can be carried out over three pairs. By spreading 100 Mbps over three pairs, 100BaseT4 reduces the signal frequency, so a less high-quality cable is sufficient for its transmission. 100BaseT4 networks can be implemented with Category 3 and Category 5 UTP cables, as well as Category 5 UTP and Type 1 STP.

The advantage of 100BaseT4 is its less rigid wiring requirements. Category 3 and 4 cables are more common and, moreover, significantly cheaper than Category 5 cables, which is worth keeping in mind before installation work begins. The disadvantages are that 100BaseT4 needs all four pairs and that full duplex mode is not supported by this protocol.

Fast Ethernet also includes a standard for operation over multimode optical fiber with a 62.5-micron core and a 125-micron cladding. The 100BaseFX standard is oriented mainly toward backbones - toward connecting Fast Ethernet repeaters within one building. The traditional advantages of optical cable are inherent in the 100BaseFX standard as well: immunity to electromagnetic noise, improved data protection and greater distances between network devices.

A SHORT-DISTANCE RUNNER

Although Fast Ethernet is a continuation of the Ethernet standard, the transition from a 10BaseT network to a 100BaseT network cannot be regarded as a mechanical replacement of equipment - it may require changes to the network topology.

The theoretical limit on the diameter of a Fast Ethernet segment is 250 meters; this is only 10 percent of the theoretical size limit of an Ethernet network (2500 meters). This restriction stems from the nature of the CSMA/CD protocol and the 100 Mbps transmission rate.

As already noted, a workstation transmitting data must listen to the network for a period of time to make sure that the data has reached the destination station. In an Ethernet network with a bandwidth of 10 Mbps (for example, 10Base5), the span of time a workstation needs to listen to the network for a collision is determined by the distance that a 512-bit frame (the frame size defined in the Ethernet standard) travels while that frame is being processed by the workstation. For an Ethernet network with a bandwidth of 10 Mbps this distance is 2500 meters.

On the other hand, the same 512-bit frame (the 802.3u standard specifies a frame of the same size as 802.3, that is, 512 bits) transmitted by a workstation in a Fast Ethernet network travels only 250 m before the workstation finishes processing it. If the receiving station were more than 250 m away from the transmitting station, the frame could collide with another frame somewhere further down the line, and the transmitting station, having completed the transmission, would no longer register this collision. That is why the maximum diameter of a 100BaseT network is 250 meters.
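
A back-of-the-envelope version of this argument can be sketched in a few lines, assuming a signal propagation speed of about 2x10^8 m/s in cable and ignoring repeater and transceiver delays (which is why the figures come out larger than the 2500 m and 250 m limits quoted in the text):

    # Rough estimate of collision-domain size from the 512-bit slot time.
    PROPAGATION_M_PER_S = 2e8    # assumed signal speed in copper cable
    SLOT_BITS = 512              # minimum frame size in bits

    def cable_only_diameter(bit_rate_bps):
        slot_time_s = SLOT_BITS / bit_rate_bps   # time to transmit the minimum frame
        one_way_s = slot_time_s / 2              # a collision must be heard before the slot ends
        return PROPAGATION_M_PER_S * one_way_s   # maximum one-way cable length, in meters

    cable_only_diameter(10e6)    # about 5120 m for 10 Mbps Ethernet
    cable_only_diameter(100e6)   # about 512 m for Fast Ethernet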

To make use of the entire allowed distance, two repeaters are needed to connect all the nodes. According to the standard, the maximum distance between a node and a repeater is 100 meters; in Fast Ethernet, as in 10BaseT, the distance between a hub and a workstation must not exceed 100 meters. Since connecting devices (repeaters) introduce additional delays, the real working distance between nodes may be even smaller. It therefore seems reasonable to take all distances with some margin.

To work over longer distances, an optical cable has to be purchased. For example, 100BaseFX equipment in half duplex mode allows a switch to be connected to another switch or to an end station located up to 450 meters away. With full duplex 100BaseFX installed, two network devices can be connected at a distance of up to two kilometers.

HOW TO INSTALL 100BASET

In addition to the cables we have already discussed, installing Fast Ethernet will require network adapters for workstations and servers, 100BaseT hubs and, possibly, some 100BaseT switches.

The adapters needed to organize a 100BaseT network are called 10/100 Mbps Ethernet adapters. These adapters are able (this is a requirement of the 100BaseT standard) to distinguish 10 Mbps from 100 Mbps on their own. To serve a group of servers and workstations transferred to 100BaseT, a 100BaseT hub is also required.

When a server or personal computer with a 10/100 adapter is turned on, the adapter issues a signal announcing that it can provide a bandwidth of 100 Mbps. If the receiving station (most likely a hub) is also designed to work with 100BaseT, it issues a signal in response, upon which both the hub and the PC or server automatically switch to 100BaseT mode. If the hub works only with 10BaseT, it does not return a signal, and the PC or server automatically switches to 10BaseT mode.
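
The outcome of this exchange can be modeled very simply. The sketch below is our own simplification, not the real auto-negotiation protocol with its link pulses; it just picks the best speed both ends advertise and falls back to 10BaseT when the hub stays silent.

    # Simplified model of 10/100 speed selection between an adapter and a hub.
    def select_speed(adapter_speeds, hub_speeds):
        # hub_speeds is None when the hub does not answer (a plain 10BaseT hub).
        if not hub_speeds:
            return 10
        common = adapter_speeds & hub_speeds
        return max(common) if common else 10

    select_speed({10, 100}, {10, 100})   # both ends support 100BaseT -> 100
    select_speed({10, 100}, None)        # the hub does not respond -> 10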

In small-scale 100BaseT configurations, a 10/100 bridge or switch can be used to provide communication between the part of the network running 100BaseT and the existing 10BaseT network.

DECEPTIVE SPEED

Summing up all of the above, we note that, in our view, Fast Ethernet is best suited for relieving high peak loads. For example, if one of the users works with CAD or image-processing programs and needs greater throughput, Fast Ethernet may be a good way out. However, if the problems are caused by an excessive number of users on the network, then 100BaseT begins to slow down the exchange of information at roughly 50 percent network load - in other words, at the same level as 10BaseT. In the end, it is nothing more than an extension of the original standard.
