Comprehensive analysis of server technology

What is a server

A server is a high-performance computer that serves as a network node, storing and processing roughly 80% of the data and information on the network, which is why it is often called the soul of the network. To use an analogy: a server is like a post office switchboard, while fixed or mobile network terminals such as desktop PCs, notebooks, PDAs, and mobile phones are like the telephones scattered in homes, offices, and public places. In daily life and work, a call must pass through the switchboard to reach the target phone; in the same way, network terminals such as the PCs in homes and enterprises must go through a server to get online, obtain information, communicate with the outside world, or access entertainment. In that sense, the server "organizes" and "leads" these devices.

A server's hardware composition is basically similar to that of a microcomputer, including processors, hard disks, memory, and system buses, but it is engineered for specific network applications. As a result, servers differ greatly from microcomputers in processing capability, stability, reliability, security, scalability, and manageability. As information technology advances, the role of the network becomes ever more prominent, and the demands on an information system's data processing capability and security keep rising. If key business data is stolen by hackers during an e-commerce transaction, or an ATM cannot be accessed normally, the first thing to examine is the server commanding these systems, rather than blaming the staff or other external factors.

Server technology: EMP technology

Currently, the main technical hotspots for servers are: RISC and CISC technology, processor technology, multi-processor technology (AMP, SMP, MPP, COMA, cluster, and NUMA technology), SCSI interface technology, intelligent I/O technology, fault tolerance technology, disk array technology, hot-plug technology, and dual-machine hot backup.

Servers undertake the task of transmitting and processing large amounts of data on the network and must offer high scalability, reliability, availability, and manageability. The IA-64 platform will drive improvements in server technologies such as high-performance CPUs, multi-processor technology, bus and memory technology, fault tolerance, clustering, hardware management interfaces, and balanced server platform technology.

EMP (Emergency Management Port) technology

EMP (Emergency Management Port) technology is a remote management technology. Using EMP, a client can connect directly to the server over a telephone line or cable and operate on it remotely: shutting down the operating system, powering the server on or off, capturing the server screen, configuring the server BIOS, and so on. It is an effective way to provide fast service and cut maintenance costs. Combining the ISC and EMP technologies makes remote monitoring and management of the server possible.

Server technology: RAID redundant disk array technology

RAID (Redundant Array of Independent Disks) Redundant Disk Array Technology

RAID technology is an industry standard, but different manufacturers define RAID levels differently. Currently, four RAID levels are widely recognized in the industry: RAID 0, RAID 1, RAID 0+1, and RAID 5.

RAID 0 is storage space striping without data redundancy. It has low cost, extremely high read and write performance, and high storage space utilization, and is suitable for applications with extremely strict speed requirements such as video/audio signal storage and temporary file staging. However, because there is no data redundancy, its safety is greatly reduced: damage to any hard disk in the array causes catastrophic data loss, so for general applications it is unwise to build a RAID 0 set from more than 4 hard disks.

RAID 1 is a complete mirror of the data on two hard disks. It offers good security, simple technology, convenient management, and good read and write performance. But it cannot be expanded beyond the capacity of a single hard disk, and half of the data space is wasted; strictly speaking, it should not even be called an "array".

RAID 0+1 combines the characteristics of RAID 0 and RAID 1: independent disks are configured as RAID 0, and two complete RAID 0 sets mirror each other. It has excellent read and write performance and high security, but the array is expensive to build and data space utilization is low, so it cannot be called a cost-effective solution.
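
To make the striping and mirroring ideas above concrete, here is a minimal Python sketch in which in-memory byte buffers stand in for hypothetical disks. Real RAID is implemented in controllers or the operating system's block layer, so this only illustrates the data layout, not an actual driver.

```python
STRIPE_SIZE = 4  # bytes per stripe unit (illustrative only)

def raid0_write(data: bytes, disks: list[bytearray]) -> None:
    """RAID 0: split data into stripe units and spread them across all disks.
    No redundancy -- losing any one disk destroys the whole data set."""
    for i in range(0, len(data), STRIPE_SIZE):
        chunk = data[i:i + STRIPE_SIZE]
        disks[(i // STRIPE_SIZE) % len(disks)].extend(chunk)

def raid1_write(data: bytes, primary: bytearray, mirror: bytearray) -> None:
    """RAID 1: write identical copies to both disks.
    Usable capacity equals one disk; either copy alone can serve reads."""
    primary.extend(data)
    mirror.extend(data)

if __name__ == "__main__":
    stripe_disks = [bytearray(), bytearray(), bytearray()]
    raid0_write(b"ABCDEFGHIJKL", stripe_disks)
    print([bytes(d) for d in stripe_disks])   # data spread over 3 disks

    d1, d2 = bytearray(), bytearray()
    raid1_write(b"ABCDEFGHIJKL", d1, d2)
    print(d1 == d2)                           # True: full mirror
```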

Overview of load balancing technology

At present, whether in enterprise networks, campus networks, or wide-area networks such as the Internet, traffic growth has exceeded the most optimistic past estimates. The Internet access boom keeps surging and new applications emerge one after another, so even a network built to the optimal configuration of its day soon feels overwhelmed. In the core parts of a network in particular, data traffic and computing intensity are so high that a single device cannot handle them. How to distribute traffic sensibly among multiple network devices performing the same function, so that no device is overloaded while others sit under-utilized, became a real problem, and the load balancing mechanism emerged to solve it.

Load balancing is built on top of the existing network structure and provides an inexpensive, effective way to expand server bandwidth, increase throughput, strengthen data processing capability, and improve network flexibility and availability. Its main tasks are: relieving network congestion and serving users from nearby nodes regardless of geography; providing users with better access quality; improving server response speed; improving the utilization of servers and other resources; and avoiding single points of failure in critical parts of the network.

Load balancing can be applied at different levels of the network, depending on where the specific bottlenecks are. It is basically approached from three angles: transmission link aggregation, higher-layer network switching, and server cluster strategies.

■ Transmission link aggregation

To support ever more high-bandwidth applications, more and more PCs connect to the network over faster links. Traffic is unevenly distributed across the network: high at the core and low at the edge, high in key departments and low in ordinary ones. As computer processing power has grown substantially, people demand more from multi-workgroup LANs. When demand for high-bandwidth applications inside the enterprise keeps increasing (such as Web access, document transfer, and internal network connections), the data interfaces at the core of the LAN become bottlenecks that lengthen the response time of client requests. Moreover, a LAN is decentralized by nature and the network itself offers the server no protection: one careless action (such as kicking out the network cable plug) can disconnect the server from the network.

The usual countermeasure for such bottlenecks is to increase the capacity of the server link beyond current requirements, for example by upgrading from Fast Ethernet to Gigabit Ethernet. For large enterprises, upgrading is a long-term, forward-looking solution. For many companies, however, when demand is not yet large enough to justify the money and time an upgrade requires, upgrading is a waste of money. In that case, link aggregation technology provides a cost-effective way to eliminate the bottleneck and the single point of insecurity on the transmission link.

Link aggregation integrates the transmission capacity of multiple lines into a single logical connection. When the existing line can no longer meet demand and upgrading a single line is too expensive or impractical, a multi-line solution is needed. Currently there are four link aggregation technologies that can "bundle" multiple lines. Synchronous IMUX systems work at the bit level of T1/E1, using multiple synchronized DS1 channels to transmit data and achieve load balancing. IMA is another inverse multiplexing technology; it works at the cell level and runs on platforms with ATM routers. Using a router to drive multiple lines is another popular link aggregation approach: the router can allocate packets to the parallel links based on the cache entry for a known destination address, or distribute packets across the lines in round-robin fashion. Multilink PPP, also known as MP or MLP, is a router load balancing technology for links encapsulated with PPP. MP breaks large PPP packets into small segments and distributes them across multiple parallel lines, and can also dynamically add or drop dial-up lines according to current link utilization. It works well on low-speed lines, although it is slow, because both packet segmentation and the extra buffering increase latency.
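
As a small illustration of the round-robin ("cyclic") allocation mentioned above, the following Python sketch hands packets to parallel lines in turn. The line names and packet labels are made up, and a real router would also weigh links by speed and per-destination cache state.

```python
from itertools import cycle

links = ["line-1", "line-2", "line-3"]        # three bundled lines (assumed)
scheduler = cycle(links)

packets = [f"packet-{n}" for n in range(7)]
for pkt in packets:
    link = next(scheduler)
    print(f"{pkt} -> {link}")                 # traffic spread evenly across the lines
```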

Link aggregation systems add complexity to the network, but they also improve its reliability, allowing redundant routes to be used on critical LAN segments such as the server segment. For IP systems, VRRP (Virtual Router Redundancy Protocol) can be considered: VRRP creates a virtual default gateway address, and when the main router is unreachable, the backup router takes over that address so LAN communication can continue. In short, when the performance of the main line must be improved and upgrading a single line is not feasible, link aggregation technology can be used.

■ Higher-layer switching

Large networks are generally built from many specialized devices, such as firewalls, routers, layer 2/3 switches, load balancing devices, cache servers, and Web servers. How to combine these devices organically is a key issue that directly affects network performance. Many switches now provide layer 4 switching: they map one external IP address to multiple internal IP addresses and dynamically pick one of the internal addresses for each TCP connection request, thereby achieving load balancing. Some protocols also support load-balancing-related functions themselves, such as the redirection capability in HTTP.
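
The per-connection selection that layer 4 switching performs can be sketched in user space as a tiny TCP forwarder: each new client connection is handed to the next internal server address in rotation. The addresses and ports below are hypothetical, and this is only an illustration of the idea, not how a hardware switch implements it (a real device also performs health checking and address translation in silicon).

```python
import itertools
import socket
import threading

# Hypothetical internal server addresses behind one external address.
BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080), ("10.0.0.13", 8080)]
rotation = itertools.cycle(BACKENDS)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def serve(listen_addr):
    """Accept TCP connections on the external address and hand each new
    connection to the next internal server in rotation."""
    with socket.create_server(listen_addr) as listener:
        while True:
            client, _ = listener.accept()
            backend = socket.create_connection(next(rotation))
            threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    serve(("0.0.0.0", 80))   # the single external address that clients see
```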

Web content switching, also called URL switching or layer 7 switching, provides a higher-level way to control access traffic. It inspects the HTTP headers, makes load balancing decisions based on the information in them, and can decide, for example, how to serve personal homepages and image data. Because it is not driven only by the TCP port number, it does not cause access traffic to pile up. If the Web servers have been optimized for special functions such as image service, SSL sessions, or database transactions, this level of traffic control improves network performance. Products and solutions that adopt layer 7 switching include the iSwitch switch and Cisco's CDN (Content Switching Network System).
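
A minimal sketch of the layer 7 decision described above: look inside the HTTP request line, then choose a server pool by content type. The pools, paths, and the "personal homepage" rule are assumptions for illustration; real content switches parse many more header fields.

```python
# Hypothetical server pools; a real content switch would use live health data.
HOMEPAGE_POOL = ["home-server-1"]                 # personal homepages
IMAGE_POOL = ["img-server-1", "img-server-2"]     # servers optimized for images
DEFAULT_POOL = ["web-server-1", "web-server-2"]

def choose_pool(request_line: str) -> list[str]:
    """Pick a pool from the HTTP request line (e.g. 'GET /index.html HTTP/1.1')."""
    _method, path, _version = request_line.split(" ", 2)
    if path.endswith((".png", ".jpg", ".gif")):
        return IMAGE_POOL
    if path.startswith("/~"):                     # conventional personal-homepage URLs
        return HOMEPAGE_POOL
    return DEFAULT_POOL

print(choose_pool("GET /images/logo.png HTTP/1.1"))    # image pool
print(choose_pool("GET /~alice/index.html HTTP/1.1"))  # homepage pool
print(choose_pool("GET /index.html HTTP/1.1"))         # default pool
```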

Web content switching can be used, for example, when internal employees and external clients use the website at the same time: the company can direct internal employees' requests to a slower server so that more resources are left for external clients. Web hosting access control devices can also use this technology to reduce hardware costs, because it makes it easy to direct user traffic destined for multiple hosts to the same Web server. If user visits grow beyond a certain level, that traffic can then be shifted to a dedicated Web server device; although such a device is expensive, the structural framework of the network does not have to change, because the same Web content switching technology still controls the traffic.

However, load balancing devices that use Web content switching support only a limited number of standards and rules, and those rules offer limited flexibility. The depth of the HTTP header that the device can inspect also limits its content switching capability: if the information you need is in a field the device cannot inspect, content switching cannot help. Content switching is further limited by the number of TCP connections that can be open simultaneously and by the rate at which TCP connections are established and torn down. In addition, Web content switching consumes a large amount of system resources (including memory and processor time). Tests show that manipulating Web content throughput is laborious and sometimes yields only minor performance improvements, so network administrators must weigh the investment against the return carefully.

■ Server clusters with balancing policies

Today, a server must be able to handle a huge number of concurrent requests, and its processing and I/O capabilities have become the bottleneck of the service. If growing traffic exceeds what the server can bear, the result is inevitably downtime. Clearly, the limited performance of a single server cannot solve this problem: an ordinary server can process only tens of thousands to hundreds of thousands of requests per second and cannot handle millions of requests in one second. But if 10 such servers are combined into one system and all requests are distributed evenly among them by software, the system can process millions of requests per second or more. This is the original design idea behind load balancing with server clusters.
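
The capacity arithmetic behind that idea is simple; the per-server figure below is an assumed value from the range quoted above.

```python
# Assumed per-server capacity, taken from the range above (tens of thousands
# to hundreds of thousands of requests per second).
per_server_rps = 100_000
servers = 10
print(f"cluster capacity ~ {per_server_rps * servers:,} requests/s")  # ~ 1,000,000
```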

Early server clusters usually ran in master/slave mode, backed up through fiber optic mirror cards. What troubled service operators was that critical servers, or servers with many applications and heavy data traffic, are usually not low-end machines, yet paying for two servers often bought only one server's worth of performance. The newer solution translates the different IP addresses of multiple server network cards into one VIP (Virtual IP) address through LSANT (Load Sharing Network Address Transfer), so that every server is working all the time. Work that once required midrange machines is handled by several PC servers instead. This elastic solution protects the investment very visibly: it avoids both the heavy equipment cost of a rigid midrange upgrade and the repeated cost of retraining staff, and the operator can adjust the number of servers at any time according to business needs.
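
The VIP idea can be sketched as a small translation table: clients always see one virtual address, every real server stays active, and servers can be added or removed at any time. The addresses and the least-connections selection rule below are assumptions for illustration; the article does not specify how LSANT chooses among the real servers.

```python
VIP = "203.0.113.10"                                        # hypothetical virtual address
real_servers = {"192.168.0.11": 0, "192.168.0.12": 0, "192.168.0.13": 0}  # ip -> active connections

def translate(vip: str) -> str:
    """Map the VIP to the real server with the fewest active connections."""
    assert vip == VIP
    target = min(real_servers, key=real_servers.get)
    real_servers[target] += 1
    return target

def add_server(ip: str) -> None:
    """Scale out: a newly added server starts taking traffic immediately."""
    real_servers.setdefault(ip, 0)

for _ in range(5):
    print(f"{VIP} -> {translate(VIP)}")
add_server("192.168.0.14")
print(f"{VIP} -> {translate(VIP)}")   # new server picked first (zero connections)
```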

Network load balancing improves the availability and scalability of Internet server programs on servers such as Web servers, FTP servers, and other mission-critical servers. A single computer can provide only a limited level of reliability and scalability, but by joining two or more servers into a cluster, network load balancing can deliver the reliability and performance that mission-critical servers require.

To build a high-load Web site, a distributed structure with multiple servers must be used. The approaches mentioned above, in which a proxy server works with a Web server or two Web servers cooperate with each other, are also multi-server structures, but in them each server plays a different role, forming an asymmetric architecture. In an asymmetric structure each server does a different job, for example one serves static pages while another serves dynamic pages. This forces page designers to take the relationship between servers into account; once that relationship changes, some pages will produce broken links, which makes maintenance difficult and scalability poor.

The network structure that supports load balancing is symmetric: every server has equal status and can serve the outside world on its own, without help from the others. Some mechanism then distributes externally arriving requests evenly across the servers in the symmetric structure, and the server that receives a connection request responds to the client independently. Because building a set of completely identical Web servers is not difficult, load balancing becomes the key technology for building a high-load Web site.

In short, load balancing is a strategy that lets multiple servers or links jointly undertake heavy computing or I/O tasks, eliminating network bottlenecks at low cost and improving the network's flexibility and reliability.

High-end server technology

Server performance is described by system response speed and job throughput. Response speed is the time from a user's input to the server completing the task; job throughput is the amount of work the whole server completes per unit of time. Assuming a user submits requests continuously, then when system resources are plentiful the throughput of a single user is inversely proportional to the response time: the shorter the response time, the higher the throughput. To shorten the response time for a particular user or service, more resources can be allocated to it. Performance tuning means adjusting the resources allocated to each user and service program according to the application requirements and the server's operating environment and state, making full use of the system while using as few resources as possible to satisfy each user, so that more users can be served.
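
A quick numeric illustration of that inverse relationship, using assumed response times for a single user issuing requests back to back:

```python
# With one user and ample resources, throughput is the reciprocal of response time.
for response_time_s in (2.0, 1.0, 0.5):
    throughput = 1.0 / response_time_s          # requests completed per second
    print(f"response time {response_time_s} s -> throughput {throughput:.1f} req/s")
```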

Technical Objectives

The high scalability, high availability, ease of management, and high reliability demanded of servers are not only technical goals pursued by manufacturers but also requirements set by users.

Scalability shows up in two ways: spare space left in the chassis, and ample I/O bandwidth. As processor speeds and the number of parallel processors increase, the bottleneck of server performance shifts to PCI and its attached devices. High scalability means users can add components whenever needed while keeping the system running to requirements, thereby protecting their investment.

Availability is measured as the proportion of time the equipment operates normally. For example, 99.9% availability means the equipment is out of service for roughly 8.8 hours per year, while 99.999% availability means it is out of service for only about 5 minutes per year. Component redundancy is the basic way to improve availability: redundant configurations are added for the components whose failure hurts the system most (such as power supplies, hard disks, fans, and PCI cards), and easy replacement mechanisms (such as hot swapping) are designed so that even if those components fail, the system keeps running normally.
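
The downtime implied by an availability percentage is easy to check; this short calculation reproduces the figures above (about 8.8 hours per year at 99.9%, about 5 minutes per year at 99.999%):

```python
HOURS_PER_YEAR = 365 * 24

for availability in (0.999, 0.99999):
    downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.3%} availability -> "
          f"{downtime_h:.2f} hours ({downtime_h * 60:.1f} minutes) of downtime per year")
```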

Manageability aims to use specific technologies and products to improve system reliability and reduce the cost of purchasing, using, deploying, and supporting the system. Its most visible effect is to cut maintenance man-hours and avoid the losses caused by downtime, and a server's manageability directly affects how easy it is to use. Manageability accounts for the largest share of TCO: studies show that deployment and support cost far more than the initial purchase, and compensation for management and support staff is the single largest item. The financial losses from reduced productivity, missed business opportunities, and falling revenue cannot be ignored either. System manageability is therefore not just an urgent requirement of the IT department but also critical to the operating efficiency of the enterprise. Manageability products and tools simplify system management by exposing information from inside the system. Remote management over the network lets technical support staff solve problems from their own desks without traveling to the fault site. System components can monitor their own state and issue a warning the moment a fault hazard appears, so maintenance staff can act immediately to protect the company's data assets, and replacing a faulty component is simple and convenient.

Reliability, simply put, means that the server runs stably and downtime is low. The key is cooperation between the operating system and the hardware. If resource handling is controlled by the CPU and the operating system rather than by individual applications, an error in one task will not bring the whole system down, and server downtime is greatly reduced; this is precisely one of the strengths of Unix/Linux systems. Planned interruptions in daily maintenance include host upgrades, hardware maintenance or installation, operating system upgrades, application or file upgrades and maintenance, file reorganization, and full-system backups. Unplanned outages include hard disk damage, system failure, software failure, user error, power failure, man-made damage, and natural disasters.

SMP

SMP (Symmetric Multi-Processor) means symmetric multi-processing. In a symmetric structure every processor in the machine has the same status; they are connected together and share one memory, in which a single operating system resides, and every processor can run that operating system and respond to requests from external devices, so the processors are equal and symmetric. In the domestic market such machines generally have 4 or 8 processors, and a few have 16. Generally speaking, however, SMP machines scale poorly and it is hard to go beyond 100 processors; the usual configuration is 8 to 16, which is enough for most users. The advantage of this kind of machine is that using it is not much different from using a microcomputer or workstation, and the programming changes are small: porting programs written for a microcomputer or workstation to an SMP machine is relatively easy. The availability of the SMP structure is relatively poor, because 4 or 8 processors share one operating system and one memory; once the operating system has a problem, the whole machine is paralyzed. And because the machine scales poorly, it does not protect users' investment well. Still, the technology is mature and much software is available for it, so a large share of the parallel machines on the domestic market are of this type.

Cluster Technology

In layman's terms, clustering is a technology that connects at least two systems so that the servers work like, and appear to be, a single machine. Cluster systems are usually used to improve system stability and the data processing and service capacity of a network center. Various forms of cluster technology have appeared since the early 1980s, and because clusters provide high availability and scalability, they quickly became the backbone of enterprise and ISP computing.

Common cluster technologies

1. Server mirroring technology

Server mirroring technology mirrors the hard disks of two servers on the same LAN through software or special network hardware (such as mirror cards). One server is designated as the primary and the other as the secondary. Clients can read and write only the mirrored volumes on the primary server; that is, only the primary server provides service over the network, while the corresponding volumes on the secondary server are locked to prevent access to the data. The primary and secondary servers monitor each other's state over a heartbeat line, and when the primary server fails, the secondary server takes over its applications within a very short time.

Server mirroring is low-cost, improves system availability, and keeps the system usable when one server goes down; however, it is limited to clusters of two servers and does not scale.

2. Application error-takeover cluster technology

Error-takeover cluster technology connects two or more servers on the same network through cluster software. Each server in the cluster runs its own applications, has its own broadcast address, and serves front-end users, while also monitoring the operating state of the other servers and acting as a hot backup for designated ones. When a node fails, the server designated in the cluster system takes over the failed machine's data and applications within a very short time and continues serving front-end users.

Error-takeover clusters usually require a shared external storage device, a disk array cabinet: two or more servers are connected to it via SCSI cables or fibre, and the data is stored there. In such cluster systems the two nodes usually back each other up, rather than several servers all backing up one server at the same time. The nodes monitor each other's heartbeat through a serial port, a shared disk partition, or an internal network; a minimal sketch of this heartbeat check follows below.

Error-takeover clustering is often used for database servers, mail servers, and the like. Because it uses shared storage, it adds peripheral cost, but it can scale to clusters of up to 32 machines, greatly improving system availability and scalability.
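
The mutual heartbeat monitoring mentioned above can be sketched as a simple timeout check: a node records when it last heard from its peer and starts a takeover once the peer has been silent too long. The node name, timeout, and takeover actions below are assumptions for illustration; real clusters carry heartbeats over serial ports, shared disk partitions, or a private network and do far more during takeover.

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0   # assumed silence threshold

class PeerMonitor:
    def __init__(self, peer_name: str) -> None:
        self.peer_name = peer_name
        self.last_seen = time.monotonic()

    def heartbeat_received(self) -> None:
        """Called whenever a heartbeat message arrives from the peer."""
        self.last_seen = time.monotonic()

    def check(self) -> None:
        """Start a takeover if the peer has missed its heartbeats."""
        if time.monotonic() - self.last_seen > HEARTBEAT_TIMEOUT_S:
            self.take_over()

    def take_over(self) -> None:
        # In a real cluster this would mount the shared disk partition, bring up
        # the peer's service address, and restart its applications.
        print(f"peer {self.peer_name} silent -> taking over its applications")

monitor = PeerMonitor("node-2")
monitor.heartbeat_received()     # peer is alive
monitor.check()                  # no action: within the timeout
monitor.last_seen -= 10          # simulate 10 s of silence
monitor.check()                  # prints the takeover message
```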

3. Fault-tolerant cluster technology

A typical application of fault-tolerant clustering is the fault-tolerant machine, in which every component has a redundant design. In fault-tolerant cluster technology, each node is closely coupled with the others, and they often share important subsystems such as memory, hard disks, CPU, and I/O. The nodes of a fault-tolerant cluster are collectively presented as a single independent system, of which every node is a part, and applications can switch between nodes smoothly, with no switching time.

Implementing fault-tolerant clusters usually requires special software and hardware design and is therefore expensive, but fault-tolerant systems maximize availability and are the best choice for the financial and security sectors.

At present, application error-takeover technology is the most widely used way to improve system availability, that is, the familiar cluster in which two machines share a disk array over SCSI cables. Cluster software vendors and operating system vendors have extended this technology further, producing the various cluster systems now on the market.

High-performance cluster system technology based on IA architecture

This high-performance server cluster system, built from the latest 4-way and 8-way IA servers, adopts the leading VI (Virtual Interface) technology to effectively remove the bottleneck in communication between nodes, while its load balancing technology keeps the users' equipment fully utilized. It reaches four-nines (99.99%) reliability with an extremely high price-performance ratio, providing a powerful database server platform for Internet applications.

1. System Overview

The data center field has long been the territory of high-end RISC servers. For years the only choices were midrange and mainframe systems such as the AS/400, E10000, and HP9000, which are expensive to buy and maintain. IA architecture servers are cheap, easy to use, and easy to maintain, and with cluster technology they can be built into supercomputer-class systems whose processing power can replace expensive midrange and mainframe machines, opening a new direction for high-end applications in the industry.

For large, growing users, the enterprise operation and management data stored in data centers or data warehouses grows at an astonishing rate, and this data matters enormously to them. Data accumulated over several years of development is a valuable asset: by analyzing this vast amount of data, operators obtain intuitive business charts and curves that provide strong decision support for future development. But as the data keeps expanding over time, it also puts huge pressure on the IT system administrators of these large users.

What kind of server does a user really need to meet current and future requirements?

First, it must have enormous computing power and be able to withstand sustained concurrent access from large numbers of users.

Second, the high availability, ease of use, and ease of management of the server system are also extremely important. If the system fails, interrupting service or losing important information, the user suffers unrecoverable losses, so a highly available system solution must be considered when choosing a server system.

Third, as data keeps accumulating, queries and statistics make the system slower and slower, so upgrading the hardware is an unavoidable task for a large, growing user.

This high-performance server cluster system, built from the latest 4-way and 8-way IA servers [1], adopts the leading VI (Virtual Interface) technology to effectively remove the bottleneck in communication between nodes, while its load balancing technology keeps the users' equipment fully utilized, reaching four-nines (99.99%) reliability with an extremely high price-performance ratio. Since its launch in 1999 it has attracted widespread attention from users and provides a strong database server platform for domestic users.

2. System Principles

The high-performance server cluster system is built from 2 or 4 nodes, with up to 32 CPUs and up to 32 GB of memory. Four nodes form a working unit, and up to 16 working units can be cascaded.

Each node is an IA server supporting 4-way or 8-way Pentium III Xeon parallel processing. Each server is fitted with a Gigabit network card or a VI-architecture high-speed switching card and connected to a high-speed switch (either a Gigabit Ethernet switch or a special high-speed switch, such as one with the VI architecture) used for data exchange between servers; this switch is called a SAN (Server Area Network) switch.

Each server also has a 100 Mb/s or Gigabit Ethernet card connected to the LAN switch or hub, providing the connection that clients access.

The four servers share a Fibre Channel disk array cabinet. Each server has two Fibre Channel cards connected to two Fibre Channel hubs, which in turn connect to the two controllers in the cabinet. As long as one controller in the cabinet works, the whole array works, so this configuration is redundant and prevents a single point of failure. For the most important data, the cluster system and the disk array cabinet can also be located separately to keep the data safe: Fibre Channel allows a distance of up to 10 kilometers between them.

Each server has a hard disk area for installing the boot system and the management part of the database system of the machine. User data is stored in a shared disk array cabinet.

On the LAN, one client acts as a management console, with a database management console installed on it to manage the parallel database. It can monitor the database instances on all four nodes at the same time and start them, stop them, and monitor their performance.

The network management system, SAN management console, disk cabinet management console, UPS management console, and so on are also installed on this client, providing unified management of the cluster system. Some management functions need only the TCP/IP protocol, while others also require SNMP to work.

Besides excellent performance figures, a good cluster system needs matching operating system and database support. The current cluster system supports the Windows NT 4.0 and Windows 2000 operating systems [2] and supports the Oracle and DB2 databases. The parallel database does not run on a stand-alone machine; it shows its performance only when multiple nodes work together, which is what lets the system truly achieve load balancing.

2.1 Two-node cluster system

In terms of configuration, users can choose according to their needs, and two high-end servers can be used to implement one virtual host. In this case, VI-architecture high-speed switching devices are more advantageous: no VI switch is needed, and high-speed data exchange between the servers can be achieved by directly connecting the VI-architecture switching card in each server. If Gigabit Ethernet cards are used instead, a Gigabit switch is also required, which raises the cost.
