Table of Contents
Installation safety recommendations
Liquid cooling system requirements
Installation site requirements
Airflow direction of the server
Temperature and humidity requirements
Equipment room height requirements
Corrosive gas concentration requirements
Installing or removing the server
Installing cable management brackets
Connecting a mouse, keyboard, and monitor
Removing the server from a rack
Powering on and powering off the server
Configuring basic BIOS settings
Installing the operating system and hardware drivers
Installing the operating system
Processor installation guidelines
Replacing the server management module
Removing the server management module
Installing the server management module
Replacing riser cards and PCIe modules
Riser card and PCIe module compatibility
Removing a riser card and a PCIe module
Installing a riser card and a PCIe module
Installing PCIe modules and a riser card on PCIe riser connector 3
Installing PCIe modules and a riser card on PCIe riser connector 4
Replacing a storage controller and a power fail safeguard module
Removing a standard storage controller and a power fail safeguard module
Installing a standard storage controller and a power fail safeguard module
Replacing the chassis air baffle
Replacing a standard PCIe network adapter
Replacing an OCP network adapter
Replacing a SATA M.2 SSD and a front SATA M.2 SSD expander module
Removing a SATA M.2 SSD and a SATA M.2 SSD expander module
Installing a SATA M.2 SSD and a SATA M.2 SSD expander module
Replacing an NVMe M.2 SSD and an NVMe M.2 SSD expander module
Removing an NVMe M.2 SSD and an NVMe M.2 SSD expander module
Installing an NVMe M.2 SSD and an NVMe M.2 SSD expander module
Replacing a serial & DSD module
Removing a serial & DSD module
Installing a serial & DSD module
Removing an SD card and serial & DSD module
Installing an SD card and serial & DSD module
Adding an LCD smart management module
Replacing the LCD smart management module
Removing the LCD smart management module
Installing the LCD smart management module
Replacing a chassis air baffle
Installing a chassis air baffle
Installing and setting up a TCM or TPM
Installation and setup flowchart
Enabling the TCM or TPM in the BIOS
Configuring encryption in the operating system
Replacing the NVMe VROC module
Installing the NVMe VROC module
Installing a GPU module on the rear 4GPU module
Removing and installing a blank
Connecting cables for the OCP network adapter
Connecting the supercapacitor cable
Connecting cables for the mid GPU module
Connecting cables for the rear 4GPU module
Connecting cables for the front M.2 SSD expander module
Connecting cables for riser cards
Connecting the LCD smart management module cable
Connecting an inlet temperature sensor cable
Monitoring the temperature and humidity in the equipment room
Updating firmware for the server
Safety information
For more information, see the operating environment requirements for H3C indoor devices.
Safety sign conventions
To avoid bodily injury or damage to the server or its components, make sure you are familiar with the safety signs on the server chassis or its components.
· Circuit or electricity hazards are present. Only H3C authorized or professional server engineers are allowed to service, repair, or upgrade the server. To avoid bodily injury or damage to circuits, do not open any components marked with the electrical hazard sign unless you have authorization to do so.
· Electrical hazards are present. Field servicing or repair is not allowed. To avoid bodily injury, do not open any components marked with the field-servicing forbidden sign under any circumstances.
· The RJ-45 ports on the server can be used only for Ethernet connections. To avoid electrical shocks, fire, or damage to the equipment, do not connect an RJ-45 port to a telephone.
· The surface or component might be hot and present burn hazards. To avoid being burnt, allow hot surfaces or components to cool before touching them.
· The server or component is heavy and requires more than one person to carry or move. To avoid bodily injury or damage to hardware, do not move a heavy component alone. In addition, observe local occupational health and safety requirements and guidelines for manual material handling.
· The server is powered by multiple power supplies. To avoid bodily injury from electrical shocks, make sure you disconnect all power supplies if you are performing offline servicing.
Power source recommendations
Unstable power or a power outage might cause data loss, service disruption, or, in the worst case, damage to the server.
To protect the server from unstable power or power outages, use uninterruptible power supplies (UPSs) to provide power for the server.
Installation safety recommendations
To avoid bodily injury or damage to the server, read the following information carefully before you operate the server.
General operating safety
To avoid bodily injury or damage to the server, follow these guidelines when you operate the server:
· Only H3C authorized or professional server engineers are allowed to install, service, repair, operate, or upgrade the server.
· Place the server on a clean, stable table or floor for servicing.
· Make sure all cables are correctly connected before you power on the server.
· To avoid being burnt, allow the server and its internal modules to cool before touching them.
Electrical safety
WARNING! If you put the server in standby mode (system power LED in amber) with the power on/standby button on the front panel, the power supplies continue to supply power to some circuits in the server. To remove all power for servicing safety, you must first press the button, wait for the system to enter standby mode, and then remove the power cords from the server.
To avoid bodily injury or damage to the server, follow these guidelines:
· Always use the power cords that came with the server.
· Do not use the power cords that came with the server for any other devices.
· Power off the server when installing or removing any components that are not hot swappable.
Rack mounting recommendations
To avoid bodily injury or damage to the equipment, follow these guidelines when you rack mount a server:
· Mount the server in a standard 19-inch rack.
· Make sure the leveling jacks are extended to the floor and the full weight of the rack rests on the leveling jacks.
· Couple the racks together in multi-rack installations.
· Load the rack from the bottom to the top, with the heaviest hardware unit at the bottom of the rack.
· Get help to lift and stabilize the server during installation or removal, especially when the server is not fastened to the rails. As a best practice, a minimum of two people are required to safely load or unload a rack. A third person might be required to help align the server if the server is installed higher than chest level.
· For rack stability, make sure only one unit is extended at a time. A rack might become unstable if more than one server unit is extended.
· Make sure the rack is stable when you operate a server in the rack.
· To maintain correct airflow and avoid thermal damage to the server, use blank panels to fill empty rack units.
ESD prevention
Preventing electrostatic discharge
To prevent electrostatic damage, follow these guidelines:
· Transport or store the server with the components in antistatic bags.
· Keep the electrostatic-sensitive components in separate antistatic bags until they arrive at an ESD-protected area.
· Place the components on a grounded surface before removing them from their antistatic bags.
· Avoid touching pins, leads, or circuitry.
Grounding methods to prevent electrostatic discharge
The following are grounding methods that you can use to prevent electrostatic discharge:
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Take adequate personal grounding measures, including wearing antistatic clothing and static dissipative shoes.
· Use conductive field service tools.
· Use a portable field service kit with a folding static-dissipating work mat.
Cooling performance
Poor cooling performance might result from improper airflow and poor ventilation and might cause damage to the server.
To ensure good ventilation and proper airflow, follow these guidelines:
· Install blanks if the following module slots are empty:
¡ Drive bays.
¡ Fan bays.
¡ PCIe slots.
¡ Power supply slots.
· Do not block the ventilation openings in the server chassis.
· To avoid thermal damage to the server, do not operate the server for long periods in any of the following conditions:
¡ Access panel open or uninstalled.
¡ Air baffles uninstalled.
¡ PCIe slots, drive bays, fan bays, or power supply slots empty.
Battery safety
The server's system board contains a system battery, which has a designed lifespan of 3 to 5 years.
If the server no longer automatically displays the correct date and time, you might need to replace the battery. When you replace the battery, follow these safety guidelines:
· Do not attempt to recharge the battery.
· Do not expose the battery to a temperature higher than 60°C (140°F).
· Do not disassemble, crush, puncture, short external contacts, or dispose of the battery in fire or water.
· Dispose of the battery at a designated facility. Do not throw the battery away together with other waste.
Preparing for installation
Prepare a rack that meets the rack requirements and plan an installation site that meets the requirements for space and airflow, temperature, humidity, equipment room height, cleanliness, and grounding.
Rack requirements
IMPORTANT: To avoid affecting the server chassis, install power distribution units (PDUs) with the outputs facing backwards. If you install PDUs with the outputs facing the inside of the rack, perform an onsite survey to make sure the cables will not interfere with the server rear.
Liquid-cooled modules not installed
The server is 2U high and has a depth of 780 mm (30.71 in). The rack for installing the server must meet the following requirements:
· A standard 19-inch rack.
· A clearance of more than 50 mm (1.97 in) between the rack front posts and the front rack door.
· A minimum of 1200 mm (47.24 in) in depth as a best practice. For installation limits for different rack depths, see Table 2.
Table 2 Installation requirements for different rack depths
| Rack depth | Installation requirements |
| --- | --- |
| 1000 mm (39.37 in) | The H3C cable management arm (CMA) is not supported. Reserve a clearance of 60 mm (2.36 in) from the server rear to the rear rack door for cabling. The slide rails and PDUs might hinder each other, so perform an onsite survey to determine the PDU installation location and the proper PDUs. If the PDUs still hinder the installation and movement of the slide rails, use another method to support the server, such as a tray. |
| 1100 mm (43.31 in) | Make sure the CMA does not hinder PDU installation at the server rear before installing the CMA. If the CMA hinders PDU installation, use a deeper rack or change the installation locations of the PDUs. |
| 1200 mm (47.24 in) | Make sure the CMA does not hinder PDU installation or cabling. If the CMA hinders PDU installation or cabling, change the installation locations of the PDUs. For detailed installation suggestions, see Figure 1. |
Figure 1 Installation recommendations for a 1200 mm deep rack (top view)
(1) 1200 mm (47.24 in) rack depth
(2) A minimum of 50 mm (1.97 in) between the front rack posts and the front rack door
(3) 760 mm (29.92 in) between the front rack posts and the rear of the chassis, including power supply handles at the server rear (not shown in the figure)
(4) 780 mm (30.71 in) server depth, including chassis ears
(5) 940 mm (37.01 in) between the front rack posts and the CMA
(6) 840 mm (33.07 in) between the front rack posts and the rear ends of the slide rails
Liquid-cooled modules installed
The server is 2U high and has a depth of 814.7 mm (32.05 in). The rack for installing the server must meet the requirements in Table 3. As a best practice, use the server together with the H3C cold plate liquid cooling system. For more information about the liquid cooling system, see Cold Plate Liquid Cooling System User Guide. The server can also be used without the H3C cold plate liquid cooling system, but a survey of the customer's site is required before use. Contact technical support for details.
Table 3 Requirements for installing liquid cooling systems on servers
| Item | Installation limits |
| --- | --- |
| Coolant flow rate per server | ≥ 1 liter per minute (LPM). |
| Pressure differential between server inlet and outlet | ≥ 25 kPa. |
| Supported liquid inlet temperature at the server (CDU secondary supply liquid temperature) | 5°C to 50°C (41°F to 122°F), with a recommended value of 40°C (104°F). To prevent condensation, the minimum supply liquid temperature must be a minimum of 3°C (5.4°F) higher than the dew point temperature. The dew point temperature can be measured with a dew point hygrometer. |
| Working pressure of the liquid cooling system | ≤ 3.5 bar, with a recommended value of ≤ 2.5 bar. |
| Filtration precision at the secondary side | ≤ 50 µm. |
Liquid cooling system requirements
A liquid cooling system is required for a server installed with liquid cooled modules.
Figure 2 Liquid cooling system
(1) Liquid-cooled server
(2) Coolant distribution unit (CDU)
(3) Rack
(4) Manifold
CDU requirements
The CDU transfers heat from the secondary cooling loop, which carries the coolant, into the primary cooling loop, and the primary cooling loop then carries the heat out of the equipment room. As a best practice, deploy the CDU inside the server rack, with a ratio of one CDU per rack.
Table 4 CDU specifications
| Item | Specification |
| --- | --- |
| Cooling capability | ≥ 35 kW |
| Temperature in the secondary cooling loop | 15°C to 45°C (59°F to 113°F) |
| Coolant flow in the secondary cooling loop | ≥ N × 1.5 LPM (N is the number of servers) |
| Materials | Pure water, ethylene glycol, or propylene glycol aqueous solutions |

NOTE:
· Primary cooling loop: Circulation of the cooling liquid between the external heat dissipation facilities (such as cooling towers) and the CDU.
· Secondary cooling loop: Circulation of the coolant between the liquid cooling equipment in the CDU and the server rack.
Manifold requirements
The manifold connects the liquid cooled modules in the server to the CDU and provides a channel for circulation of the coolant. The manifold must meet the following requirements:
· The normal operating pressure is not less than 6.9 bar.
· Drainage and exhaust systems are available.
· The material is compatible with pure water, ethylene glycol, or propylene glycol aqueous solutions. As a best practice, use a stainless steel manifold.
· The surface is free of scratches or oil stains.
· The interior is kept clean and dry.
Quick coupling requirements
A quick coupling is a connector between the liquid cooled module in the server and the manifold. This server uses universal quick disconnects (UQDs), with a UQD04 fitting and an equivalent fluid passage diameter of 5 mm (0.20 in). The connector on the manifold side that connects to the server also needs to be compatible with this quick coupling. If there are any special requirements for quick couplings, contact H3C Support.
Power distribution unit (PDU) requirements
The estimated maximum power of a single server with liquid-cooled modules installed is 1600 W. Configure the total PDU power based on the number of servers deployed.
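The flow and power figures in this chapter are per-server multipliers, so rack-level sizing is a simple calculation. The following minimal Python sketch (a hypothetical helper, not an H3C tool) estimates the CDU secondary-loop flow from Table 4 and the total PDU power budget from the 1600 W per-server estimate above:

```python
def size_liquid_cooled_rack(num_servers: int) -> dict:
    """Estimate rack-level liquid cooling and power requirements.

    Figures from this chapter: the CDU secondary loop needs
    N x 1.5 LPM of coolant flow (Table 4), and each liquid-cooled
    server draws an estimated maximum of 1600 W.
    """
    return {
        "cdu_flow_lpm": num_servers * 1.5,   # secondary-loop coolant flow
        "pdu_power_w": num_servers * 1600,   # total PDU power budget
    }

# Example: a rack of 16 servers needs >= 24 LPM of coolant flow
# and a PDU budget of at least 25600 W (25.6 kW).
print(size_liquid_cooled_rack(16))
```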
Coolant requirements
The coolant carries heat transferred in the liquid cooling system for servers. As a best practice, use Propylene Glycol 25 Vol% (PG25) coolant recommended by Intel. If there are any special requirements for the coolant, contact H3C Support.
Table 5 Coolant specifications
| Item | Parameter |
| --- | --- |
| Composition | Deionized water solution containing 25% propylene glycol |
| Inlet liquid temperature | 15°C to 45°C (59°F to 113°F). NOTE: To prevent condensation, the minimum supply liquid temperature must be greater than the dew point temperature by 2°C to 3°C (3.6°F to 5.4°F). A dew point hygrometer can be used to measure the dew point. |
| Total microbial count | < 10⁵ CFU/ml |
| Impurity particles | < 50 µm |
| pH value | 8 to 10.5 |
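The dew point margin called out in Table 3 and Table 5 can also be estimated numerically. The sketch below uses the Magnus approximation for dew point (an assumption of this example; the guide itself calls for measurement with a dew point hygrometer) and applies the 3°C margin from Table 3:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate the dew point via the Magnus formula (a = 17.62, b = 243.12 C)."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

def min_supply_temp_c(room_temp_c: float, rel_humidity_pct: float,
                      margin_c: float = 3.0) -> float:
    """Lowest safe supply liquid temperature: dew point plus a safety margin.

    Table 3 calls for a 3 C margin; Table 5 allows 2 C to 3 C.
    """
    return dew_point_c(room_temp_c, rel_humidity_pct) + margin_c

# Example: at 25 C room temperature and 60% relative humidity, the dew point
# is about 16.7 C, so the supply liquid should be no colder than about 19.7 C.
print(round(min_supply_temp_c(25.0, 60.0), 1))
```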
Installation site requirements
Airflow direction of the server
Figure 3 Airflow direction of the server
(1) and (2) Directions of the airflow into the chassis and power supply
(3) Directions of the airflow out of the power supply
(4) and (5) Directions of the airflow out of the chassis
Temperature and humidity requirements
To ensure correct operation of the server, make sure the room temperature and humidity meet the requirements as described in "Appendix A Server specifications."
Equipment room height requirements
To ensure correct operation of the server, make sure the equipment room height meets the requirements as described in "Appendix A Server specifications."
Corrosive gas concentration requirements
Corrosive gases can accelerate corrosion and aging of metal components and even cause server failure. Table 6 describes common corrosive gases and their sources.
Table 6 Common corrosive gases and their sources
| Corrosive gas | Sources |
| --- | --- |
| Hydrogen sulfide (H2S) | Geothermal emissions, microbiological activities, fossil fuel processing, wood pulping, sewage treatment, combustion of fossil fuel, auto emissions, ore smelting, and sulfuric acid manufacture. |
| Sulfur dioxide (SO2) and sulfur trioxide (SO3) | Combustion of fossil fuel, auto emissions, ore smelting, sulfuric acid manufacture, and tobacco smoke. |
| Sulfur (S) | Foundries and sulfur manufacture. |
| Hydrogen fluoride (HF) | Fertilizer manufacture, aluminum manufacture, ceramics manufacture, steel manufacture, electronics device manufacture, and fossil fuel. |
| Nitrogen oxides (NOx) | Automobile emissions, fossil fuel combustion, microbes, and the chemical industry. |
| Ammonia (NH3) | Microbes, sewage, fertilizer manufacture, geothermal steam, refrigeration equipment, cleaning products, and reproduction (blueprint) machines. |
| Carbon monoxide (CO) | Combustion, automobile emissions, microbes, trees, and wood pulping. |
| Chlorine (Cl2) and chlorine dioxide (ClO2) | Chlorine manufacture, aluminum manufacture, paper mills, refuse decomposition, and cleaning products. |
| Hydrochloric acid (HCl) | Automobile emissions, combustion, oceanic processes, and polymer combustion. |
| Hydrobromic acid (HBr) and hydroiodic acid (HI) | Automobile emissions. |
| Ozone (O3) | Atmospheric photochemical processes mainly involving nitrogen oxides and oxygenated hydrocarbons, automotive emissions, and electrostatic filters. |
| Hydrocarbons (CnHn) | Automobile emissions, fossil fuel processing, tobacco smoke, water treatment, microbes, paper mills, and many other sources, both natural and industrial. |
Requirements of corrosive gas concentration vary by server model. For information about the requirements, see the installation guide of the server.
Requirements for the data center equipment room
As a best practice, make sure the corrosive gas concentration for the data center equipment room meets the requirements of severity level G1 of ANSI/ISA 71.04-1985. The rate of copper corrosion product thickness growth must be less than 300 Å/month, and the rate of silver corrosion product thickness growth must be less than 200 Å/month. Angstrom (Å) is a metric unit of length equal to one ten-billionth of a meter.
To meet the copper and silver corrosion rates stated in severity level G1, make sure the corrosive gases in the equipment room do not exceed the concentration limits as shown in Table 7.
Table 7 Corrosive gas concentration limits in the data center equipment room
| Corrosive gas | Concentration (ppb) |
| --- | --- |
| H2S | < 3 |
| SO2, SO3 | < 10 |
| Cl2 | < 1 |
| NOx | < 50 |
| HF | < 1 |
| NH3 | < 500 |
| O3 | < 2 |

Remarks: The concentration limits are calculated based on the reaction results of the gases in an equipment room with a relative humidity lower than 50%. For every 10% increase in the relative humidity of the equipment room, the ANSI/ISA 71.04-1985 severity level that must be met also increases by one level.
NOTE: Part per billion (ppb) is a concentration unit. 1 ppb represents a volume-to-volume ratio of 1 to 1,000,000,000.
Requirements for the non-data center equipment room
The corrosive gas concentration for the non-data center equipment room must meet the requirements of class 3C2 of IEC 60721-3-3:2002, as shown in Table 8.
Table 8 Corrosive gas concentration limits in the non-data center equipment room
| Gas | Average concentration (mg/m³) | Maximum concentration (mg/m³) |
| --- | --- | --- |
| SO2 | 0.3 | 1.0 |
| H2S | 0.1 | 0.5 |
| Cl2 | 0.1 | 0.3 |
| HCl | 0.1 | 0.5 |
| HF | 0.01 | 0.03 |
| NH3 | 1.0 | 3.0 |
| O3 | 0.05 | 0.1 |
| NOx | 0.5 | 1.0 |
CAUTION: As a best practice, keep the corrosive gas concentrations in the equipment room at their average values. Make sure the corrosive gas concentrations stay at their maximum values for no more than 30 minutes per day.
Guidelines for controlling corrosive gases
To control corrosive gases, follow these guidelines:
· As a best practice, do not build the equipment room in a place with a high concentration of corrosive gases.
· Make sure the equipment room is not connected to sewer, sewage, vertical shaft, or septic tank pipelines and keep it far away from these pipelines. The air inlet of the equipment room must be away from such pollution sources.
· Use environmentally friendly materials to decorate the equipment room. Avoid using organic materials that contain harmful gases, such as sulfur-containing or chlorine-containing insulation cotton, rubber mats, and soundproofing cotton, and avoid using plasterboard with a high sulfur concentration.
· Place fuel (diesel or gasoline) engines separately. Do not place them in the same equipment room as the device. Make sure the exhaust air of the engines does not flow into the equipment room or toward the air inlets of the air conditioners.
· Place batteries separately. Do not place them in the same room with the device.
· Employ a professional company to monitor and control corrosive gases in the equipment room regularly.
Cleanliness requirements
Requirements of dust particle concentration vary by server model. For information about the requirements, see the installation guide of the server.
Requirements for the data center equipment room
The concentration of dust particles in the equipment room must meet the ISO 8 cleanroom standard defined by ISO 14644-1, as described in Table 9. Make sure no zinc whiskers are in the equipment room.
Table 9 Dust particle concentration limit in the equipment room
| Particle diameter | Concentration limit |
| --- | --- |
| ≥ 5 µm | ≤ 29,300 particles/m³ |
| ≥ 1 µm | ≤ 832,000 particles/m³ |
| ≥ 0.5 µm | ≤ 3,520,000 particles/m³ |
Requirements for the non-data center equipment room
The concentration of dust particles (particle diameter ≥ 0.5 µm) must meet the requirement of the GB 50174-2017 standard, which is less than 17,600,000 particles/m³.
Guidelines for controlling cleanliness
To maintain cleanliness in the equipment room, follow these guidelines:
· Keep the equipment room away from pollution sources and do not smoke or eat in the equipment room.
· Use double-layer glass in windows and seal doors and windows with dust-proof rubber strips.
· Use dustproof materials for floors, walls, and ceilings, and use matte coatings that do not shed powder.
· Keep the equipment room clean and clean the air filters of the rack regularly.
· Wear ESD clothing and shoe covers before entering the equipment room. Keep the ESD clothing and shoe covers clean and replace them frequently.
Grounding requirements
Correctly connecting the server grounding cable is crucial to lightning protection, anti-interference, and ESD prevention. The server can be grounded through the grounding wire of the power supply system and no external grounding cable is required.
Storage requirements
Follow these guidelines to store storage media:
· As a best practice, do not store an HDD for 6 months or more without powering it on and using it.
· As a best practice, do not store an SSD, M.2 SSD, or SD card for 3 months or more without powering it on and using it. Long periods of disuse increase the risk of data loss.
· To store the server chassis, or an HDD, SSD, M.2 SSD, or SD card for 3 months or more, power it on every 3 months and run it for a minimum of 2 hours each time. For information about powering on and powering off the server, see "Powering on and powering off the server."
Installation tools
Table 10 lists the tools that you might use during installation.
| Name | Description |
| --- | --- |
| T25 Torx screwdriver | Installs or removes screws inside chassis ears. A flat-head screwdriver can also be used for this purpose. |
| T30 Torx screwdriver | Installs or removes captive screws on processor heatsinks. |
| T15 Torx screwdriver (shipped with the server) | Installs or removes screws on the processor system board. |
| T10 Torx screwdriver (shipped with the server) | Installs or removes screws on chassis ears. |
| Flat-head screwdriver | Installs or removes captive screws inside multifunctional rack mount ears or replaces system batteries. |
| Phillips screwdriver | Installs or removes screws on drive carriers. |
| Cage nut insertion/extraction tool | Inserts or extracts the cage nuts in rack posts. |
| Diagonal pliers | Clips insulating sleeves. |
| Tape measure | Measures distance. |
| Multimeter | Measures resistance and voltage. |
| ESD wrist strap | Prevents ESD when you operate the server. |
| Antistatic gloves | Prevents ESD when you operate the server. |
| Antistatic clothing | Prevents ESD when you operate the server. |
| Ladder | Supports high-place operations. |
| Type-C to USB cable | Connects a third-party USB Wi-Fi module so that you can access the HDM interface through the HDM Mobile client on a mobile endpoint, or connects an external USB drive so that you can download SDS logs from the HDM interface and store them on the USB drive. NOTE: Support for the USB Wi-Fi module depends on the server model. |
| USB Wi-Fi module or USB drive | Used together with the Type-C to USB cable, as described in the previous row. |
| Interface cable (such as an Ethernet cable or optical fiber) | Connects the server to an external network. |
| Serial console cable | Connects the serial connector on the server to a monitor for troubleshooting. |
| Monitor | Displays the output from the server. |
| Temperature and humidity meter | Displays current temperature and humidity. |
| Oscilloscope | Displays the variation of voltage over time in waveforms. |
Installing or removing the server
Installing the server
Installing rails
Install the inner rails to the server and the outer rails to the rack. For information about installing the rails, see the document shipped with the rails.
Rack-mounting the server
1. Slide the server into the rack. For more information about how to slide the server into the rack, see the installation guide for the rails.
Figure 4 Rack-mounting the server
2. Secure the server.
a. Push the server until the multifunctional rack mount ears are flush against the rack front posts, as shown by callout 1 in Figure 5.
b. Unlock the latches of the multifunctional rack mount ears, as shown by callout 2 in Figure 5.
c. Fasten the captive screws inside the chassis ears and lock the latches, as shown by callout 3 in Figure 5.
Installing cable management brackets
Install cable management brackets if the server is shipped with cable management brackets. For information about how to install cable management brackets, see the installation guide shipped with the brackets.
Connecting external cables
Cabling guidelines
WARNING! To avoid electric shock, fire, or damage to the equipment, do not connect telephone or telecommunications equipment to the RJ-45 Ethernet ports on the server.
· For heat dissipation, make sure no cables block the inlet or outlet air vents of the fan modules, heatsinks, GPU modules, and PSUs.
· To easily identify ports and connect or disconnect cables, make sure the cables do not cross.
· Label the cables for easy identification.
· Coil up unused cables and secure them to an appropriate position on the rack.
· To avoid damage to cables when you extend the server out of the rack, do not route the cables too tightly if you use cable management brackets.
Connecting a mouse, keyboard, and monitor
About this task
The server provides two DB15 VGA connectors for connecting a monitor. One is on the front panel (left multifunctional rack mount ear is required) and the other is on the rear panel.
The server is not shipped with a standard PS2 mouse and keyboard. To connect a PS2 mouse and keyboard, you must prepare a USB-to-PS2 adapter.
Procedure
1. Connect one plug of a VGA cable to a VGA connector on the server, and fasten the screws on the plug.
Figure 6 Connecting a VGA cable
2. Connect the other plug of the VGA cable to the VGA connector on the monitor, and fasten the screws on the plug.
3. Connect the mouse and keyboard.
¡ For a USB mouse and keyboard, directly connect the USB connectors of the mouse and keyboard to the USB connectors on the server.
¡ For a PS2 mouse and keyboard, insert the USB connector of the USB-to-PS2 adapter to a USB connector on the server. Then, insert the PS2 connectors of the mouse and keyboard into the PS2 receptacles of the adapter.
Figure 7 Connecting a PS2 mouse and keyboard by using a USB-to-PS2 adapter
Connecting an Ethernet cable
About this task
Perform this task before you set up a network environment or log in to the HDM management interface through the HDM network port to manage the server.
Procedure
1. Determine the network port on the server.
¡ To connect the server to the external network, use the Ethernet port on the network adapter.
¡ To log in to the HDM management interface, use the HDM dedicated network port. For the location of the HDM dedicated network port, see "Rear panel."
If the server is configured with an OCP network adapter, you can also use the HDM shared network port on the OCP network adapter to log in to the HDM management interface. For the location of the OCP network adapter, see "Rear panel."
2. Determine the type of the Ethernet cable.
Verify the connectivity of the cable by using a link tester.
If you are replacing the Ethernet cable, make sure the new cable is the same type or compatible with the old cable.
3. Label the Ethernet cable by filling in the names and numbers of the server and the peer device on the label.
As a best practice, use labels of the same kind for all cables.
If you are replacing the Ethernet cable, label the new cable with the same number as the number of the old cable.
4. Connect one end of the Ethernet cable to the network port on the server and the other end to the peer device.
Figure 8 Connecting an Ethernet cable
5. Verify network connectivity.
After powering on the server, use the ping command to test the network connectivity. If the connection between the server and the peer device fails, verify that the Ethernet cable is securely connected. A scripted version of this check is sketched after this procedure.
6. Secure the Ethernet cable. For information about how to secure cables, see "Securing cables."
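If you connect many servers, the ping check in step 5 can be scripted. The following is a minimal sketch that invokes the system ping command through Python's standard library (Linux ping options shown; the peer address is a placeholder):

```python
import subprocess

def host_reachable(host: str, count: int = 4, timeout_s: int = 5) -> bool:
    """Return True if the host answers ICMP echo requests.

    Uses Linux ping flags (-c count, -W per-reply timeout in seconds);
    adjust the flags for other platforms.
    """
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# Example: replace 192.168.1.100 with the address of the peer device.
if not host_reachable("192.168.1.100"):
    print("Connection failed. Verify that the Ethernet cable is securely connected.")
```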
Connecting the power cord
Guidelines
WARNING! To avoid damage to the equipment or even bodily injury, use the power cord that ships with the server.
Before connecting the power cord, make sure the server and components are installed correctly.
Procedure
1. Insert the power cord plug into the power receptacle of a power supply at the rear panel, as shown in Figure 9.
Figure 9 Connecting the power cord
2. Connect the other end of the power cord to the power source, for example, the power strip on the rack.
3. Secure the power cord to avoid unexpected disconnection of the power cord.
Multiple types of wire fasteners can be used for securing the power cord. In this procedure, a cable clamp is used.
a. If the cable clamp is positioned so near the power cord plug that it blocks the plug connection, press down the tab on the cable mount and slide the clamp backward.
Figure 10 Sliding the cable clamp backward
b. Open the cable clamp, place the power cord through the opening in the cable clamp, and then close the cable clamp, as shown by callouts 1, 2, 3, and 4 in Figure 11.
Figure 11 Securing the AC power cord
c. Slide the cable clamp forward until it is flush against the edge of the power cord plug, as shown in Figure 12.
Figure 12 Sliding the cable clamp forward
Securing cables
Securing cables to cable management brackets
For information about how to secure cables to cable management brackets, see the installation guide shipped with the brackets.
Securing cables to slide rails by using cable straps
You can secure cables to either left slide rails or right slide rails. As a best practice for cable management, secure cables to left slide rails.
When multiple cable straps are used in the same rack, stagger the strap locations so that the straps are offset from one another when viewed from top to bottom. This positioning enables the slide rails to slide easily in and out of the rack.
To secure cables to slide rails by using cable straps:
1. Hold the cables against a slide rail.
2. Wrap the strap around the slide rail and loop the end of the cable strap through the buckle.
3. Dress the cable strap so that the extra length and the buckle of the strap face the outside of the slide rail.
Figure 13 Securing cables to a slide rail
Removing the server from a rack
1. Power off the server. For more information, see "Powering off the server."
2. Disconnect all peripheral cables from the server.
3. Extend the server from the rack.
a. Open the latches of the multifunctional rack mount ears, as shown by callout 1 in Figure 14.
b. Loosen the captive screws inside the multifunctional rack mount ears, as shown by callout 2 in Figure 14.
c. Slide the server out of the rack, as shown by callout 3 in Figure 14.
Figure 14 Extending the server from the rack
4. Place the server on a clean, stable surface.
Powering on and powering off the server
Important information
If the server is connected to external storage devices, make sure the server is the first device to power off and then the last device to power on. This restriction prevents the server from mistakenly identifying the external storage devices as faulty devices.
Powering on the server
Prerequisites
Before you power on the server, you must complete the following tasks:
· Install the server and internal components correctly.
· Connect the server to a power source.
Procedure
Powering on the server by pressing the power on/standby button
Press the power on/standby button to power on the server.
The server exits standby mode and supplies power to the system. The system power LED changes from steady amber to flashing green and then to steady green. For information about the position of the system power LED, see "LEDs and buttons."
Powering on the server from the HDM Web interface
1. Log in to HDM.
For information about how to log in to HDM, see H3C Servers HDM2 User Guide.
2. Power on the server.
a. Select System > Power Management.
b. Click Power on.
For more information, see HDM online help.
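HDM can also be driven programmatically. Many HDM versions expose a Redfish-compatible REST API; whether the interface is enabled and the exact system ID depend on your HDM version, so treat the following as a sketch of the standard Redfish reset action rather than a documented HDM procedure. The address and credentials are placeholders:

```python
import requests  # third-party library: pip install requests

HDM_ADDRESS = "192.168.1.100"  # placeholder HDM management address
AUTH = ("admin", "password")   # placeholder credentials

# Standard Redfish reset action. The system ID ("1") is an assumption and
# may differ on your HDM version. ResetType "On" powers the server on;
# "GracefulShutdown" would power it off gracefully.
url = f"https://{HDM_ADDRESS}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset"
# verify=False skips TLS verification; BMCs typically ship self-signed certificates.
response = requests.post(url, json={"ResetType": "On"}, auth=AUTH, verify=False)
print(response.status_code)  # a 2xx status indicates the request was accepted
```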
Powering on the server from the remote console interface
1. Log in to HDM.
For information about how to log in to HDM, see H3C Servers HDM2 User Guide.
2. Log in to a remote console and then power on the server.
For information, see HDM2 online help.
Configuring automatic power-on
You can configure automatic power-on from HDM or the BIOS.
To configure automatic power-on from HDM:
1. Log in to HDM.
For information about how to log in to HDM, see H3C Servers HDM2 User Guide.
2. Configure automatic power-on for the server.
a. Select System > Power Management, and then click System Power Restore.
b. Select Always power on, and then click OK.
To configure automatic power-on from the BIOS:
1. Log in to the BIOS.
For information about how to log in to the BIOS, see the BIOS user guide for the server.
2. Configure automatic power-on for the server.
a. Select Server > AC Restore Settings, and then press Enter.
b. Select Always Power On, and then press Enter.
c. Press F4 to save the configuration.
For more information, see the BIOS user guide for the server.
Powering off the server
Guidelines
Before powering off the server, you must complete the following tasks:
· Back up all critical data.
· Make sure all services have stopped or have been migrated to other servers.
Procedure
Powering off the server from its operating system
1. Connect a monitor, mouse, and keyboard to the server.
2. Shut down the operating system of the server.
3. Disconnect all power cords from the server.
Powering off the server by pressing the power on/standby button
1. Press the power on/standby button and wait for the system power LED to turn steady amber.
2. Disconnect all power cords from the server.
Powering off the server forcedly by pressing the power on/standby button
IMPORTANT: This method forces the server to enter standby mode without properly exiting applications and the operating system. Use this method only when the server system crashes, for example, when a process gets stuck.
1. Press and hold the power on/standby button until the system power LED turns steady amber.
2. Disconnect all power cords from the server.
Powering off the server from the HDM Web interface
1. Log in to HDM.
For information about how to log in to HDM, see H3C Servers HDM2 User Guide.
2. Power off the server.
a. Select System > Power Management.
b. Click Graceful power-off.
3. Disconnect all power cords from the server.
Powering off the server from the remote console interface
1. Log in to HDM.
For information about how to log in to HDM, see H3C Servers HDM2 User Guide.
2. Log in to a remote console and then power off the server.
For information about how to log in to a remote console, see HDM online help.
3. Disconnect all power cords from the server.
Configuring the server
The following information describes the procedures to configure the server after the server installation is complete.
Configuration flowchart
Figure 15 Configuration flowchart
Powering on the server
1. Power on the server. For information about the procedures, see "Powering on the server."
2. Verify that the health LED on the front panel is steady green, which indicates that the system is operating correctly. For more information about the health LED status, see "LEDs and buttons."
Configuring basic BIOS settings
You can set the server boot order and the BIOS passwords from the BIOS setup utility of the server.
NOTE: The BIOS setup utility screens are subject to change without notice.
Setting the server boot order
The server has a default boot order. You can change the server boot order from the BIOS. For the default boot order and the procedure of changing the server boot order, see the BIOS user guide for the server.
Setting the BIOS passwords
BIOS passwords include a boot password as well as an administrator password and a user password for the BIOS setup utility. By default, no passwords are set.
To prevent unauthorized access and changes to the BIOS settings, set both the administrator and user passwords for accessing the BIOS setup utility. Make sure the two passwords are different.
After setting the administrator password and user password for the BIOS setup utility, you must enter the administrator password or user password each time you access the BIOS setup utility.
· To obtain administrator privileges, enter the administrator password.
· To obtain user privileges, enter the user password.
For the difference between the administrator and user privileges and guidelines for setting the BIOS passwords, see the BIOS user guide for the server.
Configuring RAID
Configure physical and logical drives (RAID arrays) for the server.
The supported RAID levels and RAID configuration methods vary by storage controller model. For more information, see the storage controller user guide for the server.
Installing the operating system and hardware drivers
Installing the operating system
Install a compatible operating system on the server by following the procedures described in the operating system installation guide for the server.
For the server compatibility with the operating systems, visit the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.
Installing hardware drivers
IMPORTANT: To avoid hardware unavailability caused by an update failure, always back up the drivers before you update them.
For newly installed hardware to operate correctly, the operating system must have the required hardware drivers.
To install a hardware driver, see the operating system installation guide for the server.
Updating firmware
IMPORTANT: Verify the hardware and software compatibility before firmware upgrade. For information about the hardware and software compatibility, see the software release notes.
You can update the following firmware from UniSystem or HDM:
· HDM.
· BIOS.
· CPLD.
· BPCPLD.
· PSU.
· LCD.
For information about the update procedures, see the firmware update guide for the server.
Replacing hardware options
If you are replacing multiple hardware options, read their replacement procedures and identify similar steps to streamline the entire replacement procedure.
When you remove the access panel for the first time, remove the screws at the two sides of the chassis rear.
Adding a processor
For information about how to add a processor, see H3C UniServer R4900 G6 Ultra Server Processor Installation Quick Start.
If the server is installed with liquid-cooled modules, all processors are already installed and you do not need to add any processor.
Replacing a processor
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
Processor installation guidelines
· You can install one or two processors. If liquid-cooled modules are installed, you must install two processors.
· To avoid damage to a processor or the system board, only H3C authorized or professional server engineers can install, replace, or remove a processor.
· Make sure the processors on the server are the same model.
· The pins in the processor sockets are very fragile and prone to damage. Install a protective cover if a processor socket is empty.
· For the server to operate correctly, make sure processor 1 is in position. For more information about processor locations, see "System board components."
· Different processors might have different heatsinks, but the processor replacement procedure is the same.
· You must paste the barcode label shipped with the processor to the side of the heatsink to cover the original barcode label on the heatsink. This ensures that H3C can provide warranty service for the processor.
Processor model suffixes
If the model of a processor is UN-CPU-INTEL-8490H, the model suffix is H. For more information about the supported processor models, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.
Table 11 displays the meanings of processor model suffixes for the Intel Eagle Stream CPUs.
Table 11 Processor model suffix description
| Processor model suffix | Description | Remarks |
| --- | --- | --- |
| P | Cloud - IaaS | IaaS scenario-based optimization for VM applications requiring high base frequency. |
| V | Cloud - SaaS | SaaS scenario-based optimization for high-density and low-power-consumption VM applications. |
| M | Media Transcode | Media processing scenario-based optimization. |
| H | DB and Analytics | Database and analytics scenario-based optimization. |
| N | Network/5G/Edge (High TPT/Low Latency) | Supports network/5G/edge (high throughput/low latency) services. |
| S | Storage & HCI | Supports storage and hyperconverged infrastructure (HCI). |
| T | Long-life Use/High Tcase | Supports operation with a long life cycle or at a high case temperature (Tcase). |
| U | 1-Socket | Supports only single-processor operation. |
| Q | Liquid cooling | Dedicated for liquid cooling servers. |

This table is for reference only. For detailed information, see the Intel official website.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
· When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and do not contain any foreign objects.
Removing a processor
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. (Optional.) Remove the mid drive cage, mid GPU module, or rear 4GPU module.
5. Remove the chassis air baffle. Open the blue clip on the air baffle and lift the air baffle out of the chassis.
6. (Optional.) Disconnect the liquid leakage detection cable on the liquid-cooled module.
7. Remove the processor heatsink (or the liquid-cooled module):
a. Loosen the four captive screws.
b. Open the clips of the heatsink (or the liquid-cooled module) at the four corners.
c. Lift the heatsink (or the liquid-cooled module) slowly to remove it.
8. Remove the processor:
a. Lift the locking lever to release the processor.
b. Hold the processor to pull it out from the retaining bracket.
9. Remove the processor retaining bracket from the heatsink (or the liquid-cooled module):
a. Release the four corner clips of the retaining bracket from the heatsink (or the liquid-cooled module). Press one clip and its diagonally opposite clip outward, and press the other two clips inward.
b. Lift the retaining bracket to remove it from the heatsink (or the liquid-cooled module).
10. Use an isopropyl alcohol wipe to clear the residual thermal grease from the processor top and the heatsink (or the liquid-cooled module).
Installing a processor
1. Install the retaining bracket onto the heatsink (or the liquid-cooled module):
a. Close the ejector lever on the retaining bracket for secure installation of the processor.
b. Align the alignment triangle on the retaining bracket with the cut-off corner of the heatsink (or the liquid-cooled module). Place the bracket on top of the heatsink (or the liquid-cooled module), with the four corners of the bracket clicked into the four corners of the heatsink (or the liquid-cooled module).
2. Smear thermal grease onto the processor:
a. Clean the heatsink. Make sure no thermal grease remains on the heatsink (or the liquid-cooled module) top.
b. Use the thermal grease injector to inject 0.6 ml of thermal grease onto the processor at five dots, 0.12 ml per dot.
3. Install the processor onto the retaining bracket:
CAUTION: To avoid damage to the processor, always hold the processor by its edges. Never touch the gold contacts on the processor bottom.
a. Tilt the processor, align the small triangle on the processor with the alignment triangle in the retaining bracket, and insert one edge of the processor into the retaining bracket. Place two thumbs against the heatsink (or the liquid-cooled module), press the other end of the processor, and lower the processor into place.
b. Open the clips on the retaining bracket until the processor fits snugly onto the retaining bracket.
4. Install the heatsink onto the server:
a. Align the alignment triangle on the retaining bracket with the cut-off corner of the processor socket, and the pin holes in the heatsink (or the liquid-cooled module) with the guide pins on the processor socket. Lower the heatsink (or the liquid-cooled module) onto the processor socket.
b. Press down the heatsink (or the liquid-cooled module) clips at the four corners to lock the heatsink (or the liquid-cooled module) in place.
c. Use a T30 Torx screwdriver to fasten the four captive screws on the heatsink (or the liquid-cooled module).
CAUTION: To avoid poor contact between the processor and the system board or damage to the pins in the processor socket, tighten the screws to a torque value of 0.9 N·m (8 in-lbs).
5. Paste the barcode label supplied with the processor over the original label on the heatsink (or the liquid-cooled module).
IMPORTANT: This step is required for you to obtain H3C's processor servicing.
6. (Optional.) Connect the leak detection cable on the liquid-cooled module.
7. Install the chassis air baffle.
8. (Optional.) Install the mid drive cage, mid GPU module, or rear 4GPU module.
9. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
10. Rack-mount the server. For more information, see "Rack-mounting the server."
11. Connect the power cord. For more information, see "Connecting the power cord."
12. Power on the server. For more information, see "Powering on the server."
Verifying the replacement
Log in to HDM and view the operating status of the processor to verify that the processor is operating correctly. For more information, see the HDM2 online help.
Replacing a liquid-cooled module
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and do not contain any foreign objects.
Before you replace a liquid-cooled module, read "Processor installation guidelines."
Procedure
CAUTION:
· To avoid processor and system board damage, only H3C-authorized personnel and professional server engineers can replace a liquid-cooled module.
· To prevent damage to the pins on the processor socket, always install a cover over an empty processor socket.
· To prevent ESD damage to electronic components, wear an ESD wrist strap during your operation and make sure the strap is grounded reliably.
Removing a liquid-cooled module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the chassis air baffle. Open the blue clips on the air baffle and lift the air baffle up to remove it from the chassis.
5. Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.
6. Remove the liquid leakage sensor cable.
7. Remove the liquid-cooled module with the processor.
a. Loosen the eight captive screws on the liquid-cooled module.
b. Unlock the eight retaining clips to release the liquid-cooled module.
c. Loosen the screws that secure the liquid inlet and outlet of the liquid-cooled module to the chassis rear.
d. Lift the liquid-cooled module to remove it from the server.
CAUTION: The pins on the processor socket are very fragile. Never touch the pins. Any damage to them might require system board replacement.
8. Remove the two processors one by one.
a. Lift up the locking lever to release the processor.
b. Hold two sides of the processor to detach it from the retaining bracket.
9. Remove the retaining bracket.
a. Release the four corner clips of the retaining bracket.
b. Lift the retaining bracket to remove it from the liquid-cooled module.
10. Clean up the residual thermal grease. Use an isopropyl alcohol wipe to clean the top of the processor and the surface of the liquid-cooled module, and make sure the surfaces are clean.
Installing a liquid-cooled module
1. Unpack the liquid-cooled module. Attach the retaining bracket onto the liquid-cooled module.
a. Close the lever of the liquid-cooled module.
CAUTION: For the processor to sit snugly in place, close the lever on the retaining bracket.
b. Align the alignment triangle of the retaining bracket with the cut-off corner of the liquid-cooled module and press the retaining bracket onto the liquid-cooled module until the four corners of the retaining bracket click into the four corners of the liquid-cooled module.
2. (Optional.) Smear thermal grease onto the processor. Use the thermal grease injector to inject 0.6 ml of thermal grease onto the processor at five dots, 0.12 ml per dot.
CAUTION: A new liquid-cooled module comes with thermal grease applied. If the thermal grease does not function well for cooling, clean the module and reapply thermal grease.
3. Install the two processors onto the retaining bracket one by one.
CAUTION: To avoid damage to a processor, always hold the processor by the edges. Never touch the gold contacts on the processor bottom.
a. Tilt the processor, align the small triangle on the processor with the alignment triangle in the retaining bracket, and insert one edge of the processor into the retaining bracket. Place two thumbs against the liquid-cooled module, press the other end of the processor, and lower the processor into place.
b. Open the clips on the retaining bracket until the processor fits snugly onto the retaining bracket.
c. Repeat the same procedure to install the other processor onto the retaining bracket.
4. Install the liquid-cooled module with the processors and retaining brackets onto the server.
IMPORTANT: Paste the barcode label supplied with the processor over the original label on the liquid-cooled module. This step is required for you to obtain H3C's processor servicing.
a. Align the alignment triangle on the retaining bracket with the cut-off corner of the processor socket, and the screw holes in the liquid-cooled module with the guide pins on the processor socket. Place the liquid-cooled module onto the processor socket.
b. Lock each of the eight retaining clips to secure the liquid-cooled module in place.
c. Fasten the screws that secure the liquid inlet and outlet of the liquid-cooled module to the chassis rear.
d. Use a T30 Torx screwdriver to fasten the eight captive screws on the liquid-cooled module.
CAUTION: As a best practice, tighten the screws to a torque value of 0.9 N·m (8 in-lbs) to avoid poor contact between the processor and the system board or damage to the pins on the processor socket.
5. Connect the liquid leakage detection sensor cable.
6. Install the fan cage. Place the fan cage into the chassis and close the ejector levers.
7. Install the chassis air baffle.
8. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
9. Rack-mount the server. For more information, see "Rack-mounting the server."
10. Connect the power cord. For more information, see "Connecting the power cord."
11. Power on the server. For more information, see "Powering on the server."
Verifying the replacement
Log in to the HDM Web interface to verify that the processor operates correctly after the replacement of the liquid-cooled module. For more information, see the HDM2 online help.
Replacing a DIMM
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
About DIMMs
DDR5 DIMMs can perform parity checks on addresses, but they cannot protect data from loss in the case of an unexpected system power outage.
Guidelines
The server provides eight DIMM channels per processor and each channel has two DIMM slots. If the server has one processor, the total number of DIMM slots is 16. If the server has two processors, the total number of DIMM slots is 32.
Only DDR5 DIMMs are supported.
IMPORTANT: If liquid cooled modules are installed, you must install two processors.
When you install a DIMM, use Table 12 to verify that it is compatible with the processors.
Table 12 DIMM and processor compatibility
| Processor | Memory type @ frequency | Max memory size per processor |
| --- | --- | --- |
| Sapphire Rapids | DDR5 @ 4800 MHz | 6 TB |
DIMM and processor compatibility
To obtain the memory frequency and maximum memory frequency supported by a specific processor, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66. You can query the memory frequency by selecting Memory Module and query the maximum supported memory frequency by selecting Processor.
The actual operating memory frequency is equal to the lesser of the memory frequency or the maximum memory frequency supported by the processors. For example, if the memory frequency is 4400 MHz and the maximum memory frequency supported by processors is 4800 MHz, the actual operating memory frequency is 4400 MHz.
The number of DIMMs per channel (1DPC or 2DPC) can affect the operating DIMM frequency. For more information, see Table 13 and the sketch that follows it.
Table 13 Operating DIMM frequency with different DPC configuration
CPU type | DDR5 DIMM frequency | DPC configuration | Operating DIMM frequency |
---|---|---|---|
Sapphire Rapids | 4800 MHz | 1 DPC | 4800 MHz |
 | | 2 DPC | 4400 MHz |
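These rules reduce to taking the minimum of the DIMM frequency, the processor's maximum supported memory frequency, and the DPC-dependent cap. The following is a minimal Python sketch using the Sapphire Rapids values from Table 13; other processors may use different caps, so treat the table values as assumptions tied to this example:

```python
def operating_dimm_frequency(dimm_mhz: int, cpu_max_mhz: int, dpc: int) -> int:
    """Return the operating DIMM frequency in MHz.

    The operating frequency is the lesser of the DIMM frequency and the
    processor's maximum supported memory frequency, further capped by the
    DPC configuration (caps taken from Table 13 for Sapphire Rapids).
    """
    dpc_cap = {1: 4800, 2: 4400}  # Table 13: 1 DPC -> 4800 MHz, 2 DPC -> 4400 MHz
    return min(dimm_mhz, cpu_max_mhz, dpc_cap[dpc])

# Example from the text: 4400 MHz DIMMs with a 4800 MHz processor run at 4400 MHz.
print(operating_dimm_frequency(4400, 4800, 1))  # 4400
print(operating_dimm_frequency(4800, 4800, 2))  # 4400 (2 DPC cap)
```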
Installation guidelines
When you install DDR5 DIMMs, follow these restrictions and guidelines:
· Make sure the processors corresponding to the populated DIMM slots are present before powering on the server.
· As a best practice, install DDR5 DIMMs that have the same product code and DIMM specification (type, capacity, rank, and frequency). For information about DIMM product codes, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66. To install components or replace faulty DIMMs of other specifications, contact Technical Support.
· For the configured memory mode to take effect, make sure the following installation requirements are met:
Memory mode | DIMM population requirements |
---|---|
Independent | If one processor is present, see Figure 16. If two processors are present, see Figure 17 and Figure 18. |
Mirror | If one processor is present, this mode is supported only when 8 or 16 DIMMs are installed (see Figure 16). If two processors are present, this mode is supported only when 16 or 32 DIMMs are installed (see Figure 17 and Figure 18). |
NOTE:
· If the DIMM configuration does not meet the requirements for the configured memory mode, the system uses the default memory mode (Independent mode).
· In Figure 16, Figure 17, and Figure 18, the black DIMM slots (for example, the D1 slot) are grey colored, and the white DIMM slots (for example, the D0 slot) are not colored.
Figure 16 DDR5 DIMM population schemes for one processor
Figure 17 DDR5 DIMM population schemes for two processors (1)
Figure 18 DDR5 DIMM population schemes for two processors (2)
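As a quick illustration of the population requirements above, the following hedged Python sketch validates DIMM counts for Mirror mode; slot-level placement must still follow Figure 16, Figure 17, and Figure 18, which this sketch does not check:

```python
def mirror_mode_supported(processors: int, dimms: int) -> bool:
    """Check DIMM counts for Mirror mode per the population table above.

    Counts only: 8 or 16 DIMMs with one processor, 16 or 32 with two.
    Slot placement must additionally match Figures 16 through 18.
    """
    valid = {1: {8, 16}, 2: {16, 32}}
    return dimms in valid.get(processors, set())

print(mirror_mode_supported(1, 8))   # True
print(mirror_mode_supported(2, 24))  # False: system falls back to Independent mode
```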
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Removing a DIMM
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. (Optional.) Remove the mid drive cage, mid GPU module, or rear 4GPU module.
5. Remove the chassis air baffle. Open the blue clip on the air baffle and lift the air baffle out of the chassis.
6. Open the DIMM slot latches and pull the DIMM out of the slot to remove the DIMM.
CAUTION: To avoid damage to DIMMs or the system board, make sure the server has been powered off normally and the power cord has been disconnected for more than 20 seconds before removing a DIMM. |
Installing a DIMM
1. Install the DIMM. Align the notch on the DIMM with the connector key in the DIMM slot and press the DIMM into the socket until the latches lock the DIMM in place.
2. Install the chassis air baffle.
3. (Optional.) Install the mid drive cage, mid GPU module, or rear 4GPU module.
4. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
5. Rack-mount the server. For more information, see "Rack-mounting the server."
6. Connect the power cord. For more information, see "Connecting the power cord."
7. Power on the server. For more information, see "Powering on the server."
8. (Optional.) To modify the memory mode, enter the BIOS and configure the memory mode as described in the BIOS user manual for the server.
Verifying the replacement
Use one of the following methods to verify that the DIMM is installed correctly:
· Using the operating system:
¡ In Windows, select Run in the Start menu, enter msinfo32, and verify the memory capacity of the DIMM.
¡ In Linux, execute the cat /proc/meminfo command to verify the memory capacity (see the sketch after this list).
· Using HDM:
Log in to HDM and verify the memory capacity of the DIMM. For more information, see the HDM2 online help.
· Using BIOS:
Access the BIOS, select Advanced > Socket Configuration > Memory Configuration > Memory Topology, and press Enter. Then, verify the memory capacity of the DIMM.
If the memory capacity displayed is inconsistent with the actual capacity, remove and then reinstall the DIMM, or replace the DIMM with a new DIMM.
If the DIMM is in Mirror mode, it is normal that the displayed capacity is smaller than the actual capacity.
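For the Linux method above, the following minimal Python sketch (assuming a standard Linux /proc/meminfo) reads MemTotal and compares it against the expected installed capacity. The expected value and the 10 percent slack for firmware and kernel reservations are illustrative assumptions, not values from this guide:

```python
def installed_memory_gib() -> float:
    """Return MemTotal from /proc/meminfo in GiB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kib = int(line.split()[1])  # value is reported in kB
                return kib / (1024 * 1024)
    raise RuntimeError("MemTotal not found in /proc/meminfo")

expected_gib = 256  # hypothetical: 8 x 32 GiB DIMMs
reported = installed_memory_gib()
# MemTotal excludes memory reserved by firmware and the kernel, so allow slack.
# Remember that Mirror mode also halves the visible capacity, as noted above.
if reported < expected_gib * 0.9:
    print(f"Reported {reported:.1f} GiB; reseat or replace the DIMM, or check the memory mode.")
```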
Replacing the system board
Guidelines
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. |
To prevent electrostatic discharge, place the removed parts on an antistatic surface or in antistatic bags.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Removing the system board
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the OCP network adapter.
4. Remove the power supplies.
5. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
6. Remove the chassis air baffle. Open the blue clip on the air baffle, and lift the air baffle out of the chassis.
7. Remove all fans.
8. Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.
9. (Optional.) Remove the mid drive cage, mid GPU module, or rear 4GPU module.
10. Disconnect all cables connected to the system board.
11. Remove the cable baffle.
12. Remove all components installed on the system board, for example, riser cards, DIMMs, and processors.
13. Install protective covers over the empty processor sockets. Place a cover on each socket and press the cover diagonally to secure it.
14. Remove the system board:
a. Loosen the two captive screws on the system board.
b. Hold the system board handle and slide the system board toward the server front to disengage the system board and the server management module. Lift the system board out of the chassis.
Installing the system board
1. Install the system board:
a. Slowly place the system board in the chassis. Then, hold the system board handle and slide the system board toward the server rear until the system board connector is successfully inserted into the server management module.
|
NOTE: The system board is securely seated if you cannot use the system board handle to lift the system board. |
b. Fasten the two captive screws on the system board.
2. Install the removed cable baffle.
3. Reconnect cables to the system board.
4. Remove the installed protective covers over the processor sockets. Hold a cover and lift it straight up and away from a socket.
5. Install the removed components (for example, riser cards, DIMMs, and GPU modules) on the system board.
6. Install the fan cage. Place the fan cage into the chassis and close the ejector levers.
7. Install the chassis air baffle.
8. (Optional.) Install the mid drive cage, mid GPU module, or rear 4GPU module.
9. Install the removed fans.
10. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front until it snaps into place.
11. Install the removed OCP network adapter.
12. Install the removed power supplies.
13. Rack-mount the server. For more information, see "Rack-mounting the server."
14. Connect the power cord. For more information, see "Connecting the power cord."
15. Power on the server. For more information, see "Powering on the server."
Replacing the server management module
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Removing the server management module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the OCP network adapter.
4. Remove the power supplies.
5. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
6. Remove the chassis air baffle. Open the blue clip on the air baffle, and lift the air baffle out of the chassis.
7. Remove all fans.
8. Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.
9. (Optional.) Remove the mid drive cage, mid GPU module, or rear 4GPU module.
10. Disconnect all cables connected to the system board.
11. Remove the cable baffle.
12. Remove all components installed on the system board, for example, riser cards, DIMMs, and processors.
13. Install protective covers over the empty processor sockets.
14. Remove the system board:
a. Loosen the two captive screws on the system board.
b. Hold the system board handle and slide the system board toward the server front to disengage the system board and the server management module. Lift the system board out of the chassis.
15. Remove the server management module. Slide the management module toward the server front to disengage the connectors on the module and the rear panel. Lift the management module out of the chassis.
Installing the server management module
1. Install the server management module. Slowly place the management module into the chassis. Then, slide the management module toward the server rear until the connectors on the module are securely seated.
2. Install the system board:
a. Slowly place the system board into the chassis. Then, hold the system board handle and slide the system board toward the server rear until the system board connector is successfully inserted into the server management module.
|
NOTE: The system board is securely seated if you cannot use the system board handle to lift the system board. |
b. Fasten the two captive screws on the system board.
3. Install the removed cable baffle.
4. Reconnect cables to the system board.
5. Remove the installed protective covers over the processor sockets. Hold a cover and lift it straight up and away from a socket.
6. Install the removed components (for example, riser cards, DIMMs, and GPU modules) on the system board.
7. Install the fan cage. Place the fan cage into the chassis and close the ejector levers.
8. Install the chassis air baffle.
9. (Optional.) Install the mid drive cage, mid GPU module, or rear 4GPU module.
10. Install the removed fans.
11. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front until it snaps into place.
12. Install the removed OCP network adapter.
13. Install the removed power supplies.
14. Rack-mount the server. For more information, see "Rack-mounting the server."
15. Connect the power cord.
16. Power on the server. For more information, see "Powering on the server."
Replacing a SAS/SATA drive
To configure RAID settings after the drive is replaced, see the storage controller user guide for the server.
Guidelines
The drives are hot swappable. If you hot swap an HDD repeatedly within 30 seconds, the system might fail to identify the drive.
If you are using the drives to create a RAID, follow these restrictions and guidelines:
· To avoid degraded RAID performance or RAID creation failures, make sure all drives in the RAID are the same type (HDDs or SSDs) and have the same connector type (SAS or SATA).
· For efficient use of storage, use drives that have the same capacity to build a RAID. If the drives have different capacities, the lowest capacity is used across all drives in the RAID.
· If one drive is used by several logical drives, RAID performance might be affected and maintenance complexities will increase.
· If the installed drive contains RAID information, you must clear the information before configuring RAIDs. As a best practice, install drives that do not contain RAID information.
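To see what the lowest-capacity rule costs, the following rough Python sketch estimates usable capacity for a few common RAID levels using standard textbook formulas; actual controller overhead may differ and is not specified in this guide:

```python
def usable_capacity_gb(drive_capacities_gb: list[int], level: int) -> int:
    """Estimate usable RAID capacity under the lowest-capacity rule.

    Standard textbook formulas for RAID 0, 1, and 5; every member
    contributes only the smallest member's capacity, as described above.
    """
    n = len(drive_capacities_gb)
    c = min(drive_capacities_gb)  # each member is truncated to the smallest drive
    if level == 0:
        return n * c
    if level == 1:
        return c
    if level == 5:
        return (n - 1) * c
    raise ValueError("unsupported RAID level in this sketch")

# Mixing a 960 GB drive into a set of 1920 GB drives wastes capacity:
print(usable_capacity_gb([1920, 1920, 960], 5))  # 1920, not 3840
```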
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Identify the position of the drive to be replaced.
Identify the RAID array information of the drive to be replaced. To replace a drive in a non-redundant RAID array, back up the data in the RAID array if the old drive is full or the new drive is a different model.
Removing a SAS/SATA drive
1. Remove the security bezel, if any.
2. Observe the drive LEDs to verify that the drive is not selected by the storage controller and is not performing a RAID migration or rebuilding. For more information about drive LEDs, see drive LEDs in "Appendix B Component specifications".
3. Remove the drive:
¡ To remove an SSD, press the button on the drive panel to release the locking lever, and then hold the locking lever and pull the drive out of the slot.
¡ To remove an HDD, press the button on the drive panel to release the locking lever. Pull the drive 3 cm (1.18 in) out of the slot. Wait for a minimum of 30 seconds for the drive to stop rotating, and then pull the drive out of the slot.
4. Remove the drive from the drive carrier. Remove the screws that secure the drive, and then remove the drive from the carrier.
Installing a SAS/SATA drive
IMPORTANT: As a best practice, install drives that do not contain RAID information. |
1. Attach the drive to the drive carrier. Place the drive in the carrier and then use four screws to secure the drive into place.
2. Insert the drive into the slot and push it gently until you cannot push it further, and then close the locking lever.
3. Install the security bezel, if any. Hook one end of the security bezel onto the chassis, press the latch at the other end, close the security bezel, and then release the latch to secure the security bezel into place. Insert the key provided with the bezel into the lock on the bezel and lock the security bezel.
Verifying the replacement
Use one of the following methods to verify that the drive has been replaced correctly:
· Verify the drive properties (including capacity) by using one of the following methods:
¡ Log in to HDM. For more information, see HDM2 online help.
¡ Access the BIOS. For more information, see the storage controller user guide for the server.
¡ Access the CLI or GUI of the server.
· Observe the drive LEDs to verify that the drive is operating correctly. For more information about drive LEDs, see drive LEDs in "Appendix B Component specifications".
Adding an NVMe drive
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Identify the position of the drive to be replaced.
Identify the RAID array information for the drive to be replaced. To replace a drive in a non-redundant RAID array, back up the data in the RAID array if the old drive is full or the new drive is a different model.
For more information about the installation guidelines, see "Guidelines."
Installing an NVMe drive
|
NOTE: Only some operating systems support the hot insertion of NVMe drives. For more information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66. |
1. Remove the intelligent security bezel, if any.
2. Install the drive into the drive carrier. Insert the four screws into the screw holes, and then fasten the screws in sequence.
3. Install an NVMe drive.
¡ If the operating system supports hot insertion of NVMe drives, refer to NVMe Drives Online Replacement User Guide for detailed operation procedures.
¡ If the operating system does not support hot insertion of NVMe drives, proceed to the next step.
4. Power off the server. For more information, see "Powering off the server."
5. Push the drive into the drive slot and close the locking lever on the drive panel.
6. Install the security bezel, if any. Hook one end of the security bezel onto the chassis, press the latch at the other end, close the security bezel, and then release the latch to secure the security bezel into place. Insert the key provided with the bezel into the lock on the bezel and lock the security bezel.
Verifying the replacement
Use the following methods to verify that the drive is installed correctly:
· Verify the drive properties (including capacity) by using one of the following methods:
¡ Access HDM. For more information, see HDM2 online help.
¡ Access the BIOS. For more information, see the BIOS user guide for the server.
¡ Access the CLI or GUI of the server.
· Observe the drive LEDs to verify that the drive is operating correctly. For more information, see "Drive LEDs."
Replacing an NVMe drive
Guidelines
The server supports U.2 and E1.S NVMe drives.
Support for hot insertion and managed hot removal of NVMe drives varies by operating system. For more information, use the OS compatibility lookup tool at http://iconfig-chl.h3c.com/iconfig/OSIndex. To replace an NVMe drive in an operating system that does not support hot insertion or managed hot removal of NVMe drives, first power off the server.
If an operating system supports hot swapping of NVMe drives, follow these guidelines:
· Insert NVMe drives steadily without pauses to prevent the operating system from being stuck or restarted.
· Do not hot swap multiple NVMe drives at the same time. As a best practice, hot swap NVMe drives one after another at intervals longer than 30 seconds. After the operating system identifies the first NVMe drive, you can hot swap the next drive. If you insert multiple NVMe drives simultaneously, the system might fail to identify the drives.
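The one-at-a-time rule above can be enforced operationally by waiting for the operating system to enumerate each newly inserted drive before inserting the next. The following minimal Python sketch (Linux only; it simply polls the /dev/nvme* device nodes, which is an assumption about your distribution's device naming) is one way to do that:

```python
import glob
import time

def wait_for_new_nvme(timeout_s: int = 120) -> set[str]:
    """Poll /dev/nvme* until a new NVMe device node appears.

    Returns the set of new device nodes, or an empty set on timeout.
    Intended as an operator aid for hot swapping drives one at a time,
    as recommended above.
    """
    before = set(glob.glob("/dev/nvme*"))
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        new = set(glob.glob("/dev/nvme*")) - before
        if new:
            return new
        time.sleep(1)
    return set()

print("Insert the next drive now." if wait_for_new_nvme() else "Drive not detected.")
```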
If you are using the drives to create a RAID, follow these restrictions and guidelines:
· For efficient use of storage, use drives that have the same capacity to build a RAID. If the drives have different capacities, the lowest capacity is used across all drives in the RAID. A drive with extra capacity cannot be used to build other RAIDs.
· As a best practice, install drives that do not contain RAID information.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Identify the position of the drive to be replaced.
Identify the RAID array information for the drive to be replaced. To replace a drive in a non-redundant RAID array, back up the data in the RAID array if the old drive is full or the new drive is a different model.
Removing an NVMe drive
1. Remove the security bezel, if any.
2. Remove the NVMe drive.
¡ If the operating system supports hot removal or managed hot removal of the NVMe drive, refer to NVMe Drives Online Replacement User Guide for detailed operation procedures.
¡ If the operating system does not support hot removal or managed hot removal of the NVMe drive, power off the server and proceed to the next step. For more information about powering off the server, see "Powering off the server."
3. Remove the drive. Press the button on the drive panel to release the locking lever, and then hold the locking lever and pull the drive out of the slot.
4. Remove the drive from the drive carrier. Remove the screws that secure the drive, and then remove the drive from the carrier.
Installing an NVMe drive
1. Install an NVMe drive.
¡ If the old NVMe drive is removed through hot removal or managed hot removal, refer to NVMe Drives Online Replacement User Guide for detailed operation procedures.
¡ If the old NVMe drive is removed with the server powered off, proceed to the next step.
2. Attach the drive to the drive carrier. Place the drive in the carrier and then use four screws to secure the drive into place.
3. Insert the drive into the slot and push it gently until you cannot push it further, and then close the locking lever.
4. Install the removed security bezel, if any. Hook one end of the security bezel onto the chassis, press the latch at the other end, close the security bezel, and then release the latch to secure the security bezel into place. Insert the key provided with the bezel into the lock on the bezel and lock the security bezel.
Verifying the replacement
Use the following methods to verify that the drive is installed correctly:
· Verify the drive properties (including capacity) by using one of the following methods:
¡ Access HDM. For more information, see HDM2 online help.
¡ Access the BIOS. For more information, see the BIOS user guide for the server.
¡ Access the CLI or GUI of the server.
· Observe the drive LEDs to verify that the drive is operating correctly. For more information, see "Drive LEDs."
Replacing a drive backplane
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. |
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Removing a drive backplane
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the drives attached to the backplane.
4. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
5. Remove the fan cage. Pull up the ejector levers at both sides of the fan cage, and lift the fan cage to remove it from the chassis.
6. Disconnect cables from the backplane.
7. Remove the drive backplane. Loosen the captive screws that secure the backplane, and then lift the backplane out of the chassis.
Installing a drive backplane
1. Install a drive backplane. Place the backplane in the slot and then fasten the captive screws.
2. Connect cables to the drive backplane.
3. Install the fan cage. Place the fan cage into the chassis and close the ejector levers.
4. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
5. Install the removed drives.
6. Rack-mount the server. For more information, see "Rack-mounting the server."
7. Connect the power cord. For more information, see "Connecting the power cord."
8. Power on the server. For more information, see "Powering on the server."
Installing a rear drive cage
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
· When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Procedure
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the PCIe riser card blank. Lift the blank to remove it from the chassis.
5. For a 2SFF UniBay drive cage, install a bracket:
a. Align the guide pin on the bracket with the notch in the chassis.
b. Place the bracket in the chassis.
c. Use screws to secure the bracket.
6. Install the rear drive cage:
a. Place the drive cage in the chassis.
b. Use screws to secure the drive cage.
7. Connect the cables. See "Connecting drive cables."
8. Install the blank. Align the guide pins on the blank with the notches in the chassis, and insert the blank into the slot.
9. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
10. Rack-mount the server. For more information, see "Rack-mounting the server."
11. Connect the power cord.
12. Power on the server. For more information, see "Powering on the server."
Replacing riser cards and PCIe modules
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. |
Guidelines
If a processor is faulty or absent, the PCIe slots connected to it are unavailable. For more information about riser card, PCIe slot, and processor mappings, see "Riser cards."
The server provides three PCIe riser connectors on the system board to connect riser cards, which hold PCIe modules. For more information about the connector locations, see system board components in "Appendix A Server specifications." For information about PCIe slots on a riser card, see riser cards in "Appendix B Component specifications."
You can install a smaller PCIe module in a PCIe slot designed for a larger PCIe module. For example, an LP PCIe module can be installed in a slot for an FHFL PCIe module.
A PCIe slot can supply power to the installed PCIe module if the maximum power consumption of the module does not exceed 75 W. If the maximum power consumption exceeds 75 W, a power cord is required.
The description for PCIe5.0 x16 (8,4,2,1) is as follows:
· PCIe5.0: Fifth-generation signal speed.
· x16: Connector bandwidth.
· (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1.
For an x8 MCIO connector, x8 indicates the bus bandwidth.
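To make the notation concrete, the following hedged Python sketch parses a slot descriptor such as PCIe5.0 x16 (8,4,2,1) as described above and applies the 75 W power rule. The descriptor format is taken from the compatibility tables in this chapter; the example module power rating is a hypothetical value:

```python
import re

def parse_slot(descriptor: str) -> dict:
    """Parse a PCIe slot descriptor such as 'PCIe5.0 x16 (8,4,2,1)'."""
    m = re.fullmatch(r"PCIe(\d\.\d) x(\d+) \(([\d,]+)\)", descriptor)
    if not m:
        raise ValueError(f"unrecognized descriptor: {descriptor}")
    return {
        "generation": m.group(1),            # signal speed generation
        "connector_width": int(m.group(2)),  # physical connector width
        "bus_widths": [int(w) for w in m.group(3).split(",")],  # compatible link widths
    }

slot = parse_slot("PCIe5.0 x16 (8,4,2,1)")
module_max_power_w = 150  # hypothetical PCIe module rating
# A slot powers a module up to 75 W; beyond that, a power cord is required.
print("Power cord required:", module_max_power_w > 75)
print("x8 module link supported:", 8 in slot["bus_widths"])
```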
Riser card and PCIe module compatibility
The riser card and PCIe module compatibility is as shown in Table 14, Table 15, Table 16, Table 17, Table 18, and Table 19.
Table 14 Riser card and PCIe module compatibility (1)
Riser card model | Riser card location | PCIe slots on a riser card | PCIe slot or connector description | PCIe module for PCIe slot or connector | PCIe slot power capability | Processor |
---|---|---|---|---|---|---|
RC-3FHFL-2U-G6 | PCIe riser connector 1 | Slots 1 through 3 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 75 W | Processor 1 |
 | | SLOT 1-A | x8 MCIO connector | Connected to MCIO connector C1-P3A on the system board, providing an x16 PCIe link for slot 1 with x8 MCIO connector SLOT 1-C | N/A | Processor 1 |
 | | SLOT 1-C | x8 MCIO connector | Connected to MCIO connector C1-P3C on the system board, providing an x16 PCIe link for slot 1 with x8 MCIO connector SLOT 1-A | N/A | Processor 1 |
 | | SLOT 2-A | x8 MCIO connector | Connected to MCIO connector C1-P2A on the system board, providing an x16 PCIe link for slot 2 with x8 MCIO connector SLOT 2-C | N/A | Processor 1 |
 | | SLOT 2-C | x8 MCIO connector | Connected to MCIO connector C1-P2C on the system board, providing an x16 PCIe link for slot 2 with x8 MCIO connector SLOT 2-A | N/A | Processor 1 |
 | PCIe riser connector 2 | Slots 4 through 6 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 75 W | Processor 2 |
 | | SLOT 1-A | x8 MCIO connector | Connected to MCIO connector C2-P4A, providing an x16 PCIe link for slot 4 with x8 MCIO connector SLOT 1-C | N/A | Processor 2 |
 | | SLOT 1-C | x8 MCIO connector | Connected to MCIO connector C2-P4C on the system board, providing an x16 PCIe link for slot 4 with x8 MCIO connector SLOT 1-A | N/A | Processor 2 |
 | | SLOT 2-A | x8 MCIO connector | Connected to MCIO connector C2-P2A on the system board, providing an x16 PCIe link for slot 5 with x8 MCIO connector SLOT 2-C | N/A | Processor 2 |
 | | SLOT 2-C | x8 MCIO connector | Connected to MCIO connector C2-P2C on the system board, providing an x16 PCIe link for slot 5 with x8 MCIO connector SLOT 2-A | N/A | Processor 2 |
Table 15 Riser card and PCIe module compatibility (2)
Riser card model | Riser card location | PCIe slots on a riser card | PCIe slot or connector description | PCIe module for PCIe slot or connector | PCIe slot power capability | Processor |
---|---|---|---|---|---|---|
RC-3FHHL-2U-G6 | PCIe riser connector 1 | Slot 1 | PCIe5.0 x16 (16,8,4,2,1) | FHHL | 75 W | Processor 1 |
 | | Slot 2/3 | PCIe5.0 x16 (8,4,2,1) | FHHL | 75 W | Processor 1 |
 | | SLOT 1-A | x8 MCIO connector | Connected to MCIO connector C1-P2A on the system board, providing an x16 PCIe link for slot 1 with x8 MCIO connector SLOT 1-C | N/A | Processor 1 |
 | | SLOT 1-C | x8 MCIO connector | Connected to MCIO connector C1-P2C on the system board, providing an x16 PCIe link for slot 1 with x8 MCIO connector SLOT 1-A | N/A | Processor 1 |
 | PCIe riser connector 2 | Slot 4 | PCIe5.0 x16 (16,8,4,2,1) | FHHL | 75 W | Processor 2 |
 | | Slot 5/6 | PCIe5.0 x16 (8,4,2,1) | FHHL | 75 W | Processor 2 |
 | | SLOT 1-A | x8 MCIO connector | Connected to MCIO connector C2-P2A, providing an x16 PCIe link for slot 4 with x8 MCIO connector SLOT 1-C | N/A | Processor 2 |
 | | SLOT 1-C | x8 MCIO connector | Connected to MCIO connector C2-P2C on the system board, providing an x16 PCIe link for slot 4 with x8 MCIO connector SLOT 1-A | N/A | Processor 2 |
Table 16 Riser card and PCIe module compatibility (3)
Riser card model | Riser card location | PCIe slots on a riser card | PCIe slot or connector description | PCIe module for PCIe slot or connector | PCIe slot power capability | Processor |
---|---|---|---|---|---|---|
RC-FHHL-2U-G6 | PCIe riser connector 1 | Slot 3 | PCIe5.0 x16 (16,8,4,2,1) | FHHL | 75 W | Processor 1 |
 | PCIe riser connector 2 | Slot 6 | PCIe5.0 x16 (16,8,4,2,1) | FHHL | 75 W | Processor 2 |
RC-2FHFL-2U-LC-G6 | PCIe riser connector 2 | Slot 4 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 75 W | Processor 2 |
 | | Slot 5 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 75 W | Processor 2 |
 | | SLOT 1-A | x8 MCIO connector | Connected to MCIO connector C2-P4A on the system board, providing an x16 PCIe link for slot 4 with x8 MCIO connector SLOT 1-C | N/A | Processor 2 |
 | | SLOT 1-C | x8 MCIO connector | Connected to MCIO connector C2-P4C on the system board, providing an x16 PCIe link for slot 4 with x8 MCIO connector SLOT 1-A | N/A | Processor 2 |
 | | SLOT 2-A | x8 MCIO connector | Connected to MCIO connector C2-P2A on the system board, providing an x16 PCIe link for slot 5 with x8 MCIO connector SLOT 2-C | N/A | Processor 2 |
 | | SLOT 2-C | x8 MCIO connector | Connected to MCIO connector C2-P2C on the system board, providing an x16 PCIe link for slot 5 with x8 MCIO connector SLOT 2-A | N/A | Processor 2 |
Table 17 Riser card and PCIe module compatibility (4)
Riser card model | Riser card location | PCIe slots on a riser card | PCIe slot or connector description | PCIe module for PCIe slot or connector | PCIe slot power capability | Processor |
---|---|---|---|---|---|---|
RC-1FHFL-R3-2U-G6 | PCIe riser connector 3 | Slot 7 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 75 W | Processor 2 |
RC-2HHHL-R3-2U-G6 | PCIe riser connector 3 | Slot 7 | PCIe5.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 2 |
 | | Slot 8 | PCIe5.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 2 |
RC-2FHFL-R3-2U-G6 | PCIe riser connector 3 | Slot 7 | PCIe5.0 x16 (8,4,2,1) | FHFL | 75 W | Processor 2 |
 | | Slot 8 | PCIe5.0 x16 (8,4,2,1) | FHFL | 75 W | Processor 2 |
Table 18 Riser card and PCIe module compatibility (5)
Riser card model | Riser card location | Slot or connector on the riser card | PCIe slot or connector description | PCIe module for PCIe slot or connector | PCIe slot power capability | Processor |
---|---|---|---|---|---|---|
RC-2HHHL-R4-2U-G6 | PCIe riser connector 4 | Slot 9 | PCIe5.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 1 |
 | | Slot 10 | PCIe5.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 2 |
 | | SLOT 1 cable | x8 MCIO connector | Connected to MCIO connector C1-P2C on the system board, providing an x8 PCIe link for slot 9 | N/A | Processor 1 |
 | | SLOT 2 cable | x8 MCIO connector | Connected to MCIO connector C2-P2C on the system board, providing an x8 PCIe link for slot 10 | N/A | Processor 2 |
 | | AUX | AUX connector | Connected to connector AUX8 on the system board | N/A | N/A |
 | | PWR | Power connector | Connected to connector PWR4 on the system board | N/A | N/A |
Table 19 Riser card and PCIe module compatibility (6)
Riser card model | Riser card location | PCIe slots on a riser card | PCIe slot or connector description | PCIe module for PCIe slot or connector | PCIe slot power capability | Processor |
---|---|---|---|---|---|---|
RC-5HHHL-R5-2U-G5 | In the middle of the server, secured by the pegs on the side panels inside the chassis | Slot 12 | PCIe4.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 1 |
 | | Slot 13 | PCIe4.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 1 |
 | | Slot 14 | PCIe4.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 2 |
 | | Slot 15 | PCIe4.0 x16 (8,4,2,1) | HHHL | 75 W | Processor 2 |
 | | MCIO connector 2 | x8 MCIO connector | Connected to MCIO connector C1-P3C on the system board, providing an x8 PCIe link for slot 12 | N/A | Processor 1 |
 | | MCIO connector 3 | x8 MCIO connector | Connected to MCIO connector C1-P3A on the system board, providing an x8 PCIe link for slot 13 | N/A | Processor 1 |
 | | MCIO connector 4 | x8 MCIO connector | Connected to MCIO connector C2-P4A on the system board, providing an x8 PCIe link for slot 14 | N/A | Processor 2 |
 | | MCIO connector 5 | x8 MCIO connector | Connected to MCIO connector C2-P4C on the system board, providing an x8 PCIe link for slot 15 | N/A | Processor 2 |
 | | AUX | AUX connector | Connected to connector AUX7 on the system board | N/A | N/A |
 | | PWR | Power connector | Connected to connector PWR6 on the system board | N/A | N/A |
PCA-R4900-4GPU-G6 | PCIe riser connector 1 and PCIe riser connector 2 | Slot 3 | PCIe5.0 x16 (16,8,4,2,1) | FHHL | 75 W | Processor 1 |
 | | Slot 6 | PCIe5.0 x16 (16,8,4,2,1) | FHHL | 75 W | Processor 2 |
 | | Slot 11 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 300 W* | Processor 1 |
 | | Slot 12 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 300 W* | Processor 1 |
 | | Slot 13 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 300 W* | Processor 2 |
 | | Slot 14 | PCIe5.0 x16 (16,8,4,2,1) | FHFL | 300 W* | Processor 2 |
|
NOTE: 300 W*: Slots 11 through 14 on the rear 4GPU module support only GPU modules. To provide a power capacity of 300 W, you must connect external GPU power cords. |
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Removing a riser card and a PCIe module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Disconnect all cables that hinder the replacement, if any.
5. Remove the riser card installed with a PCIe module. Holding the riser card by the notch and handle, press the unlock button and lift the riser card to remove the riser card from the chassis.
6. Remove the PCIe module from the riser card:
a. Remove the screws on the riser card.
b. Pull the PCIe module out of the slot.
Installing a riser card and a PCIe module
1. Install the PCIe module on the riser card:
a. Remove the PCIe module blank. Open the cover on the riser card, and then pull out the blank.
b. Install the PCIe module to the riser card. Insert the PCIe module into the PCIe slot along the guide rails, and then use screws to secure the PCIe module.
2. Install the riser card on the server:
a. Lift the riser card blank to remove it from the chassis.
b. Install the riser card on the PCIe riser connector. Align the two standouts on the card with the notches in the chassis, press the unlock button, and slide the riser card until it snaps into place.
3. Connect cables to the riser card or PCIe modules, if any.
4. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
5. Rack-mount the server. For more information, see "Rack-mounting the server."
6. Connect the power cord.
7. Power on the server. For more information, see "Powering on the server."
Installing PCIe modules and a riser card on PCIe riser connector 3
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
For more information, see "Replacing riser cards and PCIe modules."
Procedure
1. Identify the position of the PCIe riser connector. For more information, see system board components in "Appendix A Server specifications."
2. Power off the server. For more information, see "Powering off the server."
3. Remove the server from the rack. For more information, see "Removing the server from a rack."
4. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
5. Lift the PCIe riser card blank to remove it.
6. Install a PCIe module to the riser card:
a. Remove the PCIe module blank. Open the cover on the riser card, and then pull out the blank.
b. Install the PCIe module into the riser card. Insert the PCIe module into the PCIe slot along the guide rails, and close the cover on the riser card.
7. Install the support bracket:
a. Align the guide pins on the support bracket with the guide holes on the chassis.
b. Place the support bracket onto the chassis.
c. Fasten the screws to secure the support bracket.
8. Install the riser card with the PCIe module installed. Insert the riser card into the PCIe riser connector along the guide rails.
9. Connect cables to the riser card or PCIe modules, if any.
10. Install the removed PCIe riser card blank:
a. Align the standouts on the PCIe riser card blank with the notches on the side of the chassis.
b. Insert the PCIe riser card blank into the chassis.
11. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
12. Rack-mount the server. For more information, see "Rack-mounting the server."
13. Connect the power cord.
14. Power on the server. For more information, see "Powering on the server."
Installing PCIe modules and a riser card on PCIe riser connector 4
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
For more information, see "Replacing riser cards and PCIe modules."
Procedure
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Lift the PCIe riser card blank to remove it.
5. Install a PCIe module to the riser card:
a. Remove the PCIe module blank. Open the cover on the riser card, and then pull out the PCIe module blank.
b. Install the PCIe module into the riser card. Insert the PCIe module into the PCIe slot along the guide rails, and close the cover on the riser card.
6. Install the riser card that carries the PCIe module. Insert the riser card into the PCIe riser connector along the guide rails.
7. Connect cables to the riser card or PCIe modules, if any.
8. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
9. Rack-mount the server. For more information, see "Rack-mounting the server."
10. Connect the power cord.
11. Power on the server. For more information, see "Powering on the server."
Replacing a storage controller and a power fail safeguard module
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. |
For some storage controllers, you can order a power fail safeguard module to prevent data loss when power outage occurs.
A power fail safeguard module provides a flash card and a supercapacitor. When a system power failure occurs, this supercapacitor can provide power for a minimum of 20 seconds. During this interval, the storage controller transfers data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data.
A supercapacitor has a lifespan of 3 to 5 years. If the lifespan of a supercapacitor expires, a supercapacitor exception might occur. The system notifies users of supercapacitor exceptions by using the following methods:
· For a PMC storage controller, the status of the flash card will become Abnormal_status code. You can check the status code to identify the exception. For more information, see HDM2 online help.
· For an LSI storage controller, the status of the flash card of the power fail safeguard module will become Abnormal.
You can also review log messages from HDM2 to identify supercapacitor exceptions.
For the power fail safeguard module to take effect, replace the supercapacitor before its lifespan expires.
The supercapacitor might have a low charge after the power fail safeguard module is installed or after the server is powered up. If the system displays that the supercapacitor has low charge, no action is required. The system will charge the supercapacitor automatically. You can view the status of the supercapacitor from HDM or the BIOS.
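Because the supercapacitor has a stated lifespan of 3 to 5 years, you can track when a replacement window opens. The following minimal Python sketch is a planning aid only; the install-date record and 365-day years are assumptions, and the actual service policy is defined by H3C, not by this code:

```python
from datetime import date, timedelta

def replacement_window(install_date: date) -> tuple[date, date]:
    """Return the 3-to-5-year window in which to plan supercapacitor
    replacement, per the lifespan stated above (365-day years assumed)."""
    return (install_date + timedelta(days=3 * 365),
            install_date + timedelta(days=5 * 365))

# Hypothetical installation date for illustration only.
start, end = replacement_window(date(2024, 1, 15))
print(f"Plan supercapacitor replacement between {start} and {end}.")
```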
IMPORTANT: After the supercapacitor replacement, verify that cache related settings are enabled for logical drives. For more information, see HDM2 online help. |
Guidelines
When you install standard storage controllers, follow these restrictions and guidelines:
· Make sure the standard storage controllers are of the same vendor (PMC or LSI). For information about the available storage controllers and their vendors, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.
· If the drives are installed only at the server front, install storage controllers to different riser cards. The controller in a lower-numbered slot is connected to the drive backplane for the higher-numbered drive carriers, and the controller in a higher-numbered slot to the drive backplane for the lower-numbered drive carriers. For more information about the drive carrier locations, see the front panel view in "Appendix A Server specifications."
· If the drives are installed at both the server front and server rear, install storage controllers to one riser card. The controller in a lower-numbered slot is connected to the rear drive backplane and the controller in a higher-numbered slot to the front drive backplane. For information about slot locations, see the rear panel view in "Appendix A Server specifications."
Use Table 20 to identify the supercapacitor available for a storage controller.
Table 20 Standard storage controller and supercapacitor compatibility matrix
Standard storage controller | Supercapacitor | Supercapacitor installation location |
---|---|---|
RAID-LSI-9560-LP-8i-4GB | BAT-LSI-G3-A | In the supercapacitor container at the server rear or on the air baffle |
RAID-LSI-9560-LP-16i | | |
RAID-P460-B4 | BAT-PMC-G3-2U | |
HBA-LSI 9540-8i | Not supported | Not supported |
HBA-LSI-9500-LP-8i | | |
HBA-LSI 9500-16i | | |
To replace the storage controller with a controller of a different model, back up data in the drives of the storage controller and clear RAID configuration.
To replace the storage controller with a controller of the same model, make sure the following configurations remain the same after replacement:
· Storage controller operating mode.
· Storage controller firmware version.
· BIOS boot mode.
· First boot option in Legacy mode.
For more information, see the storage controller user guide for the server and the BIOS user guide for the server.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Removing a standard storage controller and a power fail safeguard module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Disconnect all cables from the standard storage controller.
5. Remove the standard storage controller:
a. Remove the riser card where the standard storage controller resides. Holding the riser card by the notch and handle, press the unlock button and lift the riser card to remove the riser card from the chassis.
b. Remove the standard storage controller from the riser card. Open the retaining latch on the riser card, and then pull the storage controller out from the slot.
6. Remove the power fail safeguard module or supercapacitor, if any:
a. Remove the flash card on the storage controller, if any. Remove the screws that secure the flash card, and then remove the flash card.
b. Remove the supercapacitor. Pull the clip on the supercapacitor holder, and take the supercapacitor out of the holder.
c. Remove the supercapacitor holder. Lift the retaining latch at the bottom of the supercapacitor holder, and slide the holder to remove it.
Installing a standard storage controller and a power fail safeguard module
1. Install the supercapacitor on the supercapacitor holder:
a. Install the supercapacitor holder. Place the supercapacitor holder in the chassis and slide it to the server rear until it snaps into place.
b. Connect one end of a supercapacitor extension cable to the supercapacitor.
c. Install the supercapacitor to the supercapacitor holder. Tilt the supercapacitor and insert one end of the supercapacitor into the holder. Pull the clip on the holder and insert the other end into the holder, and then release the clip.
2. Install the removed flash card of the power fail safeguard module:
a. Install the internal threaded studs supplied with the power fail safeguard module on the standard storage controller.
b. Install the flash card on the standard storage controller. Insert the flash card connector into the socket and use screws to secure the flash card on the storage controller.
3. Install the standard storage controller on the riser card. Insert the standard storage controller into the PCIe slot along the guide rails, and then close the retaining latch on the riser card.
4. Install the riser card on the server.
5. Connect the cables for the standard storage controller to the drive backplane. For more information, see "Connecting drive cables."
6. Install the removed power fail safeguard module or supercapacitor. Connect the supercapacitor extension cable to the flash card. For more information, see "Connecting the supercapacitor cable."
7. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
8. Rack-mount the server. For more information, see "Rack-mounting the server."
9. Connect the power cord.
10. Power on the server. For more information, see "Powering on the server."
Replacing a GPU module
Guidelines
For information about configuration guides for the power cords of GPU modules, contact Technical Support.
To install FHFL dual-width GPU modules, install them to slots as shown in Table 21, as a best practice, and follow these guidelines:
· Install GPU modules in PCIe slots with x16 bus bandwidth.
· If the number of GPU modules is equal to or smaller than 3, install the GPU modules to riser cards.
· If the number of GPU modules is equal to 4, install the GPU modules to the rear 4GPU module.
To install FHFL single-width GPU modules, install them to slots as shown in Table 22, as a best practice, and follow these guidelines:
· Install GPU modules in PCIe slots with x16 bus bandwidth.
· If the number of GPU modules is equal to or smaller than 3, install one GPU to each riser card.
· If the number of GPU modules is equal to or greater than 4, install two RC-3FHFL-2U-G6 riser cards to riser connectors 1 and 2 and install two GPU modules to each riser card as a best practice.
To install HHHL single-width GPU modules, install them to slots as shown in Table 23, as a best practice, and follow these guidelines:
· If the number of GPU modules is equal to or greater than 8, you can install GPU modules to PCIe slots with x8 bus bandwidth. In other cases, install GPU modules to PCIe slots with x16 bus bandwidth.
· If the number of GPU modules is greater than 10, install GPU modules to the mid GPU module.
Table 21 FHFL dual-width GPU installation guidelines
Number of GPUs | Recommended GPU installation locations |
---|---|
1 | Slot 5 |
2 | Slots 2 and 5 |
3 | Slots 2, 5, and 7 |
4 | Slots 11, 12, 13, and 14 |
Table 22 FHFL single-width GPU installation guidelines
Number of GPUs | Recommended GPU installation locations |
---|---|
1 | Slot 5 |
2 | Slots 2 and 5 |
3 | Slots 2, 5, and 7 |
4 | Slots 1, 2, 4, and 5 |
5 | Slots 1, 2, 4, 5, and 7 |
Table 23 HHHL GPU installation guidelines
Number of GPUs | Recommended GPU installation locations |
---|---|
1 | Slot 5 |
2 | Slots 2 and 5 |
3 | Slots 2, 5, and 7 |
4 | Slots 2, 3, 5, and 6 |
5 | Slots 2, 3, 4, 5, and 6 |
6 | Slots 1, 2, 3, 4, 5, and 6 |
7 | Slots 1, 2, 3, 4, 5, 6, and 7 |
8 | Slots 1, 2, 3, 4, 5, 6, 7, and 8 |
9 | Slots 1, 2, 3, 4, 5, 6, 7, 8, and 9 |
10 | Slots 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 |
14 | Slots 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, and 15 |
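Tables 21, 22, and 23 are effectively lookup tables from GPU count to recommended slots. The following hedged Python sketch transcribes them for quick reference; GPU counts not listed in a table (for example, 11 to 13 HHHL GPUs) have no recommendation here:

```python
# Recommended slots by GPU count, transcribed from Tables 21 through 23.
FHFL_DUAL = {1: [5], 2: [2, 5], 3: [2, 5, 7], 4: [11, 12, 13, 14]}
FHFL_SINGLE = {1: [5], 2: [2, 5], 3: [2, 5, 7], 4: [1, 2, 4, 5], 5: [1, 2, 4, 5, 7]}
HHHL = {
    1: [5], 2: [2, 5], 3: [2, 5, 7], 4: [2, 3, 5, 6], 5: [2, 3, 4, 5, 6],
    6: [1, 2, 3, 4, 5, 6], 7: [1, 2, 3, 4, 5, 6, 7], 8: list(range(1, 9)),
    9: list(range(1, 10)), 10: list(range(1, 11)),
    14: list(range(1, 11)) + [12, 13, 14, 15],
}

def recommended_slots(table: dict, count: int) -> list[int]:
    """Look up the recommended slots for a GPU count, if listed above."""
    if count not in table:
        raise ValueError(f"no recommendation for {count} GPUs in this table")
    return table[count]

print(recommended_slots(HHHL, 4))  # [2, 3, 5, 6]
```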
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.
Removing a GPU module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Disconnect all cables that hinder the replacement, if any.
5. Remove the riser card where the GPU module resides. Holding the riser card by the notch and handle, press the unlock button and lift the riser card to remove it from the chassis.
6. Remove the GPU module from the riser card:
a. Disconnect the cable from the GPU module, if any.
b. Open the retaining latch on the riser card, and pull the GPU module out from the slot.
Replacing the chassis air baffle
Install a chassis air baffle suitable for the GPU module.
Installing a GPU module
1. Install a GPU module on the riser card:
a. Insert the GPU module into the PCIe slot along the guide rails, and then close the retaining latch on the riser card.
b. (Optional.) Connect the GPU module power cord according to the cable label.
2. Reconnect other cables to the riser card.
3. Install the riser card on the server:
a. Insert the riser card into the chassis along the slot.
b. (Optional.) Connect the riser card power cord.
4. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
5. Rack-mount the server. For more information, see "Rack-mounting the server."
6. Connect the power cord.
7. Power on the server. For more information, see "Powering on the server."
Replacing a network adapter
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. |
The OCP network adapter supports NCSI. By default, port 1 on the OCP network adapter acts as the HDM shared network port. You can configure another port on the OCP network adapter as the HDM shared network port from the HDM Web interface. For more information, see HDM online help.
Guidelines
An OCP network adapter can be installed in slot 8, 16, or 17. For information about the slots for OCP network adapters, see PCIe slots in "Appendix B Component specifications." If you install only one OCP network adapter, install it in slot 16 as a best practice.
The OCP network adapters in slots 16 and 17 support hot swapping.
To hot swap an OCP network adapter, follow these restrictions and guidelines:
· OCP network adapters installed before the server is powered on support hot swapping. Make sure the replaced network adapter and the newly installed network adapter are the same model.
· OCP network adapters installed after the server is powered on do not support hot swapping. To replace such an OCP network adapter, first power off the server, replace the OCP network adapter, and then power on the server.
For operating systems that support hot swapping of OCP network adapters, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.
To install a standard PCIe network adapter, a riser card is required. For more information about riser card and PCIe module compatibility, see riser cards in "Appendix B Component specifications."
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Replacing a standard PCIe network adapter
Removing a standard PCIe network adapter
1. Power off the server. For more information, see "Powering off the server."
2. Disconnect cables from the standard PCIe network adapter.
3. Remove the server from the rack. For more information, see "Removing the server from a rack."
4. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
5. Disconnect all cables that hinder the replacement, if any.
6. Remove the riser card that holds the PCIe network adapter. Holding the riser card by the notch and handle, press the unlock button and lift the riser card to remove it from the chassis.
7. Remove the PCIe network adapter from the riser card. Loosen the captive screws on the riser card and pull the PCIe network adapter out of the slot.
Installing a standard PCIe network adapter
For more information, see "Installing a riser card and a PCIe module."
Replacing an OCP network adapter
Some operating systems support managed removal of some OCP network adapters. To replace such an OCP network adapter, you do not need to power off the server. For more information about managed removal, see "Appendix B Managed removal of OCP network adapters." This section describes the procedure to replace an OCP network adapter that does not support managed removal.
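On Linux, managed removal of a hot-swappable PCIe or OCP adapter is typically performed through the kernel's PCIe hotplug (pciehp) sysfs interface. The following is a minimal sketch of that flow, assuming a Linux host with pciehp loaded; the slot number is a hypothetical example, and whether a specific OCP network adapter supports managed removal is determined by "Appendix B Managed removal of OCP network adapters."

```python
# Minimal sketch of OS-assisted (managed) removal of a hot-swappable adapter
# on Linux via the pciehp sysfs interface. The slot ID "16" is a hypothetical
# example; list /sys/bus/pci/slots/ on your system to find the real one.
from pathlib import Path

SLOT = "16"  # hypothetical slot ID
power_attr = Path("/sys/bus/pci/slots") / SLOT / "power"

def set_slot_power(on: bool) -> None:
    """Write 1 to power the slot on, 0 to power it off (requires root)."""
    power_attr.write_text("1" if on else "0")

if __name__ == "__main__":
    set_slot_power(False)   # quiesce and power off the slot before removal
    input("Replace the adapter with one of the same model, then press Enter.")
    set_slot_power(True)    # power the slot back on
```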
Removing an OCP network adapter
1. Power off the server. For more information, see "Powering off the server."
2. Disconnect all cables from the OCP network adapter.
3. Remove the OCP network adapter: Loosen the captive screws on the OCP network adapter and pull the OCP network adapter out from the chassis.
Installing an OCP network adapter
1. Install the OCP network adapter: Insert the OCP network adapter into the slot and fasten the captive screws on it.
2. Connect cables to the OCP network adapter.
3. Power on the server. For more information, see "Powering on the server."
4. (Optional.) Configure a network port on the OCP network adapter as an HDM shared network port.
OCP network adapters inserted into OCP adapter slots support NCSI. By default, port 1 on an OCP network adapter acts as the HDM shared network port. You can specify another port on the OCP network adapter as the HDM shared network port from the HDM Web interface. Note that only one port can act as the HDM shared network port at a time.
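HDM is configured through its Web interface, as described above. If your HDM firmware also exposes a standard Redfish service, the manager's network settings can be inspected programmatically. The sketch below is a hypothetical example using the Python requests library; the management address and credentials are placeholders, and anything beyond the standard /redfish/v1/Managers collection is an assumption rather than a documented HDM API. See the HDM online help for the supported procedure.

```python
# Hypothetical sketch: querying an HDM Redfish service over HTTPS with the
# standard Python requests library. The address and credentials below are
# placeholders; the shared network port itself is configured in the HDM Web
# interface as described above.
import requests

HDM = "https://192.168.1.100"   # hypothetical HDM management address
AUTH = ("admin", "password")    # hypothetical credentials

# List manager resources (standard Redfish resource layout).
r = requests.get(f"{HDM}/redfish/v1/Managers", auth=AUTH, verify=False)
r.raise_for_status()
print(r.json())
```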
Replacing a SATA M.2 SSD and a front SATA M.2 SSD expander module
M.2 SSDs are installed on the server through an M.2 SSD expander module, which can be installed at the server front or rear. Depending on their installation location, M.2 SSDs are referred to as front or rear M.2 SSDs.
Guidelines
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
For front SATA M.2 SSDs:
· The front M.2 SSD expander module is installed between the drive backplane and the fan modules at the front of the chassis. An M.2 expander module is required for SATA M.2 SSDs. You can install a maximum of two SATA M.2 SSDs and connect the expander module to the system board with cables. For more information about cabling, see "Connecting cables for the front M.2 SSD expander module."
· If you install two SATA M.2 SSDs to the front M.2 SSD expander module, the M.2 expander module supports building a RAID for SATA M.2 SSDs. RAID 0 and RAID 1 are supported. To ensure high availability in RAID setup, install two SATA M.2 SSDs of the same model.
· As a best practice, use SATA M.2 SSDs to install the operating system.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
· When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing a SATA M.2 SSD and a SATA M.2 SSD expander module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the SATA M.2 SSD expander module that holds the SATA M.2 SSD:
a. Disconnect the cable from the SATA M.2 SSD expander module.
b. Remove the expander module. Remove the screws that secure the expander module and then pull the expander module out.
5. Remove the SATA M.2 SSD. Slide the locking tab, lift the SSD, and then pull the SSD out of the slot.
Installing a SATA M.2 SSD and a SATA M.2 SSD expander module
1. Install the SATA M.2 SSD to the SATA M.2 SSD expander module. Insert the connector of the SSD into the socket, slide the locking tab, press the SSD into place, and then release the locking tab.
2. Install the expander module.
a. Align the two screw holes in the expander module with the two internal threaded studs on the chassis, put the expander module onto the chassis, and then use screws to secure the expander module.
b. Connect the SATA M.2 SSD cable. For more information, see "Connecting cables for the front M.2 SSD expander module."
3. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
4. Rack-mount the server. For more information, see "Rack-mounting the server."
5. Connect the power cord.
6. Power on the server. For more information, see "Powering on the server."
Replacing an NVMe M.2 SSD and an NVMe M.2 SSD expander module
M.2 SSDs are installed on the server through an M.2 SSD expander module, which can be installed at the server front or rear. Depending on their installation location, M.2 SSDs are referred to as front or rear M.2 SSDs.
Guidelines
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
For front NVMe M.2 SSDs:
· The front M.2 SSD expander module is installed between the drive backplane and the fan modules at the front of the chassis. An M.2 expander module is required for NVMe M.2 SSDs. You can install a maximum of two NVMe M.2 SSDs and connect the expander module to the system board with cables. For more information about cabling, see "Connecting cables for the front M.2 SSD expander module."
· If you install two NVMe M.2 SSDs to the front M.2 SSD expander module, the M.2 expander module supports building a RAID for NVMe M.2 SSDs. RAID 0 and RAID 1 are supported. To ensure high availability in RAID setup, install two NVMe M.2 SSDs of the same model.
For rear NVMe M.2 SSDs:
· The rear NVMe M.2 SSD expander module (model: RAID-MARVELL-SANTACRUZ-LP-2i) is installed at the rear of the chassis and supports NVMe M.2 SSDs. You can install a maximum of two NVMe M.2 SSDs.
· If you install two NVMe M.2 SSDs to the rear NVMe M.2 SSD expander module, the M.2 expander module supports building a RAID for NVMe M.2 SSDs. RAID 0 and RAID 1 are supported. To ensure high availability in RAID setup, install two NVMe M.2 SSDs of the same model. For more information about configuring a RAID, see the storage controller user guide.
· The rear NVMe M.2 expander module can be installed in a PCIe slot with x8 or higher bus bandwidth (see the link-width check sketch below).
As a best practice, use NVMe M.2 SSDs to install the operating system.
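To verify from a Linux operating system that the expander module has negotiated an x8 or wider link, you can read the kernel's PCIe sysfs attributes. This is a minimal sketch under that assumption; the PCI address is a hypothetical placeholder that you would look up with lspci.

```python
# Minimal sketch, assuming a Linux host: verify that a PCIe device sits in a
# slot negotiating x8 or wider, using the kernel's sysfs PCIe attributes.
# The device address below is hypothetical; find yours with lspci.
from pathlib import Path

DEV = "0000:17:00.0"   # hypothetical PCI address of the M.2 expander module
width = int(Path(f"/sys/bus/pci/devices/{DEV}/current_link_width").read_text())
print(f"{DEV}: x{width} link", "(OK)" if width >= 8 else "(below x8!)")
```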
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
· When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing an NVMe M.2 SSD and an NVMe M.2 SSD expander module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Disconnect all cables that hinder the replacement, if any.
5. Remove the riser card that holds the NVMe M.2 SSD. Lift the riser card to remove it from the chassis.
6. Remove the NVMe M.2 SSD expander module from the riser card. Loosen the captive screws on the riser card and pull the NVMe M.2 SSD expander module out of the slot.
7. Remove the NVMe M.2 SSD. Slide the locking tab, lift the SSD, and then pull the SSD out of the slot.
Installing an NVMe M.2 SSD and an NVMe M.2 SSD expander module
1. Install the NVMe M.2 SSD to the NVMe M.2 SSD expander module. Insert the connector of the SSD into the socket, slide the locking tab, press the SSD into place, and then release the locking tab.
2. Install the NVMe M.2 SSD expander module on the riser card:
a. Remove the PCIe module blank. Open the cover on the riser card, and then pull out the blank.
b. Install the NVMe M.2 SSD expander module to the riser card. Insert the NVMe M.2 SSD expander module into the PCIe slot along the guide rails, and then use screws to secure the NVMe M.2 SSD expander module.
3. Install the riser card on the server:
a. Lift the riser card blank to remove it from the chassis.
b. Install the riser card on the PCIe riser connector. Press the unlock button, align the two standoffs on the card with the notches in the chassis, and then slide the riser card until it snaps into place.
4. Connect cables to the riser card or PCIe modules, if any.
5. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
6. Rack-mount the server. For more information, see "Rack-mounting the server."
7. Connect the power cord.
8. Power on the server. For more information, see "Powering on the server."
Replacing a serial & DSD module
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Restrictions and guidelines
To avoid wasting SD card storage space, install two SD cards that have the same storage capacity.
Removing a serial & DSD module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the serial & DSD module. Loosen the captive screw on the module and pull the module out of the slot.
Installing a serial & DSD module
1. Install the serial & DSD module. Insert the module into the slot and fasten the captive screw on the module.
2. Power on the server. For more information, see "Powering on the server."
Replacing an SD card
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing an SD card and serial & DSD module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the serial & DSD module. Loosen the captive screw on the module and pull the module out of the slot.
3. Remove each of the SD cards installed on the serial & DSD module:
a. Press the SD card to release it.
b. Pull the SD card out of the slot.
Installing an SD card and serial & DSD module
1. Install a new SD card on the serial & DSD module. Insert the SD card into the slot and gently press the SD card to secure it in the slot.
2. Install the serial & DSD module on the server. Insert the module into the slot and fasten the captive screw on the module.
3. Power on the server. For more information, see "Powering on the server."
Adding an LCD smart management module
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Procedure
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.
5. Remove the drive or drive blank from the target slot.
6. Install the LCD smart management module:
a. Connect one end of the LCD module cable to the LCD smart management module.
b. Push the LCD smart management module into the slot until it snaps into place.
c. Connect the other end of the cable to the LCD smart management module connector on the system board.
7. Install the fan cage. Place the fan cage into the chassis and close the ejector levers.
8. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
9. Rack-mount the server. For more information, see "Rack-mounting the server."
10. Connect the power cord.
11. Power on the server. For more information, see "Powering on the server."
Replacing the LCD smart management module
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing the LCD smart management module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.
5. Remove the LCD smart management module:
a. Disconnect the LCD module cable from the system board.
b. Use a flat-head screwdriver or tweezers to press the clip of the LCD smart management module and pull the module out from the slot.
Installing the LCD smart management module
1. Install the LCD smart management module:
a. Connect one end of the LCD module cable to the LCD smart management module.
b. Push the LCD smart management module into the slot until it snaps into place.
c. Connect the other end of the cable to the LCD smart management module connector on the system board.
2. Install the fan cage. Place the fan cage into the chassis and close the ejector levers.
3. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
4. Install the security bezel, if any. Place the right edge of the security bezel into the server, secure the left edge into place, and then use a key to lock the security bezel.
5. Rack-mount the server. For more information, see "Rack-mounting the server."
6. Connect the power cord.
7. Power on the server. For more information, see "Powering on the server."
Replacing a chassis ear
Replace a chassis ear if the ear or any of its components (for example, I/O components or VGA/USB connectors) fails.
The procedure is the same for the left and right chassis ears. This section uses the left chassis ear as an example.
Removing a chassis ear
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack, if the space over the server is insufficient. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.
5. Remove the chassis air baffle. Lift the air baffle out of the chassis.
6. Remove the front I/O component cable assembly:
a. Disconnect the front I/O component cable assembly from the system board.
b. Remove the cable protection plate. Remove the captive screws that secure the cable protection plate, press the cable protection plate and slide it toward the rear of the chassis until you cannot slide it further, and then pull out the cable protection plate.
c. Remove the front I/O component cable assembly.
7. Remove the chassis ear. Remove the screws that secure the left chassis ear, and then pull the chassis ear until it is removed.
Installing a chassis ear
1. Install a chassis ear. Attach the chassis ear to the corresponding side of the server, and use screws to secure the chassis ear into place.
2. Install the front I/O component cable assembly:
a. Insert the front I/O component cable assembly into the cable cutout.
b. Install the cable protection plate on the chassis. Insert the cable protection plate along the slot and slide it toward the front of the chassis until you cannot slide it further, and then install the captive screws on the cable protection plate.
c. Connect the front I/O component cable assembly to the front I/O connector on the system board.
3. Install the fan cage. Place the fan cage into the chassis and close the ejector levers.
4. Install the chassis air baffle.
5. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
6. Rack-mount the server. For more information, see "Rack-mounting the server."
7. Connect the power cord. For more information, see "Connecting the power cord."
8. Power on the server. For more information, see "Powering on the server."
Replacing a chassis air baffle
You might need to replace the chassis air baffle on the server if it is not compatible with the riser cards you are installing.
Removing a chassis air baffle
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack, if the space over the server is insufficient. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the chassis air baffle:
a. (Optional.) Disconnect the supercapacitor extension cable and remove the supercapacitor (if any) from the air baffle.
b. Press the tabs on the air baffle, and then lift the air baffle out of the chassis.
Installing a chassis air baffle
1. Install the chassis air baffle:
a. Place the chassis air baffle in the chassis.
b. (Optional.) Reinstall the supercapacitor on the air baffle.
2. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
3. Rack-mount the server. For more information, see "Rack-mounting the server."
4. Connect the power cord. For more information, see "Connecting the power cord."
5. Power on the server. For more information, see "Powering on the server."
Replacing a fan module
The fan modules are hot swappable and support N+1 redundancy.
If sufficient space is available for replacement, you can replace a fan module without removing the server from the rack.
Guidelines
The server must be fully configured with fan modules of the same model.
The server supports both the single-rotor FAN-8038-2U-G6 fan module and the dual-rotor FAN-8056-2U-G6 fan module. You must install FAN-8056-2U-G6 fan modules if any of the following conditions exists (the sketch after this list codifies these rules):
· A 12LFF drive backplane, a 25SFF drive backplane, two 8SFF UniBay drive backplanes, or three 8SFF UniBay drive backplanes are installed together with processors with a TDP of more than 230 W.
· Processors with a TDP of more than 200 W are installed and drives are installed at the server rear.
· An A2, A16, A30, A40, or A100 GPU module is installed.
· A mid GPU module is installed.
· An MBF2H332A-AENOT network adapter is installed.
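The following is a minimal sketch that codifies the selection rules above, assuming the server configuration is described by the illustrative fields shown; it is a reading aid, not an H3C tool, and the thresholds come directly from the list above.

```python
# Minimal sketch codifying the fan-module selection rules above.
# Field names are illustrative assumptions; thresholds are from the list.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerConfig:
    front_backplanes: List[str] = field(default_factory=list)  # e.g. "12LFF", "25SFF", "8SFF UniBay"
    cpu_tdp_w: int = 0
    rear_drives: bool = False
    gpu_models: List[str] = field(default_factory=list)
    mid_gpu_module: bool = False
    network_adapters: List[str] = field(default_factory=list)

def requires_dual_rotor_fans(cfg: ServerConfig) -> bool:
    """Return True if dual-rotor FAN-8056-2U-G6 fan modules are required."""
    heavy_backplanes = (
        "12LFF" in cfg.front_backplanes
        or "25SFF" in cfg.front_backplanes
        or cfg.front_backplanes.count("8SFF UniBay") >= 2
    )
    if heavy_backplanes and cfg.cpu_tdp_w > 230:
        return True
    if cfg.cpu_tdp_w > 200 and cfg.rear_drives:
        return True
    if any(g in ("A2", "A16", "A30", "A40", "A100") for g in cfg.gpu_models):
        return True
    if cfg.mid_gpu_module:
        return True
    if "MBF2H332A-AENOT" in cfg.network_adapters:
        return True
    return False

# Example: a 12LFF backplane with 250 W processors requires FAN-8056-2U-G6.
print(requires_dual_rotor_fans(ServerConfig(front_backplanes=["12LFF"], cpu_tdp_w=250)))  # True
```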
Removing a fan module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove a fan module. Lift the fan module handle and hold the handle to pull the fan module out of the slot.
Installing a fan module
1. Install a new fan module. Insert the fan module into the slot and press the fan module until it is secured in position.
2. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
3. Rack-mount the server if the server has been removed. For more information, see "Rack-mounting the server."
4. Connect the power cord if the power cord has been disconnected. For more information, see "Connecting the power cord."
5. Power on the server if the server has been powered off. For more information, see "Powering on the server."
Installing and setting up a TCM or TPM
Trusted platform module (TPM) is a microchip embedded in the system board. It stores encryption information (such as encryption keys) for authenticating server hardware and software. The TPM operates with drive encryption programs such as Microsoft Windows BitLocker to provide operating system security and data protection. For information about Microsoft Windows BitLocker, visit the Microsoft website at http://www.microsoft.com.
Trusted cryptography module (TCM) is a trusted computing platform-based hardware module with protected storage space, which enables the platform to implement password calculation.
Installation and setup flowchart
Figure 19 TCM/TPM installation and setup flowchart
Guidelines
· Do not remove an installed TCM or TPM. Once installed, the module becomes a permanent part of the system board.
· To replace a failed TCM or TPM, contact H3C Support to replace the TCM or TPM together with the system board.
· When installing or replacing hardware, H3C technicians cannot configure the TCM or TPM or enter the recovery key. For security reasons, only the user can perform the tasks.
· When replacing the system board, do not remove the TCM or TPM from the system board. H3C will provide a TCM or TPM with a spare system board for the replacement.
· Any attempt to remove an installed TCM or TPM from the system board breaks or disfigures the TCM or TPM security rivet. Upon locating a broken or disfigured rivet on an installed TCM or TPM, administrators should consider the system compromised and take appropriate measures to ensure the integrity of the system data.
· H3C is not liable for blocked data access caused by improper use of the TCM or TPM. For more information, see the encryption technology feature documentation provided by the operating system.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Installing a TCM or TPM
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove all riser cards that hinder the installation.
5. Install the TCM or TPM.
The installation procedure is the same for a TPM and a TCM. The following information uses a TPM to show the procedure:
a. Press the TPM into the TPM connector on the system board.
b. Insert the rivet pin.
c. Insert the security rivet into the hole in the rivet pin and press the security rivet until it is firmly seated.
6. Install the removed riser cards, if any.
7. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
8. Rack-mount the server. For more information, see "Rack-mounting the server."
9. Connect the power cord. For more information, see "Connecting the power cord."
10. Power on the server. For more information, see "Powering on the server."
Enabling the TCM or TPM in the BIOS
1. Access the BIOS utility. For information about how to enter the BIOS utility, see the BIOS user guide.
2. Select Advanced > Trusted Computing, and press Enter.
3. Enable TCM or TPM. By default, the TCM and TPM are enabled for a server.
If the server is installed with a TPM, select TPM State > Enabled, and then press Enter.
If the server is installed with a TCM, select TCM State > Enabled, and then press Enter.
4. Log in to HDM to verify that the TCM or TPM is operating correctly. For more information, see HDM2 online help.
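You can also confirm from the operating system that the module is visible. The sketch below assumes a Linux host, where the kernel exposes detected TPM devices under /sys/class/tpm; HDM and the BIOS remain the authoritative sources of module status.

```python
# Minimal sketch, assuming a Linux host: the kernel exposes detected TPM
# devices under /sys/class/tpm (for example /sys/class/tpm/tpm0). This only
# confirms OS visibility; use HDM or the BIOS for authoritative status.
from pathlib import Path

def list_tpm_devices() -> list:
    base = Path("/sys/class/tpm")
    return sorted(p.name for p in base.iterdir()) if base.is_dir() else []

devices = list_tpm_devices()
if devices:
    print("TPM device(s) detected:", ", ".join(devices))
else:
    print("No TPM device visible to the operating system.")
```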
Configuring encryption in the operating system
For more information about this task, see the encryption technology feature documentation that came with the operating system.
The recovery key/password is generated during BitLocker setup, and can be saved and printed after BitLocker is enabled. When using BitLocker, always retain the recovery key/password. The recovery key/password is required to enter Recovery Mode after BitLocker detects a possible compromise of system integrity or a firmware or hardware change.
For security purposes, follow these guidelines when retaining the recovery key/password:
· Always store the recovery key/password in multiple locations.
· Always store copies of the recovery key/password away from the server.
· Do not save the recovery key/password on the encrypted hard drive.
For more information about Microsoft Windows BitLocker drive encryption, visit the Microsoft website at http://technet.microsoft.com/en-us/library/cc732774.aspx.
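For example, on Windows you can display the configured key protectors, including the numerical recovery password, with the built-in manage-bde tool once BitLocker is enabled. The sketch below simply wraps that tool from Python; run it with administrator rights and store the output according to the guidelines above.

```python
# Minimal sketch, assuming a Windows host with BitLocker enabled on C:.
# It wraps the built-in manage-bde tool to display the configured key
# protectors (including the numerical recovery password) so you can store
# the password in multiple locations away from the server.
import subprocess

def show_recovery_protectors(volume: str = "C:") -> str:
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(show_recovery_protectors("C:"))
```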
Replacing a power supply
The power supplies are hot swappable.
Guidelines
· To avoid damage to hardware, use only H3C approved power supplies.
· The server supports 1+1 power supply redundancy.
· The power supplies installed on the server must be the same model. If they differ in model, HDM raises an alarm.
· The system provides an overtemperature mechanism for power supplies. The power supplies automatically turn off when they encounter an overtemperature situation and automatically turn on when the overtemperature situation is removed.
For more information about power supply specifications, see the documentation for the power supplies.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing a power supply
If two operating power supplies are present and the server rear has sufficient space for replacement, you can replace one of the power supplies without powering off the server.
To remove a power supply:
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the power cord from the power supply:
a. Press the tab to disengage the ratchet from the tie mount, slide the cable clamp outward, and then release the tab.
b. Open the cable clamp and take the power cord out of the clamp.
c. Unplug the power cord.
4. Uninstall the CMA on the side of the power supply, if any:
a. Take out cables that hinder the replacement from the cable baskets of the CMA. During this operation, make sure cables required for server operation remain connected.
b. Press the tab on the CMA connector next to the power supply and then pull the connector out.
5. Remove the power supply. Holding the power supply by its handle and pressing the retaining latch with your thumb, pull the power supply slowly out of the slot.
Installing a power supply
If only one power supply is present, install the new power supply in the slot for the replaced power supply.
To install a power supply:
1. Install a new power supply. Push the power supply into the slot until it snaps into place.
2. Install the removed CMA, if any.
3. Rack-mount the server if the server has been removed. For more information, see "Rack-mounting the server."
4. Connect the power cord if the power cord has been disconnected. For more information, see "Connecting the power cord."
5. Power on the server if the server has been powered off. For more information, see "Powering on the server."
Replacing the NVMe VROC module
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
Removing the NVMe VROC module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the NVMe VROC module. Hold the ring part of the NVMe VROC module and pull the module out.
Installing the NVMe VROC module
1. Install a new NVMe VROC module. Insert the NVMe VROC module onto the NVMe VROC module connector on the system board.
2. Install the removed processor mezzanine board, if any.
3. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
4. Rack-mount the server. For more information, see "Rack-mounting the server."
5. Connect the power cord. For more information, see "Connecting the power cord."
6. Power on the server. For more information, see "Powering on the server."
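With the NVMe VROC module installed, NVMe RAID arrays are typically created in the BIOS or from the operating system. The sketch below shows one common Linux flow using mdadm with IMSM metadata; the device names are hypothetical, the commands destroy data on the member drives, and the storage controller documentation remains the supported reference.

```python
# Hedged sketch: managing Intel VROC NVMe RAID on Linux with mdadm and IMSM
# metadata. Device names (/dev/nvme0n1, /dev/nvme1n1) are hypothetical;
# verify yours with lsblk. Destroys data on the member drives; run as root.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create an IMSM container holding the two NVMe member drives.
run(["mdadm", "--create", "/dev/md/imsm0", "--metadata=imsm",
     "--raid-devices=2", "/dev/nvme0n1", "/dev/nvme1n1"])
# 2. Create a RAID 1 volume inside the container.
run(["mdadm", "--create", "/dev/md/vol0", "--level=1",
     "--raid-devices=2", "/dev/md/imsm0"])
```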
Replacing the system battery
WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
The server comes with a system battery (Panasonic BR2032) installed on the system board, which supplies power to the real-time clock and has a lifespan of 3 to 5 years. If the server no longer automatically displays the correct date and time, you might need to replace the battery. As a best practice, use the Panasonic BR2032 battery to replace the old one.
NOTE: The BIOS will restore to the default settings after the replacement. You must reconfigure the BIOS to have the desired settings, including the system date and time. For more information, see the BIOS user guide for the server.
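Before replacing the battery, you can check whether the real-time clock is drifting from the system time. The sketch below assumes a Linux host whose RTC is kept in UTC and read through the kernel's /sys/class/rtc interface.

```python
# Minimal sketch, assuming a Linux host with the RTC kept in UTC: compare the
# battery-backed hardware clock with the kernel system time. A large or
# growing offset after a power-off period can indicate a depleted battery.
import time
from pathlib import Path

rtc = Path("/sys/class/rtc/rtc0/since_epoch")   # RTC time, seconds since epoch
rtc_seconds = int(rtc.read_text())
offset = time.time() - rtc_seconds
print(f"RTC is {offset:+.0f} s relative to system time.")
```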
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing the system battery
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. Remove the system battery. Pinch the system battery by its top edge and the battery will disengage from the battery holder.
NOTE: For environmental protection purposes, dispose of the used system battery at a designated site.
Installing the system battery
1. Install the system battery. Insert the system battery with the plus sign "+" facing up into the system battery holder, and press down the battery to secure it into place.
2. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
3. Rack-mount the server. For more information, see "Rack-mounting the server."
4. Connect the power cord. For more information, see "Connecting the power cord."
5. Power on the server. For more information, see "Powering on the server."
6. Access the BIOS to reconfigure the system date and time. For more information, see the BIOS user guide for the server.
Replacing a rear 4GPU module
Prerequisites
· Take the following ESD prevention measures:
¡ Wear antistatic clothing.
¡ Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
¡ Do not wear any conductive objects, such as jewelry or watches.
· When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing a rear 4GPU module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. (Optional.) Disconnect all cables that hinder the replacement, if any.
5. Remove the rear 4GPU module where the GPU module resides. Lift the rear 4GPU module out of the chassis.
6. Remove the GPU module from the rear 4GPU module:
a. Disconnect the cable from the GPU module, if any.
b. Pull the GPU module out from the slot.
Installing a rear 4GPU module
1. Install a GPU module on the rear 4GPU module:
a. Insert the GPU module into the PCIe slot along the guide rails.
b. Connect the GPU module power cord.
2. (Optional.) Reconnect other cables to the rear 4GPU module.
3. Install the rear 4GPU module on the server:
a. Insert the rear 4GPU module into the chassis along the slot.
b. Connect the rear 4GPU module power cord.
4. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
5. Rack-mount the server. For more information, see "Rack-mounting the server."
6. Connect the power cord.
7. Power on the server. For more information, see "Powering on the server."
Installing a GPU module on the rear 4GPU module
Prerequisites
· Take the following ESD prevention measures:
¡ Wear antistatic clothing.
¡ Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
¡ Do not wear any conductive objects, such as jewelry or watches.
· When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing a rear 4GPU module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. (Optional.) Disconnect all cables that hinder the removal, if any.
5. Remove the rear 4GPU module that is not installed with a GPU module. Lift the rear 4GPU module out of the chassis.
Installing a rear 4GPU module
1. Use a screwdriver to remove the mounting screw from a 4GPU module blank, and then remove the blank.
2. Install a GPU module on the rear 4GPU module:
a. Insert the GPU module into the PCIe slot along the guide rails.
b. Connect one end of the power cord to the GPU module based on the power cord label.
c. Connect the other end of the power cord to the 4GPU module.
3. (Optional.) Reconnect other cables to the rear 4GPU module.
4. Insert the rear 4GPU module into the chassis along the slot.
5. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
6. Rack-mount the server. For more information, see "Rack-mounting the server."
7. Connect the power cord.
8. Power on the server. For more information, see "Powering on the server."
Replacing a mid GPU module
Prerequisites
· Take the following ESD prevention measures:
¡ Wear antistatic clothing.
¡ Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
¡ Do not wear any conductive objects, such as jewelry or watches.
· When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (bent, for example) and that no foreign objects are present.
Removing a mid GPU module
1. Power off the server. For more information, see "Powering off the server."
2. Remove the server from the rack. For more information, see "Removing the server from a rack."
3. Remove the access panel:
a. Press the button on the locking lever and then lift the locking lever.
The access panel automatically slides to the server rear.
b. Lift the access panel to remove it from the server.
4. (Optional.) Disconnect all cables that hinder the replacement, if any.
5. Remove the mid GPU module where the GPU module resides. Lift the mid GPU module out of the chassis.
6. Remove the GPU module from the mid GPU module:
a. Remove the riser card from the mid GPU module.
b. Disconnect the cable from the GPU module, if any.
c. Loosen the mounting screws on the riser card, and then pull the GPU module out from the slot.
Installing a mid GPU module
1. Install a GPU module on the riser card:
Insert the GPU module into the PCIe slot along the guide rails, and then fasten the mounting screws on the riser card.
2. Install the riser card on the mid GPU module.
a. Insert the riser card into the PCIe slot along the guide rails.
b. Connect the mid GPU module power cord.
3. (Optional.) Reconnect other cables to the mid GPU module.
4. Insert the mid GPU module into the chassis along the slot.
5. Install the access panel:
a. Place the access panel onto the server.
b. Slide the access panel to the server front.
c. Press down the locking lever on the access panel until it snaps into place.
6. Rack-mount the server. For more information, see "Rack-mounting the server."
7. Connect the power cord.
8. Power on the server. For more information, see "Powering on the server."
Removing and installing a blank
Install a blank over an empty slot if the corresponding module is not present, and remove the blank before you install the module. This applies to the following modules:
· Drives.
· LCD smart management module.
· Drive backplanes.
· Power supplies.
· Riser cards.
· PCIe modules.
· OCP network adapter.
Prerequisites
Take the following ESD prevention measures:
· Wear antistatic clothing.
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Do not wear any conductive objects, such as jewelry or watches.
Procedures
Use Table 24 as a guide when you remove or install a blank for a hardware option.
Table 24 Removing or installing a blank
Task | Procedure
Remove a drive blank. | Press the latches on the drive blank inward with one hand, and pull the drive blank out of the slot.
Install a drive blank. | Insert the drive blank into the slot.
Remove the LCD smart management module blank. | From the inside of the chassis, use a flat-head screwdriver to push aside the clip of the blank and push the blank outward to disengage the blank. Then, pull the blank out of the server.
Install the LCD smart management module blank. | Insert the blank into the slot and push the blank until you hear a click.
Remove a drive backplane blank. | From the inside of the chassis, use a flat-head screwdriver to push aside the clip of the blank and push the blank outward to disengage the blank. Then, pull the blank out of the server.
Install a drive backplane blank. | Insert the drive backplane blank into the slot and push the blank until you hear a click.
Remove a power supply blank. | Hold and pull the power supply blank out of the slot.
Install a power supply blank. | Insert the power supply blank into the slot with the TOP mark facing up.
Remove a riser card blank. | Lift the riser card blank to remove it from the connector.
Install a riser card blank. | Insert the riser card blank into the slot along the guide rails.
Remove a PCIe module blank. | Open the retaining latch of the riser card and then lift the blank upwards.
Install a PCIe module blank. | Insert the PCIe module blank into the slot and then close the retaining latch of the riser card.
Remove an OCP network adapter blank. | Insert a screwdriver into the raised small hole on the OCP network adapter blank, and pull the blank out.
Install an OCP network adapter blank. | Insert the OCP network adapter blank into the slot.
Connecting internal cables
The server requires more data cables (including SAS/SATA and NVMe data cables) than AUX cables and power cords, and their cabling methods are more complicated. This section provides cable code information for the data cables, which you can use to identify the cables and their connection methods.
Guidelines
Follow these guidelines when connecting the internal cables:
· Do not route the cables above the removable components, such as DIMMs.
· Route the internal cables without hindering installation or removal of other components or hindering other internal components.
· Route the cables neatly in their own fixed spaces. Make sure the cables will not be squeezed or scratched by other internal components.
· Do not pull the connectors when routing the cables.
· Do not use a cable tie to bundle an excessive number of cables.
· Appropriately bind long cables. Coil and use cable ties to secure unused cables.
· Connect the drive cables until they click into place.
· Remove the cap (if any) from the target cable connector before connecting a cable to it.
· If you cannot identify the cables by labels provided with the cables, apply new labels to cables for easy identification.
Connecting drive cables
Drive cables include SAS/SATA data cables, NVMe data cables, power cords, and AUX cables. The server supports multiple drive configurations. This section uses the following eight typical drive configurations as examples to explain the cabling schemes for drives. For the cabling schemes of other drive configurations, contact technical support.
· 8LFF SAS/SATA drives at the front
· 12LFF (8 SAS/SATA + 4 UniBay) drives at the front
· 12LFF (4 SAS/SATA + 8 UniBay) drives at the front
· 12LFF UniBay drives at the front
· 25SFF (17 SAS/SATA + 8 UniBay) drives at the front
· 8SFF UniBay+8SFF UniBay+8SFF UniBay drives at the front and 2SFF SAS/SATA drives at the rear
· 12LFF SAS/SATA drives at the front + 2SFF SAS/SATA+x16 drives at the rear
· 12LFF (8 SAS/SATA + 4 UniBay) drives at the front + 2SFF UniBay drives at the rear
8LFF SAS/SATA drives at the front
1. Connect SAS/SATA data cables for the 8LFF drives at the front.
Figure 20 Connecting SAS/SATA data cables for the 8LFF drives at the front
No. | Cable type | Cable code | Description
1 | SAS/SATA data cable | 0404A1KT | From connector SAS PORT on the front backplane to connector C0 on the storage controller
NOTE: To use this cabling scheme, you can install the storage controller in slots 1 to 8. In this example, the storage controller is installed in slot 1.
2. Connect an AUX cable and power cord for 8LFF drives at the front.
Figure 21 Connecting an AUX cable and power cord for the 8LFF drives at the front
No. | Cable type | Description
1 | AUX cable | From connector AUX on the front drive backplane to connector AUX1 on the system board
2 | Power cord | From connector PWR on the front drive backplane to connector PWR1 on the system board
12LFF (8 SAS/SATA + 4 UniBay) drives at the front
1. Connect SAS/SATA data cables for the 12LFF drives at the front.
Figure 22 Connecting SAS/SATA data cables for the 12LFF drives at the front
No. | Cable type | Cable code | Description
1 | SAS/SATA data cable | 0404A1QM | From connector SAS PORT1 on the front backplane to connector C0 on the storage controller
2 | SAS/SATA data cable | 0404A1RG | From connector SAS PORT2 on the front backplane to connector C1 on the storage controller
NOTE: To use this cabling scheme, you can install the storage controller in slots 1 to 6. In this example, the storage controller is installed in slot 1.
2. Connect power cords, AUX cable, and NVMe cables for 12LFF drives at the front.
Figure 23 Connecting power cords, AUX cable, and NVMe cables for 12LFF drives at the front
No. | Cable type | Cable code | Description
1 | NVMe data cable | 0404A209 | From connectors NVME A3 and NVME A4 on the front drive backplane to connector C1-P4C on the system board
2 | NVMe data cable | 0404A1WS | From connector NVME A1/A2 on the front drive backplane to connector C1-P4A on the system board
3 | Power cord | N/A | From connector PWR2 on the front drive backplane to connector PWR2 on the system board
4 | AUX cable | N/A | From connector AUX on the front drive backplane to connector AUX1 on the system board
5 | Power cord | N/A | From connector PWR1 on the front drive backplane to connector PWR1 on the system board
12LFF (4 SAS/SATA + 8 UniBay) drives at the front
1. Connect an SAS/SATA data cable for the 12LFF drives at the front.
Figure 24 Connecting an SAS/SATA data cable for the 12LFF drives at the front
No. | Cable type | Cable code | Description
1 | SAS/SATA data cable | 0404A1QM | From connector SAS PORT on the front backplane to connector C0 on the storage controller
NOTE: To use this cabling scheme, you can install the storage controller in slots 1 to 8. In this example, the storage controller is installed in slot 1.
2. Connect power cords, AUX cable, and NVMe cables for 12LFF drives at the front.
Figure 25 Connecting power cords, AUX cable, and NVMe cables for 12LFF drives at the front
No. | Cable type | Cable code | Description
1 | NVMe data cable | 0404A1WS | From connector NVME A1/A2 on the front drive backplane to connector C1-P3A on the system board
2 | NVMe data cable | 0404A1PY | From connector NVME A3/A4 on the front drive backplane to connector C1-P3C on the system board
3 | NVMe data cable | 0404A1WS | From connector NVME B1/B2 on the front drive backplane to connector C1-P4A on the system board
4 | Power cord | N/A | From connector PWR2 on the front drive backplane to connector PWR2 on the system board
5 | Power cord | N/A | From connector PWR1 on the front drive backplane to connector PWR1 on the system board
6 | NVMe data cable | 0404A1WT | From connector NVME B3/B4 on the front drive backplane to connector C1-P4C on the system board
7 | AUX cable | N/A | From connector AUX on the front drive backplane to connector AUX1 on the system board
12LFF UniBay drives at the front
1. Connect SAS/SATA data cables for the 12LFF drives at the front.
Figure 26 Connecting SAS/SATA data cables for the 12LFF drives at the front
No. | Cable type | Cable code | Description
1 | SAS/SATA data cable | 0404A1RE | From connector SAS PORT1 on the front backplane to connectors M.2&SATA PORT1 and SATA PORT2 on the system board
2 | SAS/SATA data cable | 0404A1RF | From connector SAS PORT2 on the front backplane to connector SATA PORT3 on the system board
2. Connect power cords and AUX cable for 12LFF drives at the front.
Figure 27 Connecting power cords and AUX cable for 12LFF drives at the front
No. | Cable type | Description
1 | Power cord | From connector PWR2 on the front drive backplane to connector PWR2 on the system board
2 | AUX cable | From connector AUX on the front drive backplane to connector AUX1 on the system board
3 | Power cord | From connector PWR1 on the front drive backplane to connector PWR1 on the system board
3. Connect NVMe cables for 12LFF drives at the front.
Figure 28 Connecting NVMe cables for 12LFF drives at the front
No. | Cable type | Cable code | Description
1 | NVMe data cable | 0404A209 | From connectors NVME A3 and NVME A4 on the front drive backplane to connector C1-P3C on the system board
2 | NVMe data cable | 0404A1WS | From connector NVME A1/A2 on the front drive backplane to connector C1-P3A on the system board
3 | NVMe data cable | 0404A1WS | From connector NVME B1/B2 on the front drive backplane to connector C1-P4A on the system board
4 | NVMe data cable | 0404A1WT | From connector NVME B3/B4 on the front drive backplane to connector C1-P4C on the system board
5 | NVMe data cable | 0404A207 | From connectors NVME C1 and NVME C2 on the front drive backplane to connector C2-P4A on the system board
6 | NVMe data cable | 0404A1WS | From connector NVME C3/C4 on the front drive backplane to connector C2-P4C on the system board
25SFF (17 SAS/SATA + 8 UniBay) drives at the front
1. Connect SAS/SATA data cables for the 25SFF drives at the front.
Figure 29 Connecting SAS/SATA data cables for the 25SFF drives at the front
No. | Cable type | Cable code | Description
1 | SAS/SATA data cable | 0404A1QM | From connector SAS PORT1 on the front backplane to connector C0 on the storage controller
NOTE: To use this cabling scheme, you can install the storage controller in slots 1 to 8. In this example, the storage controller is installed in slot 1.
2. Connect NVMe data cables, power cords, and AUX cable for 25SFF drives at the front.
Figure 30 Connecting NVMe data cables, power cords, and AUX cable for 25SFF drives at the front
No. | Cable type | Cable code | Description
1 | NVMe data cable | 0404A1PY | From connector NVME 1 on the front drive backplane to connector C1-P3A on the system board
2 | NVMe data cable | 0404A1PY | From connector NVME 2 on the front drive backplane to connector C1-P3C on the system board
3 | NVMe data cable | 0404A1PY | From connector NVME 3 on the front drive backplane to connector C1-P4A on the system board
4 | NVMe data cable | 0404A1PY | From connector NVME 4 on the front drive backplane to connector C1-P4C on the system board
5 | Power cord | N/A | From connector PWR3 on the front drive backplane to connector PWR3 on the system board
6 | AUX cable | N/A | From connector AUX on the front drive backplane to connector AUX1 on the system board
7 | Power cord | N/A | From connector PWR2 on the front drive backplane to connector PWR2 on the system board
8 | Power cord | N/A | From connector PWR1 on the front drive backplane to connector PWR1 on the system board
8SFF UniBay+8SFF UniBay+8SFF UniBay drives at the front and 2SFF SAS/SATA drives at the rear
1. Connect power cords and AUX cables for the 8SFF UniBay+8SFF UniBay+8SFF UniBay drives at the front.
Figure 31 Connecting power cords and AUX cables for the 8SFF UniBay+8SFF UniBay+8SFF UniBay drives at the front
No. | Cable type | Description
1 | AUX cable | From connector AUX on the drive backplane in front bay 3 to connector AUX3 on the system board
2 | Power cord | From connector PWR on the drive backplane in front bay 3 to connector PWR3 on the system board
3 | AUX cable | From connector AUX on the drive backplane in front bay 2 to connector AUX2 on the system board
4 | Power cord | From connector PWR on the drive backplane in front bay 2 to connector PWR2 on the system board
5 | Power cord | From connector PWR on the drive backplane in front bay 1 to connector PWR1 on the system board
6 | AUX cable | From connector AUX on the drive backplane in front bay 1 to connector AUX1 on the system board
2. Connect NVMe cables for 8SFF UniBay+8SFF UniBay+8SFF UniBay drives at the front.
Figure 32 Connecting NVMe cables for 8SFF UniBay+8SFF UniBay+8SFF UniBay drives at the front (1)
No. | Cable type | Cable code | Description
1 | NVMe data cable | 0404A1Q2 | From connector NVME B1/B2 on the drive backplane in front bay 3 to connector C1-P3A on the system board
2 | NVMe data cable | 0404A1Q2 | From connector NVME B3/B4 on the drive backplane in front bay 3 to connector C1-P3C on the system board
3 | NVMe data cable | 0404A1PY | From connector NVME A1/A2 on the drive backplane in front bay 2 to connector C1-P4A on the system board
4 | NVMe data cable | 0404A1WT | From connector NVME A3/A4 on the drive backplane in front bay 2 to connector C1-P4C on the system board
5 | NVMe data cable | 0404A2WS | From connector NVME A1/A2 on the drive backplane in front bay 1 to connector C2-P3A on the system board
6 | NVMe data cable | 0404A1WS | From connector NVME A3/A4 on the drive backplane in front bay 1 to connector C2-P3C on the system board
7 | NVMe data cable | 0404A1Q2 | From connector NVME B1/B2 on the drive backplane in front bay 1 to connector C2-P4A on the system board
8 | NVMe data cable | 0404A1WS | From connector NVME B3/B4 on the drive backplane in front bay 1 to connector C2-P4C on the system board
Figure 33 Connecting NVMe cables for 8SFF UniBay+8SFF UniBay+8SFF UniBay drives at the front (2)
No. | Cable type | Cable code | Description
1 | NVMe data cable | 0404A1PW | From connector NVME B1/B2 on the drive backplane in front bay 2 to connector C2-P2A on the system board
2 | NVMe data cable | 0404A1PW | From connector NVME B3/B4 on the drive backplane in front bay 2 to connector C2-P2C on the system board
3 | NVMe data cable | 0404A1PW | From connector NVME A1/A2 on the drive backplane in front bay 3 to connector C1-P2A on the system board
4 | NVMe data cable | 0404A1PW | From connector NVME A3/A4 on the drive backplane in front bay 3 to connector C1-P2C on the system board
3. Connect a power cord, AUX cable, and SAS/SATA cable for 2SFF drives at the rear.
Figure 34 Connecting a power cord, AUX cable, and SAS/SATA cable for 2SFF drives at the rear
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | AUX cable | N/A | From connector AUX on the rear drive backplane to connector AUX5 on the system board |
2 | Power cord | N/A | From connector PWR on the rear drive backplane to connector PWR4 on the system board |
3 | SAS/SATA data cable | 0404A1RP | From connector SAS PORT on the rear drive backplane to connector M.2&SATA PORT1 on the system board |
12LFF SAS/SATA drives at the front + 2SFF SAS/SATA+x16 drives at the rear
1. Connect SAS/SATA data cables for the 12LFF drives at the front.
Figure 35 Connecting SAS/SATA data cables for the 12LFF drives at the front
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | SAS/SATA data cable | 0404A1RE | From connector SAS PORT1 on the front drive backplane to connectors M.2&SATA PORT1 and SATA PORT2 on the system board |
2 | SAS/SATA data cable | 0404A1RF | From connector SAS PORT2 on the front drive backplane to connector SATA PORT3 on the system board |

NOTE: To use this cabling scheme, you can only install the storage controller in slot 6.
2. Connect an AUX cable and power cords for the 12LFF drives at the front.
Figure 36 Connecting an AUX cable and power cords for the 12LFF drives at the front
No. | Cable type | Description |
---|---|---|
1 | Power cord | From connector PWR2 on the front drive backplane to connector PWR2 on the system board |
2 | AUX cable | From connector AUX on the front drive backplane to connector AUX1 on the system board |
3 | Power cord | From connector PWR1 on the front drive backplane to connector PWR1 on the system board |
3. Connect an AUX cable, power cord, and SAS/SATA data cable for the 2SFF drives at the rear.
Figure 37 Connecting an AUX cable, power cord, and SAS/SATA data cable for the 2SFF drives at the rear
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | AUX cable | N/A | From connector AUX on the rear drive backplane to connector AUX4 on the system board |
2 | Power cord | N/A | From connector PWR on the rear drive backplane to connector PWR5 on the system board |
3 | SAS/SATA cable | 0404A1RU | From connector SAS PORT on the rear drive backplane to connector C0 on the storage controller |
12LFF (8 SAS/SATA + 4 UniBay) drives at the front + 2SFF UniBay drives at the rear
1. Connect a SAS/SATA cable for the 12LFF drives at the front and an NVMe data cable for the 2SFF UniBay drives at the rear.
Figure 38 Connecting a SAS/SATA cable for the 12LFF drives at the front and an NVMe data cable for the 2SFF UniBay drives at the rear
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | NVMe data cable | 0404A1Q3 | From connector NVME on the rear drive backplane to connector C2-P3A on the system board |
2 | SAS/SATA data cable | 0404A1QM | From connector SAS PORT1 on the front drive backplane to connector C0 on the storage controller |

NOTE: To use this cabling scheme, you can install the storage controller in slots 1 to 8. In this example, the storage controller is installed in slot 1.
2. Connect power cords, an AUX cable, and NVMe cables for the 12LFF drives at the front.
Figure 39 Connecting power cords, an AUX cable, and NVMe cables for the 12LFF drives at the front
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | NVMe data cable | 0404A209 | From connectors NVME A3 and NVME A4 on the front drive backplane to connector C1-P4C on the system board |
2 | NVMe data cable | 0404A1WS | From connector NVME A1/A2 on the front drive backplane to connector C1-P4A on the system board |
3 | Power cord | N/A | From connector PWR2 on the front drive backplane to connector PWR2 on the system board |
4 | AUX cable | N/A | From connector AUX on the front drive backplane to connector AUX1 on the system board |
5 | Power cord | N/A | From connector PWR1 on the front drive backplane to connector PWR1 on the system board |
3. Connect an AUX cable and power cord for the 2SFF drives at the rear.
Figure 40 Connecting an AUX cable and power cord for the 2SFF drives at the rear
No. | Cable type | Description |
---|---|---|
1 | AUX cable | From connector AUX on the rear drive backplane to connector AUX5 on the system board |
2 | Power cord | From connector PWR on the rear drive backplane to connector PWR4 on the system board |
Connecting cables for the OCP network adapter
Figure 41 Connecting cables for the OCP network adapter
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | PCIe data cable | 0404A1SB | From MCIO connector 1 (PCIe port 1) and MCIO connector 2 (PCIe port 2) on the OCP network adapter to riser connector 3 (RISER3 PCIe X16) on the system board |
2 | AUX cable | N/A | From connector AUX on the OCP network adapter to connector AUX8 on the system board |
3 | Power cord | N/A | From connector PWR on the OCP network adapter to connector PWR6 on the system board |
Connecting the supercapacitor cable
Connecting cables for the mid GPU module
Figure 42 Connecting an AUX cable for the mid GPU module
No. | Cable type | Description |
---|---|---|
1 | AUX cable | From connector AUX on the mid GPU module to connector AUX7 on the system board |
Figure 43 Connecting data cables for the mid GPU module
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | PCIe data cable | 0404A1RJ | From connector PCIe port 5 on the mid GPU module to connector C2-P4C on the system board |
2 | PCIe data cable | 0404A1RJ | From connector PCIe port 4 on the mid GPU module to connector C2-P4A on the system board |
3 | PCIe data cable | 0404A1RJ | From connector PCIe port 3 on the mid GPU module to connector C1-P3A on the system board |
4 | PCIe data cable | 0404A1RJ | From connector PCIe port 2 on the mid GPU module to connector C1-P3C on the system board |
Figure 44 Connecting a power cord for the mid GPU module
No. | Cable type | Description |
---|---|---|
1 | Power cord | From connector PWR on the mid GPU module to connector PWR6 on the system board |
Connecting cables for the rear 4GPU module
Figure 45 Connecting data cables for the rear 4GPU module
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | PCIe data cable | 0404A1Y8 | From rear GPU module S3 in slot 14 to connector C2-P3A on the system board |
2 | PCIe data cable | 0404A1Y8 | From rear GPU module S4 in slot 14 to connector C2-P3C on the system board |
3 | PCIe data cable | 0404A1Y8 | From rear GPU module S2 in slot 13 to connector C2-P4C on the system board |
4 | PCIe data cable | 0404A1Y8 | From rear GPU module S1 in slot 13 to connector C2-P4A on the system board |
5 | PCIe data cable | 0404A1Y9 | From rear GPU module S3 in slot 12 to connector C1-P3A on the system board |
6 | PCIe data cable | 0404A1Y9 | From rear GPU module S4 in slot 12 to connector C1-P3C on the system board |
7 | PCIe data cable | 0404A1Y9 | From rear GPU module S2 in slot 11 to connector C1-P4C on the system board |
8 | PCIe data cable | 0404A1Y9 | From rear GPU module S1 in slot 11 to connector C1-P4A on the system board |
Figure 46 Connecting power cords for the rear 4GPU module
Connecting cables for the front M.2 SSD expander module
Figure 47 Connecting cables for the front M.2 SSD expander module
No. | Cable type | Cable code | Description |
---|---|---|---|
1 | SAS/SATA/NVMe data & AUX cable | 0404A1S9 | From connector M.2 PORT on the M.2 SSD expander module to connector M.2&SATA PORT1 on the system board, and from connector M.2 PORT on the M.2 SSD expander module to connector AUX on the system board |
Connecting cables for riser cards
Some riser cards can provide additional PCIe links for the slots on the card by connecting to the system board. This section introduces the cabling schemes for these riser cards. For detailed information about PCIe riser connectors, see "Riser card and PCIe module compatibility."
RC-3FHFL-2U-G6
The RC-3FHFL-2U-G6 riser card can connect to PCIe riser connector 1 or 2 on the system board. The system board connectors that the cables attach to differ depending on the PCIe riser connector used, as shown in Table 25. For more information about the connector locations, see system board components in "Appendix A Server specifications." The cable connection method is similar regardless of which riser connector is used. This section uses riser connector 1 as an example.
Figure 48 Connecting PCIe cables for the RC-3FHFL-2U-G6 riser card
Table 25 Cable connections
Riser connector | Cable code | Cable number | Connector on the riser card | Connector on the system board |
---|---|---|---|---|
Riser1 | 0404A1Q1 | 1 | SLOT1-A | MCIO connector C1-P3A |
Riser1 | 0404A1Q1 | 2 | SLOT1-C | MCIO connector C1-P3C |
Riser1 | 0404A1QH (optional) | 3 | SLOT2-A | MCIO connector C1-P2A |
Riser1 | 0404A1QH (optional) | 4 | SLOT2-C | MCIO connector C1-P2C |
Riser2 | 0404A1Q1 | 1 | SLOT1-A | MCIO connector C2-P4A |
Riser2 | 0404A1Q1 | 2 | SLOT1-C | MCIO connector C2-P4C |
Riser2 | 0404A1Q (optional) | 3 | SLOT2-A | MCIO connector C2-P2A |
Riser2 | 0404A1Q (optional) | 4 | SLOT2-C | MCIO connector C2-P2C |
RC-3FHHL-2U-G6
The RC-3FHHL-2U-G6 riser card can connect to riser connector 1 or 2 on the system board. The system board connectors that the cables attach to differ depending on the riser connector used, as shown in Table 26. For more information about the connector locations, see system board components in "Appendix A Server specifications." The cable connection method is similar regardless of which riser connector is used. This section uses riser connector 1 as an example.
Table 26 Cable connections
Riser connector | Cable code | Cable number | Connector on the riser card | Connector on the system board |
---|---|---|---|---|
Riser1 | 0404A1QH | 1 | SLOT1-A | MCIO connector C1-P2A |
Riser1 | 0404A1QH (optional) | 2 | SLOT1-C | MCIO connector C1-P2C |
Riser2 | 0404A1QH | 1 | SLOT1-A | MCIO connector C2-P2A |
Riser2 | 0404A1QH (optional) | 2 | SLOT1-C | MCIO connector C2-P2C |
RC-2HHHL-R4-2U-G6
The RC-2HHHL-R4-2U-G6 riser card connects to riser connector 4 on the system board. For more information about the connector locations, see system board components in "Appendix A Server specifications."
Figure 49 Connecting the power cord, AUX cable, and PCIe cables for the RC-2HHHL-R4-2U-G6 riser card
Table 27 Cable connections
Riser connector | Cable code | Cable number | Connector on the riser card | Connector on the system board |
---|---|---|---|---|
Riser4 | N/A | 1 | SLOT1 | MCIO connector C1-P2C |
Riser4 | N/A | 2 | SLOT2 | MCIO connector C2-P2C |
Riser4 | 0404A1T1 | 3 | AUX | AUX8 |
Riser4 | 0404A1WW | 4 | PWR | PWR4 |
Connecting the LCD smart management module cable
Figure 50 Connecting the LCD smart management module cable
Connecting an inlet temperature sensor cable
Figure 51 Connecting an inlet temperature sensor cable
Connecting chassis ear cables
Figure 52 Connecting chassis ear cables
(1) Left chassis ear cable (2) Right chassis ear cable
Maintenance
The following information describes the guidelines and tasks for daily server maintenance.
Guidelines
· Keep the equipment room clean and tidy. Remove unnecessary devices and objects from the equipment room.
· Make sure the temperature and humidity in the equipment room meet the server operating requirements.
· Regularly check the operating health of the server from HDM.
· Keep the operating system and software up to date as required.
· Make a reliable backup plan:
○ Back up data regularly.
○ If data operations on the server are frequent, back up data at shorter intervals than the regular backup schedule as needed.
○ Check the backup data regularly for corruption, for example, by verifying checksums as shown in the sketch after this list.
· Stock spare components on site in case replacements are needed. After a spare component is used, prepare a new one.
· Keep the network topology up to date to facilitate network troubleshooting.
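One lightweight way to implement the backup integrity check mentioned above is to record a checksum for each backup file when it is created and recompute it during regular verification. The following Python sketch is a minimal example, assuming backups are plain files listed in a manifest of "checksum  path" lines; the manifest layout is an assumption, so adapt the sketch to your actual backup tooling.

```python
# Minimal sketch: verify backup files against previously recorded SHA-256
# checksums. Assumes a manifest file of "checksum  relative-path" lines
# written when the backups were created (this layout is an assumption).
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: Path) -> bool:
    """Recompute each checksum and report mismatches. Returns True if all match."""
    ok = True
    for line in manifest.read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        if sha256_of(manifest.parent / name) != expected:
            print(f"CORRUPT: {name}")
            ok = False
    return ok

if __name__ == "__main__":
    verify(Path("/backup/manifest.sha256"))  # placeholder manifest path
```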
Maintenance tools
The following are major tools for server maintenance:
· Hygrothermograph—Monitors the operating environment of the server.
· HDM and UniSystem—Monitor the operating status of the server.
Maintenance tasks
Observing LED status
Observe the LED status on the front and rear panels of the server to verify that the server modules are operating correctly. For more information about the status of the front and rear panel LEDs, see "Front panel" and "Rear panel."
Monitoring the temperature and humidity in the equipment room
Use a hygrothermograph to monitor the temperature and humidity in the equipment room.
The temperature and humidity in the equipment room must meet the server requirements described in "Environment requirements."
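If the hygrothermograph can be read programmatically, the comparison against the documented ranges can be scripted. The following sketch is illustrative only: the threshold values are placeholders to be replaced with the figures from "Environment requirements," and read_sensor() is a hypothetical stand-in for whatever interface your hygrothermograph provides.

```python
# Minimal sketch: compare equipment room readings against operating thresholds.
# The ranges below are placeholders -- substitute the values from
# "Environment requirements". read_sensor() is hypothetical.

TEMP_RANGE_C = (18.0, 27.0)       # placeholder operating temperature range
HUMIDITY_RANGE_PCT = (8.0, 90.0)  # placeholder relative humidity range

def read_sensor() -> tuple[float, float]:
    """Hypothetical: return (temperature_c, relative_humidity_pct)."""
    raise NotImplementedError("Wire this to your hygrothermograph interface.")

def check(temperature_c: float, humidity_pct: float) -> list[str]:
    """Return a list of warnings for out-of-range readings."""
    warnings = []
    if not TEMP_RANGE_C[0] <= temperature_c <= TEMP_RANGE_C[1]:
        warnings.append(f"Temperature {temperature_c:.1f} C outside {TEMP_RANGE_C}")
    if not HUMIDITY_RANGE_PCT[0] <= humidity_pct <= HUMIDITY_RANGE_PCT[1]:
        warnings.append(f"Humidity {humidity_pct:.1f}% outside {HUMIDITY_RANGE_PCT}")
    return warnings

# Example: check(24.0, 55.0) returns [] when both readings are in range.
```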
Examining cable connections
Verify that the cables and power cords are correctly connected.
Guidelines
· Do not use excessive force when connecting or disconnecting cables.
· Do not twist or stretch the cables.
· Organize the cables appropriately. For more information, see "Cabling guidelines."
Checklist
· The cable type is correct.
· The cables are correctly and firmly connected and the cable length is appropriate.
· The cables are in good condition and are not twisted or corroded at the connection point.
Viewing server status
To view basic information and status of the subsystems of the server, see "View device information" in H3C Servers HDM2 Online Help.
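If you prefer to poll server status from a script, HDM also supports a Redfish-compliant REST interface; see the HDM documentation for your version. The sketch below uses only standard Redfish resource paths to report overall system health. The HDM address and credentials are placeholders, field availability can vary by HDM version, and certificate verification is disabled for brevity, so treat this as a starting point rather than a supported tool.

```python
# Sketch: poll overall server health through HDM's Redfish interface.
# HDM_HOST and AUTH are placeholders; the resource paths are standard
# Redfish, but available fields can vary by HDM version.
import requests

HDM_HOST = "https://192.168.1.100"  # placeholder HDM address
AUTH = ("admin", "password")        # placeholder credentials

def get(path: str) -> dict:
    # verify=False skips certificate checks for brevity; enable it in production.
    resp = requests.get(HDM_HOST + path, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

systems = get("/redfish/v1/Systems")
for member in systems.get("Members", []):
    system = get(member["@odata.id"])
    status = system.get("Status", {})
    print(system.get("Id"), "Health:", status.get("Health"), "State:", status.get("State"))
```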
Collecting server logs
For the procedure for collecting server logs, see the log download instructions in H3C Servers HDM User Guide.
Updating firmware for the server
For the procedure for updating the HDM firmware, BIOS, or CPLD, see H3C Servers Firmware Update Guide.