H3C UIS-Cell 6000 G3 Hyper-Converged Infrastructure User Guide-5W101


Contents

Safety information
Safety sign conventions
Power source recommendations
Installation safety recommendations
General operating safety
Electrical safety
Rack mounting recommendations
ESD prevention
Cooling performance
Battery safety
Preparing for installation
Rack requirements
Installation site requirements
Space and airflow requirements
Temperature, humidity, and altitude requirements
Cleanliness requirements
Grounding requirements
Installation tools
Installing or removing the server
Installing the server
Installing the chassis rails and slide rails
Rack-mounting the server
(Optional) Installing the CMA
Connecting external cables
Cabling guidelines
Connecting a mouse, keyboard, and monitor
Connecting an Ethernet cable
Connecting a USB device
Connecting the power cord
Securing cables
Removing the server from a rack
Powering on and powering off the server
Important information
Powering on the server
Prerequisites
Procedure
Powering off the server
Guidelines
Procedure
Configuring the server
Powering on the server
Updating firmware
Deploying and registering UIS Manager
Installing hardware options
Installing the security bezel
Installing SAS/SATA drives
Installing NVMe drives
Installing power supplies
Installing a compute module
Installing air baffles
Installing the low mid air baffle or GPU module air baffle to a compute module
Installing the GPU module air baffle to a rear riser card
Installing riser cards and PCIe modules
Guidelines
Installing a riser card and a PCIe module in a compute module
Installing riser cards and PCIe modules at the server rear
Installing storage controllers and power fail safeguard modules
Installing GPU modules
Guidelines
Installing a GPU module in a compute module
Installing a GPU module to a rear riser card
Installing Ethernet adapters
Guidelines
Installing an mLOM Ethernet adapter
Installing a PCIe Ethernet adapter
Installing PCIe M.2 SSDs
Guidelines
Installing a PCIe M.2 SSD in a compute module
Installing a PCIe M.2 SSD at the server rear
Installing SD cards
Installing an NVMe SSD expander module
Installing the NVMe VROC module
Installing a drive backplane
Installing a diagnostic panel
Installing processors
Installing DIMMs
Guidelines
Procedure
Installing and setting up a TCM or TPM
Installation and setup flowchart
Installing a TCM or TPM
Enabling the TCM or TPM in the BIOS
Configuring encryption in the operating system
Replacing hardware options
Replacing the security bezel
Replacing a SAS/SATA drive
Replacing an NVMe drive
Replacing a compute module and its main board
Removing a compute module
Removing the main board of a compute module
Installing a compute module and its main board
Verifying the replacement
Replacing access panels
Replacing a compute module access panel
Replacing the chassis access panel
Replacing a power supply
Replacing air baffles
Replacing air baffles in a compute module
Replacing the power supply air baffle
Replacing a riser card air baffle
Replacing a riser card and a PCIe module
Replacing the riser card and PCIe module in a compute module
Replacing a riser card and PCIe module at the server rear
Replacing a storage controller
Guidelines
Preparing for replacement
Procedure
Verifying the replacement
Replacing the power fail safeguard module for a storage controller
Replacing a GPU module
Replacing the GPU module in a compute module
Replacing a GPU module at the server rear
Replacing an Ethernet adapter
Replacing an mLOM Ethernet adapter
Replacing a PCIe Ethernet adapter
Replacing an M.2 transfer module and a PCIe M.2 SSD
Replacing the M.2 transfer module and a PCIe M.2 SSD in a compute module
Replacing an M.2 transfer module and a PCIe M.2 SSD at the server rear
Replacing an SD card
Replacing the dual SD card extended module
Replacing an NVMe SSD expander module
Replacing the NVMe VROC module
Replacing a fan module
Replacing a processor
Guidelines
Prerequisites
Removing a processor
Installing a processor
Verifying the replacement
Replacing a DIMM
Replacing the system battery
Removing the system battery
Installing the system battery
Verifying the replacement
Replacing drive backplanes
Replacing the management module
Removing the management module
Installing the management module
Replacing the PDB
Removing the PDB
Installing the PDB
Replacing the midplane
Removing the midplane
Installing the midplane
Replacing the diagnostic panel
Replacing chassis ears
Replacing the TPM/TCM
Connecting internal cables
Connecting drive cables
Connecting drive cables in compute modules
Storage controller cabling in riser cards at the server rear
Connecting the flash card on a storage controller
Connecting the GPU power cord
Connecting the NCSI cable for a PCIe Ethernet adapter
Connecting the front I/O component cable from the right chassis ear
Connecting the cable for the front VGA and USB 2.0 connectors on the left chassis ear
Maintenance
Guidelines
Maintenance tools
Maintenance tasks
Observing LED status
Monitoring the temperature and humidity in the equipment room
Examining cable connections
Technical support
Appendix A  Server specifications
Technical specifications
Components
Front panel
Front panel view of the server
Front panel view of a compute module
LEDs and buttons
Ports
Rear panel
Rear panel view
LEDs
Ports
Main board of a compute module
Main board components
DIMM slots
Management module
Management module components
System maintenance switches
PDB
Appendix B  Component specifications
About component model names
Software compatibility
Processors
DIMMs
DRAM specifications
DCPMM specifications
DRAM DIMM rank classification label
HDDs and SSDs
Drive specifications
Drive LEDs
Drive configurations and numbering
PCIe modules
Storage controllers
NVMe SSD expander modules
GPU modules
PCIe Ethernet adapters
FC HBAs
mLOM Ethernet adapters
Riser cards
Riser card guidelines
RS-FHHL-G3
RS-GPU-R6900-G3
RS-4*FHHL-G3
RS-6*FHHL-G3-1
RS-6*FHHL-G3-2
Riser card and system board port mapping relationship
Fan modules
Air baffles
Compute module air baffles
Power supply air baffle
Rear riser card air baffles
Power supplies
800 W power supply
800 W high-voltage power supply
1200 W power supply
1600 W power supply
800 W –48VDC power supply
850 W high-efficiency Platinum power supply
Expander modules
Diagnostic panels
Diagnostic panel specifications
Diagnostic panel view
LEDs
Fiber transceiver modules
Storage options other than HDDs and SSDs
NVMe VROC modules
TPM/TCM modules
Security bezels, slide rail kits, and CMA
Appendix C  Managed hot removal of NVMe drives
Performing a managed hot removal in Windows
Prerequisites
Procedure
Performing a managed hot removal in Linux
Prerequisites
Performing a managed hot removal from the CLI
Performing a managed hot removal from the Intel® ASM Web interface
Appendix D  Environment requirements
About environment requirements
General environment requirements
Operating temperature requirements
Appendix E  Product recycling
Appendix F  Glossary
Appendix G  Acronyms


Safety information

Safety sign conventions

To avoid bodily injury or damage to the server or its components, make sure you are familiar with the safety signs on the server chassis or its components.

Table 1 Safety signs

Sign

Description

Circuit or electricity hazards are present. Only H3C authorized or professional server engineers are allowed to service, repair, or upgrade the server.

WARNING!

To avoid bodily injury or damage to circuits, do not open any components marked with the electrical hazard sign unless you have authorization to do so.

Electrical hazards are present. Field servicing or repair is not allowed.

WARNING!

To avoid bodily injury, do not open any components with the field-servicing forbidden sign in any circumstances.

The surface or component might be hot and present burn hazards.

WARNING!

To avoid being burnt, allow hot surfaces or components to cool before touching them.

The server or component is heavy and requires more than one person to carry or move.

WARNING!

To avoid bodily injury or damage to hardware, do not move a heavy component alone. In addition, observe local occupational health and safety requirements and guidelines for manual material handling.

The server is powered by multiple power supplies.

WARNING!

To avoid bodily injury from electrical shocks, make sure you disconnect all power supplies if you are performing offline servicing.

 

Power source recommendations

Power instability or outages might cause data loss, service disruption, or, in the worst case, damage to the server.

To protect the server from unstable power or power outages, use uninterruptible power supplies (UPSs) to power the server.

Installation safety recommendations

To avoid bodily injury or damage to the server, read the following information carefully before you operate the server.

General operating safety

To avoid bodily injury or damage to the server, follow these guidelines when you operate the server:

·          Only H3C authorized or professional server engineers are allowed to install, service, repair, operate, or upgrade the server.

·          Make sure all cables are correctly connected before you power on the server.

·          Place the server on a clean, stable table or floor for servicing.

·          To avoid being burnt, allow the server and its internal modules to cool before touching them.

Electrical safety

WARNING!

If you put the server in standby mode (system power LED steady amber) with the power on/standby button on the front panel, the power supplies continue to supply power to some circuits in the server. To remove all power for servicing safety, first press the button until the system power LED turns steady amber, and then remove all power cords from the server.

 

To avoid bodily injury or damage to the server, follow these guidelines:

·          Always use the power cords that came with the server.

·          Do not use the power cords that came with the server for any other devices.

·          Power off the server when installing or removing any components that are not hot swappable.

Rack mounting recommendations

To avoid bodily injury or damage to the equipment, follow these guidelines when you rack mount a server:

·          Mount the server in a standard 19-inch rack.

·          Make sure the leveling jacks are extended to the floor and the full weight of the rack rests on the leveling jacks.

·          Couple the racks together in multi-rack installations.

·          Load the rack from the bottom to the top, with the heaviest hardware unit at the bottom of the rack.

·          Get help to lift and stabilize the server during installation or removal, especially when the server is not fastened to the rails. As a best practice, a minimum of four people are required to safely load or unload a rack. A fifth person might be required to help align the server if the server is installed higher than chest level.

·          For rack stability, make sure only one unit is extended at a time. A rack might become unstable if more than one server unit is extended.

·          Make sure the rack is stable when you operate a server in the rack.

·          To maintain correct airflow and avoid thermal damage to the server, use blanks to fill empty rack units.

ESD prevention

Electrostatic charges that build up on people and tools might damage or shorten the lifespan of boards, the midplane, and electrostatic-sensitive components.

Preventing electrostatic discharge

To prevent electrostatic damage, follow these guidelines:

·          Transport or store the server with the components in antistatic bags.

·          Keep the electrostatic-sensitive components in the antistatic bags until they arrive at an ESD-protected area.

·          Place the components on a grounded surface before removing them from their antistatic bags.

·          Avoid touching pins, leads, or circuitry.

·          Make sure you are reliably grounded when touching an electrostatic-sensitive component or assembly.

Grounding methods to prevent electrostatic discharge

The following are grounding methods that you can use to prevent electrostatic discharge:

·          Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

·          Take adequate personal grounding measures, including wearing antistatic clothing, static dissipative shoes, and antistatic gloves.

·          Use conductive field service tools.

·          Use a portable field service kit with a folding static-dissipating work mat.

Cooling performance

Poor cooling performance might result from improper airflow and poor ventilation and might cause damage to the server.

To ensure good ventilation and proper airflow, follow these guidelines:

·          Install blanks if the following module slots are empty:

¡  Compute module slots.

¡  Drive bays.

¡  Rear riser card bays.

¡  PCIe slots.

¡  Power supply slots.

·          Do not block the ventilation openings in the server chassis.

·          To avoid thermal damage to the server, do not operate the server for long periods in any of the following conditions:

¡  Access panel open or uninstalled.

¡  Air baffles uninstalled.

¡  Riser cards uninstalled.

¡  PCIe slots, drive bays, or power supply slots empty.

·          If the server is stacked in a rack with other devices, make sure there is a minimum clearance of 2 mm (0.08 in) below and above the server.

Battery safety

The server's management module contains a system battery, which is designed with a lifespan of 5 to 10 years.

If the server no longer automatically displays the correct date and time, you might need to replace the battery. When you replace the battery, follow these safety guidelines:

·          Do not attempt to recharge the battery.

·          Do not expose the battery to a temperature higher than 60°C (140°F).

·          Do not disassemble, crush, puncture, short external contacts, or dispose of the battery in fire or water.

·          Dispose of the battery at a designated facility. Do not throw the battery away together with other wastes.


Preparing for installation

Prepare a rack that meets the rack requirements and plan an installation site that meets the requirements of space and airflow, temperature, humidity, equipment room height, cleanliness, and grounding.

Rack requirements

IMPORTANT:

As a best practice, to avoid interference with the server chassis, install power distribution units (PDUs) with the outputs facing backward. If you install PDUs with the outputs facing the server, perform an onsite survey to make sure the cables do not affect the server rear.

 

The server is 4U high. The rack for installing the server must meet the following requirements:

·          A standard 19-inch rack.

·          A clearance of more than 50 mm (1.97 in) between the rack front posts and the front rack door.

·          A minimum depth of 1200 mm (47.24 in) as a best practice. For installation limits for different rack depths, see Table 2.

Table 2 Installation limits for different rack depths

Rack depth

Installation limits

1000 mm (39.37 in)

·         H3C cable management arm (CMA) is not supported.

·         A clearance of 60 mm (2.36 in) is reserved from the server rear to the rear rack door for cabling.

·         The slide rails and PDUs might hinder each other. Perform an onsite survey to determine the PDU installation location and select the proper PDUs. If the PDUs unavoidably hinder the installation and movement of the slide rails, use another method to support the server, such as a tray.

1100 mm (43.31 in)

Make sure the CMA does not hinder PDU installation at the server rear before installing the CMA. If the CMA hinders PDU installation, use a deeper rack or change the installation locations of PDUs.

1200 mm (47.24 in)

Make sure the CMA does not hinder PDU installation or cabling. If the CMA hinders PDU installation or cabling, change the installation locations of PDUs.

For detailed installation suggestions, see Figure 1.

 

Figure 1 Installation suggestions for a 1200 mm deep rack (top view)

(1) 1200 mm (47.24 in) rack depth

(2) A minimum of 50 mm (1.97 in) between the rack front posts and the front rack door

(3) 830 mm (32.68 in) between the rack front posts and the rear of the chassis, including power supply handles at the server rear (not shown in the figure)

(4) 830 mm (32.68 in) server depth, including chassis ears

(5) 950 mm (37.40 in) between the front rack posts and the CMA

(6) 860 mm (33.86 in) between the front rack posts and the rear ends of the slide rails

 

Installation site requirements

Space and airflow requirements

For convenient maintenance and heat dissipation, make sure the following requirements are met:

·          The passage for server transport is a minimum of 1500 mm (59.06 in) wide.

·          A minimum clearance of 1200 mm (47.24 in) is reserved from the front of the rack to the front of another rack or row of racks.

·          A minimum clearance of 800 mm (31.50 in) is reserved from the back of the rack to the rear of another rack or row of racks.

·          A minimum clearance of 1000 mm (39.37 in) is reserved from the rack to any wall.

·          The air intake and outlet vents of the server are not blocked.

·          The front and rear rack doors are adequately ventilated to allow ambient room air to enter the cabinet and allow the warm air to escape from the cabinet.

·          The air conditioner in the equipment room provides sufficient air flow for heat dissipation of devices in the room.

Figure 2 Airflow through the server

 (1) to (8) Directions of the airflow into the chassis and power supplies

(9) to (10) Directions of the airflow out of the chassis

 

Temperature, humidity, and altitude requirements

To ensure correct operation of the server, make sure the room temperature, humidity, and altitude meet the requirements described in "Appendix D  Environment requirements."

Cleanliness requirements

Buildup of mechanically active substances on the chassis might result in electrostatic adsorption, which causes poor contact of metal components and contact points. In the worst case, electrostatic adsorption can cause communication failure.

Table 3 Mechanically active substance concentration limit in the equipment room

Substance

Particle diameter

Concentration limit

Dust particles

≥ 5 µm

≤ 3 × 10⁴ particles/m³

(No visible dust on desk in three days)

Dust (suspension)

≤ 75 µm

≤ 0.2 mg/m³

Dust (sedimentation)

75 µm to 150 µm

≤ 1.5 mg/(m²·h)

Sand

≥ 150 µm

≤ 30 mg/m³

 

The equipment room must also meet limits on salts, acids, and sulfides to eliminate corrosion and premature aging of components, as shown in Table 4.

Table 4 Harmful gas limits in an equipment room

Gas

Maximum concentration (mg/m³)

SO2

0.2

H2S

0.006

NO2

0.04

NH3

0.05

Cl2

0.01

 

Grounding requirements

Correctly connecting the server grounding cable is crucial to lightning protection, anti-interference, and ESD prevention. The server can be grounded through the grounding wire of the power supply system and no external grounding cable is required.

Installation tools

Table 5 lists the tools that you might use during installation.

Table 5 Installation tools

Picture

Name

Description

T25 Torx screwdriver

For captive screws inside chassis ears.

T30 Torx screwdriver (Electric screwdriver)

For captive screws on processor heatsinks.

T15 Torx screwdriver (shipped with the server)

For replacing the management module.

T10 Torx screwdriver (shipped with the server)

For screws on PCIe modules.

Flat-head screwdriver

For replacing processors or the management module.

Phillips screwdriver

For screws on PCIe M.2 SSDs.

Cage nut insertion/extraction tool

For insertion and extraction of cage nuts in rack posts.

Diagonal pliers

For clipping insulating sleeves.

Tape measure

For distance measurement.

Multimeter

For resistance and voltage measurement.

ESD wrist strap

For ESD prevention when you operate the server.

Antistatic gloves

For ESD prevention when you operate the server.

Antistatic clothing

For ESD prevention when you operate the server.

Ladder

For high-place operations.

Interface cable (such as an Ethernet cable or optical fiber)

For connecting the server to an external network.

Monitor (such as a PC)

For displaying the output from the server.

 


Installing or removing the server

Installing the server

As a best practice, install hardware options to the server (if needed) before installing the server in the rack. For more information about how to install hardware options, see "Installing hardware options."

Installing the chassis rails and slide rails

Install the chassis rails to the server and the slide rails to the rack. For information about installing the rails, see the installation guide shipped with the rails.

Rack-mounting the server

WARNING!

To avoid bodily injury, slide the server into the rack with caution, because the slide rails might squeeze your fingers.

 

1.        Remove the screws from both sides of the server, as shown in Figure 3.

Figure 3 Removing the screws from both sides of the server

 

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        (Optional.) Remove the compute modules if the server is too heavy for mounting. For more information, see "Removing a compute module."

4.        Install the chassis handles. For more information, see the labels on the handles.

5.        Lift the server and slide the server into the rack along the slide rails, as shown in Figure 4.

Remove the chassis handles as you slide the server into the rack. For more information, see the labels on the handles.

Figure 4 Rack-mounting the server

 

6.        Secure the server, as shown in Figure 5:

a.    Push the server until the chassis ears are flush against the rack front posts, as shown by callout 1.

b.    Unlock the latches of the chassis ears, as shown by callout 2.

c.    Fasten the captive screws inside the chassis ears and lock the latches, as shown by callout 3.

Figure 5 Securing the server

 

7.        Install the removed compute modules. For more information, see "Installing a compute module."

 

CAUTION:

To avoid errors such as drive identification failure, make sure each compute module is installed in the correct slot. To determine the correct slot, see the compute module label on the module's package.

 

8.        Install the removed security bezel. For more information, see "Installing the security bezel."

(Optional) Installing the CMA

Install the CMA if the server is shipped with the CMA. For information about how to install the CMA, see the installation guide shipped with the CMA.

Connecting external cables

Cabling guidelines

WARNING!

To avoid electric shock, fire, or damage to the equipment, do not connect telephone or telecommunications equipment to the RJ-45 Ethernet ports on the server.

 

·          For heat dissipation, make sure no cables block the inlet or outlet air vents of the server.

·          To easily identify ports and connect or disconnect cables, make sure the cables do not cross one another.

·          Label the cables for easy identification.

·          Coil and secure unused cables at an appropriate position on the rack.

·          To avoid damage to cables when you extend the server out of the rack, do not route the cables too tightly if you use the CMA.

Connecting a mouse, keyboard, and monitor

About this task

Perform this task before you configure the BIOS, HDM, iFIST, or RAID on the server or enter the operating system of the server.

The server provides a maximum of two VGA connectors for connecting a monitor.

·          One on the front panel provided by the left chassis ear.

·          One on the rear panel.

 

IMPORTANT:

The two VGA connectors on the server cannot be used at the same time.

 

The server does not provide ports for standard PS2 mouse and keyboard. To connect a PS2 mouse and keyboard, you must prepare a USB-to-PS2 adapter.

Procedure

1.        Connect one plug of a VGA cable to a VGA connector on the server, and fasten the screws on the plug.

Figure 6 Connecting a VGA cable


 

2.        Connect the other plug of the VGA cable to the VGA connector on the monitor, and fasten the screws on the plug.

3.        Connect the mouse and keyboard.

¡  For a USB mouse and keyboard, directly connect the USB connectors of the mouse and keyboard to the USB connectors on the server.

¡  For a PS2 mouse and keyboard, insert the USB connector of the USB-to-PS2 adapter to a USB connector on the server. Then, insert the PS2 connectors of the mouse and keyboard into the PS2 receptacles of the adapter.

Figure 7 Connecting a PS2 mouse and keyboard by using a USB-to-PS2 adapter


 

Connecting an Ethernet cable

About this task

Perform this task before you set up a network environment or log in to the HDM management interface through the HDM network port to manage the server.

Prerequisites

Install an mLOM or PCIe Ethernet adapter. For more information, see "Installing Ethernet adapters."

Procedure

1.        Determine the network port on the server.

¡  To connect the server to the external network, use an Ethernet port on an Ethernet adapter.

¡  To log in to the HDM management interface, use the HDM dedicated network port or shared network port. For the position of the HDM network port, see "Rear panel view."

2.        Determine the type of the Ethernet cable.

Verify the connectivity of the cable by using a link tester.

If you are replacing the Ethernet cable, make sure the new cable is of the same type as the old cable or compatible with it.

3.        Label the Ethernet cable by filling in the names and numbers of the server and the peer device on the label.

As a best practice, use labels of the same kind for all cables.

If you are replacing the Ethernet cable, label the new cable with the same number as the number of the old cable.

4.        Connect one end of the Ethernet cable to the network port on the server and the other end to the peer device.

Figure 8 Connecting an Ethernet cable


 

5.        Verify network connectivity.

After powering on the server, use the ping command to test the network connectivity. If the connection between the server and the peer device fails, verify that the Ethernet cable is correctly connected. For a scripted connectivity check, see the sketch after this procedure.

6.        Secure the Ethernet cable. For information about how to secure cables, see "Securing cables."
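
The connectivity test in step 5 can also be scripted. The following is a minimal Python sketch that simply wraps the ping command; it assumes a Linux management host, and the peer address shown is a hypothetical placeholder.

```python
# Minimal connectivity-check sketch. Assumes a Linux host where the ping
# command is available; the peer address is a hypothetical placeholder.
import subprocess

def is_reachable(host: str, count: int = 4) -> bool:
    """Return True if the host answers ICMP echo requests."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    peer = "192.168.1.100"  # replace with the actual address of the peer device
    print(f"{peer} reachable: {is_reachable(peer)}")
```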

Connecting a USB device

About this task

Perform this task before you install the operating system of the server or transmit data through a USB device.

The server provides six USB connectors.

·          Five external USB connectors on the front and rear panels for connecting USB devices that are frequently installed and removed:

¡  Two USB 2.0 connectors provided by the left chassis ear on the front panel.

¡  One USB 3.0 connector provided by the right chassis ear on the front panel.

¡  Two USB 3.0 connectors on the rear panel.

·          One internal USB 3.0 connector for connecting USB devices that are rarely installed and removed.

Guidelines

Before connecting a USB device, make sure the USB device can operate correctly and then copy data to the USB device.

USB devices are hot swappable. However, power off the server before you connect a USB device to or remove a USB device from the internal USB connector.

As a best practice for compatibility, purchase H3C certified USB devices.

Connecting a USB device to the internal USB connector

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Remove the management module. For more information, see "Removing the management module."

4.        Connect the USB device. For the location of the internal USB connector, see "Management module components."

5.        Install the management module. For more information, see "Installing the management module."

6.        Reconnect the removed cables to the management module.

7.        Power on the server. For more information, see "Powering on the server."

8.        Verify that the server can identify the USB device.

If the server fails to identify the USB device, download and install the driver of the USB device. If the server still fails to identify the USB device after the driver is installed, replace the USB device.

Connecting a USB device to an external USB connector

1.        Connect the USB device.

2.        Verify that the server can identify the USB device.

If the server fails to identify the USB device, download and install the driver of the USB device. If the server still fails to identify the USB device after the driver is installed, replace the USB device.

Connecting the power cord

Guidelines

WARNING!

To avoid damage to the equipment or even bodily injury, use the power cord that ships with the server.

 

Before connecting the power cord, make sure the server and components are installed correctly.

Procedure

1.        Insert the power cord plug into the power receptacle of a power supply at the rear panel, as shown in Figure 9.

Figure 9 Connecting the power cord

 

2.        Connect the other end of the power cord to the power source, for example, the power strip on the rack.

3.        Secure the power cord to avoid unexpected disconnection of the power cord.

a.    (Optional.) If the cable clamp is positioned too near the power cord and blocks the power cord plug connection, press down the tab on the cable mount and slide the clamp backward.

Figure 10 Sliding the cable clamp backward

 

b.    Open the cable clamp, place the power cord through the opening in the cable clamp, and then close the cable clamp, as shown by callouts 1, 2, 3, and 4 in Figure 11.

Figure 11 Securing the power cord

 

c.    Slide the cable clamp forward until it is flush against the edge of the power cord plug, as shown in Figure 12.

Figure 12 Sliding the cable clamp forward

 

Securing cables

Securing cables to the CMA

For information about how to secure cables to the CMA, see the installation guide shipped with the CMA.

Securing cables to slide rails by using cable straps

You can secure cables to either left slide rails or right slide rails by using the cable straps provided with the server. As a best practice for cable management, secure cables to left slide rails.

When multiple cable straps are used in the same rack, stagger the strap location, so that the straps are adjacent to each other when viewed from top to bottom. This positioning will enable the slide rails to slide easily in and out of the rack.

To secure cables to slide rails by using cable straps:

1.        Hold the cables against a slide rail.

2.        Wrap the strap around the slide rail and loop the end of the cable strap through the buckle.

3.        Dress the cable strap to ensure that the extra length and buckle part of the strap are facing outside of the slide rail.

Figure 13 Securing cables to a slide rail


 

Removing the server from a rack

1.        Power down the server. For more information, see "Powering off the server."

2.        Disconnect all peripheral cables from the server.

3.        Extend the server from the rack, as shown in Figure 14.

a.    Open the latches of the chassis ears.

b.    Loosen the captive screws.

c.    Slide the server out of the rack, installing the chassis handles in sequence as you slide it out. For information about installing the chassis handles, see the labels on the handles.

Figure 14 Extending the server from the rack

 

4.        Place the server on a clean, stable surface.


Powering on and powering off the server

Important information

If the server is connected to external storage devices, make sure the server is the first device to power off and then the last device to power on. This restriction prevents the server from mistakenly identifying the external storage devices as faulty devices.

Powering on the server

Prerequisites

Before you power on the server, you must complete the following tasks:

·          Install the server and internal components correctly.

·          Connect the server to a power source.

Procedure

Powering on the server by pressing the power on/standby button

Press the power on/standby button to power on the server.

The server exits standby mode and supplies power to the system. The system power LED changes from steady amber to flashing green and then to steady green. For information about the position of the system power LED, see "LEDs and buttons."

Powering on the server from the HDM Web interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        In the navigation pane, select Power Manager > Power Control, and then power on the server.

For more information, see HDM online help.

Powering on the server from the remote console interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        Log in to a remote console and then power on the server.

For information about how to log in to a remote console, see HDM online help.

Configuring automatic power-on

You can configure automatic power-on from HDM or the BIOS.

To configure automatic power-on from HDM:

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        In the navigation pane, select Power Manager > Meter Power.

The meter power configuration page opens.

3.        Click the Automatic power-on tab and then select Always power on.

4.        Click Save.

To configure automatic power-on from the BIOS, set AC Restore Settings to Always Power On. For more information, see the BIOS user guide for the server.

Powering off the server

Guidelines

Before powering off the server, you must complete the following tasks:

·          Install the server and internal components correctly.

·          Back up all critical data.

·          Make sure all services have stopped or have been migrated to other servers.

Procedure

Powering off the server from its operating system

1.        Connect a monitor, mouse, and keyboard to the server.

2.        Shut down the operating system of the server.

3.        Disconnect all power cords from the server.

Powering off the server by pressing the power on/standby button

IMPORTANT:

This method forces the server to enter standby mode without properly exiting applications and the operating system. If an application stops responding, you can use this method to force a shutdown. As a best practice, do not use this method when the applications and the operating system are operating correctly.

 

1.        Press and hold the power on/standby button until the system power LED turns steady amber.

2.        Disconnect all power cords from the server.

Powering off the server from the HDM Web interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        In the navigation pane, select Power Manager > Power Control, and then power off the server.

For more information, see HDM online help.

3.        Disconnect all power cords from the server.

Powering off the server from the remote console interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        Log in to a remote console and then power off the server.

For information about how to log in to a remote console, see HDM online help.


Configuring the server

This section describes how to configure the server after the server installation is complete.

Powering on the server

1.        Power on the server. For information about the procedures, see "Powering on the server."

2.        Verify that the health LED on the front panel is steady green, which indicates that the system is operating correctly. For more information about the health LED status, see "LEDs and buttons."

Updating firmware

IMPORTANT:

Verify the hardware and software compatibility before firmware update. For information about the hardware and software compatibility, see the software release notes.

 

You can update the following firmware from FIST or HDM:

·          HDM.

·          BIOS.

·          CPLD.

·          PDBCPLD.

·          NDCPLD.

For information about the update procedures, see the firmware update guide for the server.

Deploying and registering UIS Manager

For information about deploying UIS Manager, see H3C UIS Manager Installation Guide.

For information about registering the licenses of UIS Manager, see H3C UIS Manager 6.5 License Registration Guide.


Installing hardware options

If you are installing multiple hardware options, read their installation procedures and identify similar steps to streamline the entire installation procedure.

Installing the security bezel

1.        Press the right edge of the security bezel into the groove in the right chassis ear on the server, as shown by callout 1 in Figure 15.

2.        Press the latch at the other end, close the security bezel, and then release the latch to secure the security bezel into place. See callouts 2 and 3 in Figure 15.

3.        Insert the key provided with the bezel into the lock on the bezel and lock the security bezel, as shown by callout 4 in Figure 15. Then, pull out the key and keep it safe.

 

CAUTION:

To avoid damage to the lock, hold the key down while turning it.

 

Figure 15 Installing the security bezel

 

Installing SAS/SATA drives

Guidelines

The drives are hot swappable.

If you are using the drives to create a RAID, follow these restrictions and guidelines:

·          To build a RAID (or logical drive) successfully, make sure all drives in the RAID are the same type (HDDs or SSDs) and have the same connector type (SAS or SATA).

·          For efficient use of storage, use drives that have the same capacity to build a RAID. If the drives have different capacities, the lowest capacity is used across all drives in the RAID (see the capacity sketch after this list). Whether a drive with extra capacity can be used to build other RAIDs depends on the storage controller model. The following storage controllers do not allow the use of a drive for multiple RAIDs:

¡  RAID-LSI-9361-8i(1G)-A1-X.

¡  RAID-LSI-9361-8i(2G)-1-X.

¡  RAID-LSI-9460-8i(2G).

¡  RAID-LSI-9460-8i(4G).

¡  RAID-P460-B4.

¡  HBA-H460-B1.

·          If the installed drive contains RAID information, you must clear the information before configuring RAIDs. For more information, see the storage controller user guide for the server.
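
The following Python sketch illustrates the capacity rule above. It is an illustrative estimate only; the drive sizes and RAID levels are hypothetical examples, and the actual usable capacity depends on the storage controller model and metadata overhead.

```python
# Illustrative RAID capacity estimate. Drive sizes are hypothetical; actual
# usable capacity depends on the storage controller and on-disk metadata.
def usable_capacity_gb(drive_sizes_gb, raid_level):
    """Estimate usable capacity: the smallest drive sets the size used on every member."""
    n = len(drive_sizes_gb)
    per_drive = min(drive_sizes_gb)   # extra capacity on larger drives is not used
    if raid_level == 0:
        return per_drive * n          # striping, no redundancy
    if raid_level == 1:
        return per_drive              # mirror set
    if raid_level == 5:
        return per_drive * (n - 1)    # one drive's worth of parity
    raise ValueError("RAID level not covered by this sketch")

# Example: drives of 1200 GB, 1200 GB, and 960 GB in RAID 5
print(usable_capacity_gb([1200, 1200, 960], 5))  # 1920 GB; 240 GB per larger drive is unused
```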

Procedure

1.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

2.        Press the latch on the drive blank inward with one hand, and pull the drive blank out of the slot, as shown in Figure 16.

Figure 16 Removing the drive blank

 

3.        Install the drive:

a.    Press the button on the drive panel to release the locking lever.

Figure 17 Releasing the locking lever


 

b.    Insert the drive into the slot and push it gently until you cannot push it further.

c.    Close the locking lever until it snaps into place.

Figure 18 Installing a drive

 

4.        Install the security bezel. For more information, see "Installing the security bezel."

Verifying the installation

Use the following methods to verify that the drive is installed correctly:

·          Verify the drive properties (including capacity) and state by using one of the following methods:

¡  Log in to HDM. For more information, see HDM online help.

¡  Access the BIOS. For more information, see the storage controller user guide for the server.

¡  Access the CLI or GUI of the server.

·          Observe the drive LEDs to verify that the drive is operating correctly. For more information, see "Drive LEDs."

Installing NVMe drives

Guidelines

NVMe drives support hot insertion and managed hot removal.

Only one drive can be hot inserted at a time. To hot insert multiple NVMe drives, wait a minimum of 60 seconds for the previously installed NVMe drive to be identified before hot inserting another NVMe drive.

If you are using the drives to create a RAID, follow these restrictions and guidelines:

·          Make sure the NVMe VROC module is installed before you create a RAID.

·          For efficient use of storage, use drives that have the same capacity to build a RAID. If the drives have different capacities, the lowest capacity is used across all drives in the RAID. A drive with extra capacity cannot be used to build other RAIDs.

·          If the installed drive contains RAID information, you must clear the information before configuring RAIDs. For more information, see the storage controller user guide for the server.

Procedure

1.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

2.        Remove the drive blank. For more information, see "Installing SAS/SATA drives."

3.        Install the drive. For more information, see "Installing SAS/SATA drives."

4.        Install the removed security bezel. For more information, see "Installing the security bezel."

Verifying the installation

Use the following methods to verify that the drive is installed correctly:

·          Observe the drive LEDs to verify that the drive is operating correctly. For more information, see "Drive LEDs."

·          Access the CLI or GUI of the server to verify the drive properties (including capacity) and state.

Installing power supplies

Guidelines

CAUTION:

To avoid hardware damage, do not use third-party power supplies.

 

·          The power supplies are hot swappable.

·          The server provides four power supply slots. Before powering on the server, install two power supplies to achieve 1+1 redundancy, or install four power supplies to achieve N+N redundancy (see the redundancy sketch after this list).

·          Make sure the installed power supplies are the same model. HDM performs a power supply consistency check and generates an alarm if the power supply models are different.

·          Install power supplies in power supply slots 1 through 4 in sequence.
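
The sketch below illustrates the difference between the 1+1 and N+N redundancy schemes named above, assuming identical power supplies as required. The 800 W rating is a hypothetical example; use the rating of the installed power supply model.

```python
# Redundant output capacity under the two schemes, assuming identical supplies.
def redundant_capacity_w(psu_watts: int, installed: int, scheme: str) -> int:
    """Capacity that remains available after the redundant supplies are set aside."""
    if scheme == "1+1":        # two supplies, either one can carry the load
        active = installed - 1
    elif scheme == "N+N":      # half the supplies back up the other half
        active = installed // 2
    else:
        raise ValueError("scheme not covered by this sketch")
    return psu_watts * active

print(redundant_capacity_w(800, 2, "1+1"))  # 800 W with two 800 W supplies
print(redundant_capacity_w(800, 4, "N+N"))  # 1600 W with four 800 W supplies
```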

Procedure

1.        As shown in Figure 19, remove the power supply blank from the target power supply slot.

Figure 19 Removing the power supply blank

 

2.        Align the power supply with the slot, making sure its fan is on the left.

3.        Push the power supply into the slot until it snaps into place.

Figure 20 Installing a power supply

 

4.        Connect the power cord. For more information, see "Connecting the power cord."

Verifying the installation

Use one of the following methods to verify that the power supply is installed correctly:

·          Observe the power supply LED to verify that the power supply is operating correctly. For more information about the power supply LED, see LEDs in "Rear panel."

·          Log in to HDM to verify that the power supply is operating correctly. For more information, see HDM online help.

Installing a compute module

Guidelines

·          The server supports two compute modules: compute module 1 and compute module 2. For more information, see "Front panel."

·          The server supports 24SFF and 8SFF compute modules. For more information, see "Front panel view of a compute module."

Procedure

The installation procedure is the same for 24SFF and 8SFF compute modules. This section uses an 8SFF compute module as an example.

To install a compute module:

1.        Identify the installation location of the compute module. For more information about the installation location, see "Drive configurations and numbering."

2.        Power off the server. For more information, see "Powering off the server."

3.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

4.        Press the latches at both ends of the compute module blank, and pull the blank outward, as shown in Figure 21.

Figure 21 Removing the compute module blank

 

5.        Install the compute module:

a.    Press the clips at both ends of the compute module inward to release the locking levers, as shown in Figure 22.

Figure 22 Releasing the locking levers

 

b.    Push the module gently into the slot until you cannot push it further. Then, close the locking levers at both ends to secure the module in place, as shown in Figure 23.

 

CAUTION:

To avoid module connector damage, do not use excessive force when pushing the module into the slot.

 

Figure 23 Installing the compute module

 

6.        Install the removed security bezel. For more information, see "Installing the security bezel."

7.        Connect the power cord. For more information, see "Connecting the power cord."

8.        Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the compute module is operating correctly. For more information, see HDM online help.

Installing air baffles

For more information about air baffles available for the server, see "Air baffles."

Installing the low mid air baffle or GPU module air baffle to a compute module

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the high mid air baffle. For more information, see "Replacing air baffles in a compute module."

6.        Install the low mid air baffle or GPU module air baffle.

¡  To install the low mid air baffle, align the two pin holes with the guide pins on the processor socket and the cable clamp. Then, gently press down the air baffle onto the main board of the compute module, as shown in Figure 24.

Figure 24 Installing the low mid air baffle

 

¡  To install the GPU module air baffle, align the two pin holes with the guide pins (near the compute module front panel) in the compute module. Then, gently press down the air baffle onto the main board and push it forward until you cannot push it any further, as shown in Figure 25.

Figure 25 Installing the GPU module air baffle

 

7.        Install a riser card and a PCIe module in the compute module. For more information, see "Installing a riser card and a PCIe module in a compute module."

8.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

9.        Install the compute module. For more information, see "Installing a compute module."

10.     Install the removed security bezel. For more information, see "Installing the security bezel."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Installing the GPU module air baffle to a rear riser card

The GPU module air baffle blocks GPU module installation once it is in place. Make sure GPU modules have been installed in the rear riser card before you install the GPU module air baffle.

To install the GPU module air baffle to a rear riser card:

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card, if the cables hinder air baffle installation.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Remove the riser card air baffle.

5.        Install PCIe modules to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

6.        If an installed PCIe module requires external cables, remove the air baffle panel closest to the PCIe module's installation slot so the cables can pass through.

Figure 26 Removing an air baffle panel from the GPU module air baffle

 

7.        Install the GPU module air baffle to the rear riser card. Tilt and insert the air baffle into the riser card, and then push the riser card until it snaps into place, as shown in Figure 27.

Figure 27 Installing the GPU module air baffle to the rear riser card

 

8.        Install the rear riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

9.        (Optional.) Connect external cables to the riser card.

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Installing riser cards and PCIe modules

The server provides two PCIe riser connectors and three PCIe riser bays. The three riser bays are at the server rear and each compute module provides one riser connector. For more information about the locations of the bays and connectors, see "Rear panel view" and "Main board components", respectively.

Guidelines

·          You can install a small-sized PCIe module in a large-sized PCIe slot. For example, an LP PCIe module can be installed in an FHFL PCIe slot.

·          A PCIe slot can supply power to the installed PCIe module if the maximum power consumption of the module does not exceed 75 W. If the maximum power consumption exceeds 75 W, a power cord is required.

·          Make sure the number of installed PCIe modules requiring PCIe I/O resources does not exceed eleven. For more information about PCIe modules requiring PCIe I/O resources, see "PCIe modules."

·          If a processor is faulty or absent, the corresponding PCIe slots are unavailable. For more information about the processor and PCIe slot mapping, see "Riser cards."

·          For more information about riser card installation locations, see Table 6.

Table 6 Riser card installation location

PCIe riser connector or bay

Riser card name

Available riser cards

PCIe riser connector 0 (in a compute module)

Riser card 0

·         RS-FHHL-G3

·         RS-GPU-R6900-G3 (available only for 8SFF compute modules)

PCIe riser bay 1 (at the server rear)

Riser card 1

·         RS-4*FHHL-G3

·         RS-6*FHHL-G3-1

PCIe riser bay 2 (at the server rear)

Riser card 2

RS-6*FHHL-G3-2

PCIe riser bay 3 (at the server rear)

Riser card 3

·         RS-4*FHHL-G3

·         RS-6*FHHL-G3-1

 

Installing a riser card and a PCIe module in a compute module

You can install an RS-GPU-R6900-G3 riser card only in an 8SFF compute module.

The installation method is the same for the RS-FHHL-G3 and RS-GPU-R6900-G3. This section uses the RS-FHHL-G3 as an example.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the high mid air baffle. For more information, see "Replacing air baffles in a compute module."

6.        Install the low mid air baffle. For more information, see "Installing the low mid air baffle or GPU module air baffle to a compute module."

7.        Install a PCIe module to the riser card:

a.    Hold and rotate the latch upward to open it, as shown in Figure 28.

Figure 28 Opening the latch on the riser card

 

b.    Insert the PCIe module into the slot along the guide rails and close the latch on the riser card, as shown in Figure 29.

Figure 29 Installing the PCIe module

 

c.    Connect PCIe module cables, if any, to the PCIe module.

8.        Install the riser card to the compute module, as shown in Figure 30:

a.    Align the two pin holes on the riser card with the guide pins on the main board, and then insert the riser card in the PCIe riser connector.

b.    Fasten the captive screws to secure the riser card to the main board.

Figure 30 Installing the riser card to the compute module

 

c.    Connect PCIe module cables, if any, to the drive backplane.

9.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

10.     Install the compute module. For more information, see "Installing a compute module."

11.     Install the removed security bezel. For more information, see "Installing the security bezel."

12.     Connect the power cord. For more information, see "Connecting the power cord."

13.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the PCIe module on the riser card is operating correctly. For more information, see HDM online help.

Installing riser cards and PCIe modules at the server rear

The procedure is the same for installing RS-6*FHHL-G3-1 and RS-6*FHHL-G3-2 riser cards. This section uses the RS-6*FHHL-G3-1 riser card as an example.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the riser card blank. As shown in Figure 31, hold the riser card blank with your fingers reaching into the holes on the blank and pull the blank out.

Figure 31 Removing the rear riser card blank

 

3.        Install the PCIe module to the riser card:

a.    Remove the riser card air baffle, if the PCIe module to be installed needs to connect cables. For more information, see "Replacing a riser card air baffle."

b.    Open the riser card cover. As shown by callouts 1 and 2 in Figure 32, press the two locking tabs on the riser card cover and rotate the cover outward.

c.    Pull the PCIe module blank out of the slot, as shown by callout 3.

Figure 32 Removing the PCIe module blank

 

d.    Insert the PCIe module into the PCIe slot along the guide rails, and then close the riser card cover, as shown in Figure 33.

Figure 33 Installing a PCIe module to the riser card

 

e.    Connect PCIe module cables, if any, to the PCIe module.

f.     Install the removed riser card air baffle. For more information, see "Replacing a riser card air baffle."

4.        Install the riser card to the server.

a.    Unlock the riser card. As shown in Figure 34, rotate the latch on the riser card upward to release the ejector lever.

Figure 34 Unlocking the riser card

 

b.    Install the riser card to the server. As shown in Figure 35, gently push the riser card into the bay until you cannot push it further, and then close the ejector lever to secure the riser card.

Figure 35 Installing the riser card to the server

 

5.        Connect PCIe module cables, if any.

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the PCIe module on the riser card is operating correctly. For more information, see HDM online help.

Installing storage controllers and power fail safeguard modules

A power fail safeguard module provides a flash card and a supercapacitor. When a system power failure occurs, this supercapacitor can provide power for a minimum of 20 seconds. During this interval, the storage controller transfers data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data.

Guidelines

Make sure the power fail safeguard module is compatible with its connected storage controller. For the compatibility matrix, see "Storage controllers."

The supercapacitor might have a low charge after the power fail safeguard module is installed or after the server is powered on. If the system displays that the supercapacitor has low charge, no action is required. The system will charge the supercapacitor automatically.

Install storage controllers only in PCIe slots on riser card 1.

You cannot install the HBA-H460-B1 or RAID-P460-B4 storage controller on a server installed with one of the following storage controllers:

·          HBA-LSI-9300-8i-A1-X.

·          HBA-LSI-9311-8i.

·          HBA-LSI-9440-8i.

·          RAID-LSI-9361-8i(1G)-A1-X.

·          RAID-LSI-9361-8i(2G)-1-X.

·          RAID-LSI-9460-8i(2G).

·          RAID-LSI-9460-8i(4G).

·          RAID-LSI-9460-16i(4G).

Procedure

The procedure is the same for installing storage controllers of different models. This section uses the RAID-LSI-9361-8i(1G)-A1-X storage controller as an example.

To install a storage controller:

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the rear riser card if the cables hinder storage controller installation.

3.        Remove riser card 1. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        (Optional.) Install the flash card of the power fail safeguard module to the storage controller:

 


IMPORTANT:

Skip this step if no power fail safeguard module is required or the storage controller has a built-in flash card. For information about storage controllers with a built-in flash card, see "Storage controllers."

 

a.    Install the two internal threaded studs supplied with the power fail safeguard module on the storage controller.

Figure 36 Installing the internal threaded studs

 

b.    Use screws to secure the flash card on the storage controller.

Figure 37 Installing the flash card

 

5.        Install the storage controller to the riser card:

a.    (Optional.) Connect the flash card cable (P/N 0404A0VU) to the flash card.

Figure 38 Connecting the flash card cable to the flash card

 

b.    Install the storage controller to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

c.    (Optional.) Connect the flash card cable to the riser card. For more information, see "Connecting the flash card on a storage controller."

d.    Connect the storage controller cable to the riser card. For more information, see "Storage controller cabling in riser cards at the server rear."

e.    Install the riser card air baffle. For more information, see "Replacing a riser card air baffle."

6.        Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        (Optional.) Install the supercapacitor:

a.    Remove the security bezel, if any. For more information, see "Replacing the security bezel."

b.    Remove the corresponding compute module. For more information, see "Removing a compute module." For more information about the mapping relationship between PCIe slot and compute module, see "Riser cards."

c.    Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

d.    Install the supercapacitor holder to the right air baffle. As shown in Figure 39, slide the holder gently until it snaps into place.

Figure 39 Installing the supercapacitor holder

 

e.    Connect one end of the supercapacitor cable (P/N 0404A0VT) provided with the flash card to the supercapacitor, as shown in Figure 40.

Figure 40 Connecting the supercapacitor cable

 

f.     Insert the cableless end of the supercapacitor into the holder. Pull a clip on the holder, insert the other end of the supercapacitor into the holder, and then release the clip, as shown by callouts 1, 2, and 3 in Figure 41.

g.    Connect the other end of the supercapacitor cable to supercapacitor connector 1 on the compute module main board, as shown by callout 4 in Figure 41.

Figure 41 Installing the supercapacitor and connecting the supercapacitor cable

 

h.    Install the compute module access panel. For more information, see "Replacing a compute module access panel."

i.      Install the compute module. For more information, see "Installing a compute module."

j.      Install the removed security bezel. For more information, see "Installing the security bezel."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the storage controller, flash card, and supercapacitor are operating correctly. For more information, see HDM online help.

Installing GPU modules

Guidelines

·          A riser card is required when you install a GPU module.

·          Make sure the number of installed GPU modules requiring PCIe I/O resources does not exceed eleven. For more information about GPU modules requiring PCIe I/O resources, see "GPU modules."

·          The available GPU modules and installation positions vary by riser card model and position. For more information, see "GPU module and riser card compatibility."

·          For heat dissipation purposes, if a 24SFF compute module and a GPU-P4-X or GPU-T4 GPU module are installed, do not install processors whose power exceeds 165 W. For more information about processor power, see "Processors."

Installing a GPU module in a compute module

Guidelines

You can install a GPU module only in an 8SFF compute module, and an RS-GPU-R6900-G3 riser card is required for the installation.

For the GPU module to take effect, make sure processor 2 of the compute module is in position.

Procedure

The procedure is the same for installing GPU modules GPU-P4-X, GPU-P40-X, GPU-T4, GPU-P100, GPU-V100, and GPU-V100-32G. This section uses the GPU-P100 as an example.

To install a GPU module in a compute module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the high mid air baffle. For more information, see "Replacing air baffles in a compute module."

6.        Install the GPU module air baffle to the compute module. For more information, see "Installing the low mid air baffle or GPU module air baffle to a compute module."

7.        If the GPU module is dual-slot wide, attach the support bracket provided with the GPU module to the GPU module. As shown in Figure 42, align screw holes in the support bracket with the installation holes in the GPU module. Then, use screws to attach the support bracket to the GPU module.

Figure 42 Installing the GPU module support bracket

 

8.        Install the GPU module and connect the GPU power cord:

a.    Hold and rotate the latch upward to open it. For more information, see "Installing a riser card and a PCIe module in a compute module."

b.    Connect the riser power end of the power cord (P/N 0404A0UC) to the riser card, as shown by callout 1 in Figure 43.

c.    Insert the GPU module into PCIe slot 1 along the guide rails, as shown by callout 2 in Figure 43.

d.    Connect the other end of the power cord to the GPU module and close the latch on the riser card, as shown by callouts 3 and 4 in Figure 43.

Figure 43 Installing a GPU module

 

9.        Install the riser card on PCIe riser connector 0. Align the pin holes on the riser card with the guide pins on the main board, and place the riser card on the main board. Then, fasten the captive screws to secure the riser card into place, as shown in Figure 44.

Figure 44 Installing the riser card

 

10.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

11.     Install the compute module. For more information, see "Installing a compute module."

12.     Install the removed security bezel. For more information, see "Installing the security bezel."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Installing a GPU module to a rear riser card

Guidelines

You can install GPU modules only to the riser card in riser bay 1 or 3.

To install only one GPU module to a rear riser card, install the GPU module in PCIe slot 2. To install two GPU modules to a rear riser card, install the GPU modules in PCIe slots 2 and 6.

The procedure is the same for GPU modules GPU-P4-X and GPU-T4. This section uses GPU module GPU-P4-X as an example.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card, if the cables hinder GPU module installation.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Install the GPU module to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

5.        Install the GPU module air baffle to the riser card. For more information, see "Installing the GPU module air baffle to a rear riser card."

6.        Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Connect the power cord. For more information, see "Connecting the power cord."

8.        Power on the server. For more information, see "Powering on the server."

Installing Ethernet adapters

Guidelines

You can install an mLOM Ethernet adapter only in the mLOM Ethernet adapter connector on riser card 1. For more information about the connector location, see "Riser cards." When the mLOM Ethernet adapter is installed, PCIe slot 4 on riser card 1 becomes unavailable.

To install a PCIe Ethernet adapter that supports NCSI, install it in PCIe slot 3 on riser card 1. If you install the Ethernet adapter in another slot, NCSI does not take effect.

By default, port 1 on the mLOM Ethernet adapter acts as the HDM shared network port. If only a PCIe Ethernet adapter exists and the PCIe Ethernet adapter supports NCSI, port 1 on the PCIe Ethernet adapter acts as the HDM shared network port. You can configure another port on the PCIe Ethernet adapter as the HDM shared network port from the HDM Web interface. For more information, see HDM online help.

Installing an mLOM Ethernet adapter

The procedure is the same for all mLOM Ethernet adapters. This section uses the NIC-GE-4P-360T-L3-M mLOM Ethernet adapter as an example.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from riser card 1, if the cables hinder mLOM Ethernet adapter installation.

3.        Remove riser card 1. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Install the mLOM Ethernet adapter to the riser card:

a.    Remove the screw from the mLOM Ethernet adapter slot and then remove the blank, as shown in Figure 45.

Figure 45 Removing the screw

 

b.    Open the riser card cover. For more information, see "Installing riser cards and PCIe modules at the server rear."

c.    Remove the PCIe module in PCIe slot 4 on the riser card, if a PCIe module is installed. For more information, see "Replacing a riser card and PCIe module at the server rear."

Remove the PCIe module blank, if no PCIe module is installed. For more information, see "Installing riser cards and PCIe modules at the server rear."

d.    Insert the mLOM Ethernet adapter into the slot along the guide rails, and fasten the screw to secure the Ethernet adapter into place, as shown in Figure 46. Then, close the riser card cover.

Figure 46 Installing an mLOM Ethernet adapter to the riser card

 

5.        If you have removed the riser card air baffle, install the removed riser card air baffle. For more information, see "Replacing a riser card air baffle."

6.        Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Connect network cables to the mLOM Ethernet adapter.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the mLOM Ethernet adapter is operating correctly. For more information, see HDM online help.

Installing a PCIe Ethernet adapter

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the target riser card if the cables hinder PCIe Ethernet adapter installation.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Install the PCIe Ethernet adapter to the riser card. For more information, see "Installing riser cards and PCIe modules."

5.        (Optional.) If the PCIe Ethernet adapter supports NCSI, connect the NCSI cable from the PCIe Ethernet adapter to the NCSI connector on the riser card. For more information about the NCSI connector location, see "Riser cards." For more information about the cable connection method, see "Connecting the NCSI cable for a PCIe Ethernet adapter."

6.        If you have removed the riser card air baffle, install the removed riser card air baffle. For more information, see "Replacing a riser card air baffle."

7.        Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."

8.        Connect network cables to the PCIe Ethernet adapter.

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the PCIe Ethernet adapter is operating correctly. For more information, see HDM online help.

Installing PCIe M.2 SSDs

Guidelines


CAUTION:

To avoid thermal damage to processors, do not install PCIe M.2 SSDs to a 24SFF compute module if an 8180, 8180M, 8168, 6154, 6146, 6144, or 6244 processor is installed in the compute module.

 

An M.2 transfer module and a riser card are required to install PCIe M.2 SSDs.

To ensure high availability, install two PCIe M.2 SSDs of the same model.

You can install a maximum of two PCIe M.2 SSDs on an M.2 transfer module. The installation procedure is the same for the two SSDs.

Installing a PCIe M.2 SSD in a compute module

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the high mid air baffle. For more information, see "Replacing air baffles in a compute module."

6.        Install the low mid air baffle. For more information, see "Installing the low mid air baffle or GPU module air baffle to a compute module."

7.        Install the PCIe M.2 SSD to the M.2 transfer module:

a.    Install the internal threaded stud supplied with the transfer module onto the transfer module, as shown in Figure 47.

Figure 47 Installing the internal threaded stud

 

b.    Insert the connector of the SSD into the socket, and push down the other end of the SSD. Then, fasten the screw provided with the transfer module to secure the SSD into place, as shown in Figure 48.

Figure 48 Installing a PCIe M.2 SSD to the M.2 transfer module

 

8.        Install the M.2 transfer module to the riser card. For more information, see "Installing a riser card and a PCIe module in a compute module."

9.        Install the riser card to the compute module. For more information, see "Installing a riser card and a PCIe module in a compute module."

10.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

11.     Install the compute module. For more information, see "Installing a compute module."

12.     Install the removed security bezel. For more information, see "Installing the security bezel."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Installing a PCIe M.2 SSD at the server rear

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card, if any.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Install the PCIe M.2 SSD to the M.2 transfer module. For more information, see "Installing a PCIe M.2 SSD in a compute module."

5.        Install the M.2 transfer module to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

6.        Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Reconnect the external cables to the riser card.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Installing SD cards

Guidelines

To achieve 1+1 redundancy and avoid storage space waste, install two SD cards with the same capacity as a best practice.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Remove the management module. For more information, see "Removing the management module."

4.        Orient the SD card with its gold plating facing the dual SD card extended module and insert the SD card into the slot, as shown in Figure 49.

Figure 49 Installing an SD card

 

5.        Install the extended module to the management module. Align the two blue clips on the extended module with the bracket on the management module, and slowly insert the extended module downward until it snaps into place, as shown in Figure 50.

Figure 50 Installing the dual SD card extended module

 

6.        Install the management module. For more information, see "Installing the management module."

7.        Reconnect the removed cables to the management module.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Installing an NVMe SSD expander module

Guidelines

A riser card in a compute module is required when you install an NVMe SSD expander module.

An NVMe SSD expander module is required only when NVMe drives are installed. For configurations that require an NVMe expander module, see "Drive configurations and numbering."

Procedure

The procedure is the same for installing a 4-port NVMe SSD expander module and an 8-port NVMe SSD expander module. This section uses a 4-port NVMe SSD expander module as an example.

To install an NVMe SSD expander module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the high mid air baffle. For more information, see "Replacing air baffles in a compute module."

6.        Install the low mid air baffle. For more information, see "Installing the low mid air baffle or GPU module air baffle to a compute module."

7.        Connect the four NVMe data cables to the NVMe SSD expander module, as shown in Figure 51.

Make sure you connect the ports on the module with the correct NVMe data cable. For more information, see "Connecting drive cables."

Figure 51 Connecting an NVMe data cable to the NVMe SSD expander module

 

8.        Install the NVMe SSD expander module to the compute module by using a riser card. For more information, see "Installing riser cards and PCIe modules."

9.        Connect the NVMe data cables to the drive backplane. For more information, see "Connecting drive cables in compute modules."

Make sure you connect the ports on the drive backplane with the correct NVMe data cable. For more information, see "Connecting drive cables."

10.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

11.     Install the compute module. For more information, see "Installing a compute module."

12.     Install the removed security bezel. For more information, see "Installing the security bezel."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Installing the NVMe VROC module

1.        Identify the NVMe VROC module connector on the management module. For more information, see "Management module components."

2.        Power off the server. For more information, see "Powering off the server."

3.        Disconnect all the cables from the management module.

4.        Remove the management module. For more information, see "Removing the management module."

5.        Insert the NVMe VROC module onto the NVMe VROC module connector on the management module, as shown in Figure 52.

Figure 52 Installing the NVMe VROC module

 

6.        Install the management module. For more information, see "Installing the management module."

7.        Connect the power cord. For more information, see "Connecting the power cord."

8.        Power on the server. For more information, see "Powering on the server."

Installing a drive backplane

Guidelines

The installation locations of drive backplanes vary by drive configuration. For more information, see "Drive configurations and numbering."

When installing a 4SFF NVMe drive backplane, paste the stickers provided with the backplane over the drive number marks on the front panel of the corresponding compute module. This helps users identify NVMe drive bays. Make sure the numbers on the stickers correspond to the numbers on the front panel of the compute module. For more information about drive numbers, see "Drive configurations and numbering."

Procedure

The procedure is the same for installing a 4SFF drive backplane and a 4SFF NVMe drive backplane. This section uses a 4SFF drive backplane as an example.

To install a 4SFF drive backplane:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the air baffles that might hinder the installation in the compute module. For more information, see "Replacing air baffles in a compute module."

6.        Install the drive backplane. Place the backplane against the slot and fasten the captive screw on the backplane, as shown in Figure 53.

Figure 53 Installing the 4SFF drive backplane

 

7.        Connect cables to the drive backplane. For more information, see "Connecting drive cables in compute modules."

8.        Install the removed air baffles. For more information, see "Replacing air baffles in a compute module."

9.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

10.     Install drives. For more information, see "Installing SAS/SATA drives."

11.     Install the compute module. For more information, see "Installing a compute module."

12.     Install the removed security bezel. For more information, see "Installing the security bezel."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Installing a diagnostic panel

Guidelines

You can install a diagnostic panel in compute module 1.

Install the diagnostic panel in the last drive bay of drive cage bay 1 if drive cage bay 2 is not installed with a drive backplane. In all other cases, install the diagnostic panel in the last drive bay of drive cage bay 2. For more information about the locations, see "Front panel view of a compute module."

Identify the diagnostic panel cable before you install the diagnostic panel. The P/N for the cable is 0404A0SP.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the drive or blank in the slot for the installation. For more information about removing a drive, see "Replacing a SAS/SATA drive." For more information about removing a drive blank, see "Installing SAS/SATA drives."

4.        Install the diagnostic panel:

a.    Connect the diagnostic panel cable (P/N 0404A0SP), as shown in Figure 54.

Figure 54 Connecting the diagnostic panel cable

 

b.    Push the diagnostic panel into the slot until it snaps into place, as shown in Figure 55.

Figure 55 Installing the diagnostic panel

 

5.        Install the removed security bezel. For more information, see "Installing the security bezel."

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Installing processors

Guidelines

·          To avoid damage to the processors or main board, only H3C-authorized personnel and professional server engineers are allowed to install a processor.

·          Make sure the processors are the same model if multiple processors are installed.

·          The pins in the processor socket are very fragile. Make sure a processor socket cover is installed on an empty processor socket.

·          To avoid ESD damage, put on an ESD wrist strap before performing this task, and make sure the wrist strap is reliably grounded.

·          To ensure good ventilation of the processors, do not install any PCIe modules in a 24SFF compute module installed with 8180, 8180M, 8168, 6154, 6146, 6144, or 6244 processors.

·          For the server to operate correctly, make sure processor 1 of compute module 1 is always in position.

·          The server provides four processor sockets for processor installation. Identify the installation locations of processors as shown in Table 7.

Table 7 Processor installation locations

Number of processors     Installation locations

1                        Socket 1 of compute module 1.

2                        ·  For processors of model 5xxx, install a processor in socket 1 of compute module 1 and the other in socket 1 of compute module 2.
                         ·  For processors of model 6xxx or 8xxx, install both processors in compute module 1.

4                        Install the processors in all sockets.

 

·          For the location of compute modules, see "Front panel view of the server." For the location of processor sockets, see "Main board components."

Procedure

1.        Back up all server data.

2.        Power off the server. For more information, see "Powering off the server."

3.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

4.        Remove the compute module. For more information, see "Removing a compute module."

5.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

6.        Remove the air baffles that might hinder the installation in the compute module. For more information, see "Replacing air baffles in a compute module."

7.        Install a processor onto the retaining bracket, as shown in Figure 56:

 


CAUTION:

To avoid damage to the processor, always hold the processor by its edges. Never touch the gold contacts on the processor bottom.

 

a.    As shown by callout 1, align the small triangle on the processor with the alignment triangle in the retaining bracket, and align the guide pin on the bracket with the notch on the triangle side of the processor.

b.    As shown by callout 2, lower the processor gently and make sure the guide pins on the opposite side of the bracket fit snugly into notches on the processor.

Figure 56 Installing a processor onto the retaining bracket

 

8.        Install the retaining bracket onto the heatsink:

 


CAUTION:

When you remove the protective cover over the heatsink, be careful not to touch the thermal grease on the heatsink.

 

a.    Lift the cover straight up until it is removed from the heatsink, as shown in Figure 57.

Figure 57 Removing the protective cover

 

b.    Install the retaining bracket onto the heatsink. As shown in Figure 58, align the alignment triangle on the retaining bracket with the cut-off corner of the heatsink. Place the bracket on top of the heatsink so that the four corners of the bracket snap into the four corners of the heatsink.

Figure 58 Installing the processor onto the heatsink

 

9.        Remove the processor socket cover.

 


CAUTION:

·      Take adequate ESD preventive measures when you remove the processor socket cover.

·      Be careful not to touch the pins on the processor socket, which are very fragile. Damage to pins will incur main board replacement.

·      Keep the pins on the processor socket clean. Make sure the socket is free from dust and debris.

 

Hold the cover by the notches on its two edges and lift it straight up and away from the socket. Put the cover away for future use.

Figure 59 Removing the processor socket cover

 

10.     Install the retaining bracket and heatsink onto the server, as shown in Figure 60.

a.    Place the heatsink on the processor socket. Make sure the alignment triangle on the retaining bracket and the pin holes in the heatsink are aligned with the cut-off corner and guide pins of the processor socket, respectively, as shown by callout 1.

b.    Fasten the captive screws on the heatsink in the sequence shown by callouts 2 through 5.

 


CAUTION:

Use an electric screwdriver set to a torque of 1.4 Nm (12 in-lbs) when fastening the screws. Failure to do so may result in poor contact between the processor and the main board or damage to the pins in the processor socket.

 

Figure 60 Attaching the retaining bracket and heatsink to the processor socket

 

11.     Install DIMMs. For more information, see "Installing DIMMs."

12.     Install the removed air baffles. For more information, see "Replacing air baffles in a compute module."

13.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

14.     Install the compute module. For more information, see "Installing a compute module."

15.     Install the removed security bezel. For more information, see "Installing the security bezel."

16.     Connect the power cord. For more information, see "Connecting the power cord."

17.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the processor is operating correctly. For more information, see HDM online help.

Installing DIMMs

The server supports DCPMMs and DRAM DIMMs (both LRDIMMs and RDIMMs are supported). Compared with DRAM DIMMs, DCPMMs provide larger capacity and can prevent data loss in the case of unexpected system failures.

Both DCPMMs and DRAM DIMMs are referred to as DIMMs in this document, unless otherwise stated.

Guidelines


WARNING!

The DIMMs are not hot swappable.

 

You can install a maximum of 12 DIMMs for each processor, six DIMMs per memory controller. For more information, see "DIMM slots."

For a DIMM to operate at 2933 MHz, make sure the following conditions are met:

·          Use Cascade Lake processors that support 2933 MHz data rate.

·          Use DIMMs with a maximum of 2933 MHz data rate.

·          Install only one DIMM per channel.

The supported DIMMs vary by processor model, as shown in Table 8.

Table 8 Supported DIMMs of a processor

Processor                Supported DIMMs

Skylake                  Only DRAM DIMMs.

Cascade Lake             ·  Only DRAM DIMMs.
                         ·  Mixture of DCPMM and DRAM DIMMs.

Jintide-C series         Only DRAM DIMMs.

 

Guidelines for installing only DRAM DIMMs

When you install only DRAM DIMMs, follow these restrictions and guidelines:

·          Make sure all DRAM DIMMs installed on the server have the same specifications.

·          Make sure the corresponding processor is present before powering on the server.

·          Make sure the number of ranks per channel does not exceed eight.

·          For the memory mode setting to take effect, make sure the following installation requirements are met when you install DRAM DIMMs for a processor:

 

Memory mode              DIMM requirements

Independent or Mirror    ·  If only one processor is present, see Figure 61.
                         ·  If two processors of model 6xxx or 8xxx are present, see Figure 62.
                         ·  If two processors of model 5xxx are present, see Figure 63.
                         ·  If four processors are present, see Figure 64.

Partial Mirror           ·  A minimum of two DIMMs for a processor.
                         ·  This mode does not support DIMM population schemes that are not recommended in Figure 61, Figure 62, Figure 63, and Figure 64.
                         ·  If only one processor is present, see Figure 61.
                         ·  If two processors of model 6xxx or 8xxx are present, see Figure 62.
                         ·  If two processors of model 5xxx are present, see Figure 63.
                         ·  If four processors are present, see Figure 64.

Memory Rank Sparing      ·  A minimum of 2 ranks per channel.
                         ·  If only one processor is present, see Figure 61.
                         ·  If two processors of model 6xxx or 8xxx are present, see Figure 62.
                         ·  If two processors of model 5xxx are present, see Figure 63.
                         ·  If four processors are present, see Figure 64.

 

 

NOTE:

If the DIMM configuration does not meet the requirements for the configured memory mode, the system uses the default memory mode (Independent mode). For more information about memory modes, see the BIOS user guide for the server.

 

Figure 61 DIMM population schemes (one processor present)

 

Figure 62 DIMM population schemes (two processors of model 6xxx or 8xxx present)

 

Figure 63 DIMM population schemes (two processors of model 5xxx present)

 

Figure 64 DIMM population schemes (four processors present)

 

Guidelines for mixture installation of DCPMMs and DRAM DIMMs

When you install DRAM DIMMs and DCPMMs on the server, follow these restrictions and guidelines:

·          Make sure the corresponding processors are present before powering on the server.

·          Make sure all DRAM DIMMs have the same product code and all DCPMMs have the same product code.

·          As a best practice to increase memory bandwidth, install DRAM and DCPMM DIMMs in different channels.

·          A channel supports a maximum of one DCPMM.

·          As a best practice, install DCPMMs symmetrically across the two memory processing units for a processor.

·          To install both a DRAM DIMM and a DCPMM in a channel, install the DRAM DIMM in the white slot and the DCPMM in the black slot. To install only one DIMM in a channel, install the DIMM in the white slot if the DIMM is a DCPMM.
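
The following minimal sketch (Python, illustrative only and not part of the server software) checks a hypothetical per-processor DIMM population plan against the limits stated in this section: a maximum of 12 DIMMs per processor, no more than eight ranks per channel, at most one DCPMM per channel, and DRAM in the white slot with the DCPMM in the black slot when a channel holds both. The data model and example plan are assumptions made for illustration.

# Illustrative sanity check of a DIMM population plan for one processor.
# The data model (12 channels, each with a "white" and a "black" slot) and
# the example plan are assumptions, not part of this guide.

MAX_DIMMS_PER_PROCESSOR = 12   # six DIMMs per memory controller, two controllers
MAX_RANKS_PER_CHANNEL = 8
MAX_DCPMM_PER_CHANNEL = 1

def check_processor_plan(channels):
    """channels: list of per-channel dicts, for example
    {"white": ("DRAM", 2), "black": ("DCPMM", 1)}  # (DIMM type, rank count)"""
    errors = []
    dimm_count = sum(1 for ch in channels for slot in ("white", "black") if ch.get(slot))
    if dimm_count > MAX_DIMMS_PER_PROCESSOR:
        errors.append("more than 12 DIMMs installed for one processor")
    for index, ch in enumerate(channels):
        dimms = [ch[slot] for slot in ("white", "black") if ch.get(slot)]
        if sum(ranks for _type, ranks in dimms) > MAX_RANKS_PER_CHANNEL:
            errors.append(f"channel {index}: more than 8 ranks")
        if sum(1 for dimm_type, _ranks in dimms if dimm_type == "DCPMM") > MAX_DCPMM_PER_CHANNEL:
            errors.append(f"channel {index}: more than one DCPMM")
        if len(dimms) == 2 and (ch["white"][0] != "DRAM" or ch["black"][0] != "DCPMM"):
            errors.append(f"channel {index}: mixed channel must be DRAM (white) + DCPMM (black)")
    return errors

# Example: one correctly mixed channel, the remaining channels left empty.
plan = [{"white": ("DRAM", 2), "black": ("DCPMM", 1)}] + [{} for _ in range(11)]
print(check_processor_plan(plan) or "Population plan passes the listed checks.")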

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the riser card in the compute module, if any. For more information, see "Replacing the riser card and PCIe module in a compute module."

6.        Disconnect the cable between the supercapacitor and the main board, if the supercapacitor is installed over the target DIMM slot.

7.        Remove air baffles in the compute module. For more information, see "Replacing air baffles in a compute module."

8.        Install a DIMM:

a.    Identify the location of the DIMM slot.

Figure 65 DIMM slot numbering

 

b.    Open the DIMM slot latches.

c.    Align the notch on the DIMM with the connector key in the DIMM slot and press the DIMM into the socket until the latches lock the DIMM in place, as shown in Figure 66.

To avoid damage to the DIMM, do not force the DIMM into the socket when you encounter resistance. Instead, re-align the notch with the connector key, and then insert the DIMM again.

Figure 66 Installing a DIMM

 

9.        Install the removed air baffles. For more information, see "Replacing air baffles in a compute module."

10.     Reconnect the cable between the supercapacitor and the main board.

11.     Install the removed riser card in the compute module. For more information, see "Installing a riser card and a PCIe module in a compute module."

12.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

13.     Install the compute module. For more information, see "Installing a compute module."

14.     Install the removed security bezel. For more information, see "Installing the security bezel."

15.     Connect the power cord. For more information, see "Connecting the power cord."

16.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Use one of the following methods to verify that the memory size is correct:

·          Access the GUI or CLI of the server:

¡  In the GUI of a Windows OS, click the Start icon in the bottom left corner, enter msinfo32 in the search box, and then click the msinfo32 item.

¡  In the CLI of a Linux OS, execute the cat /proc/meminfo command.

·          Log in to HDM. For more information, see HDM online help.

·          Access the BIOS. For more information, see the BIOS user guide for the server.

If the memory size is incorrect, re-install or replace the DIMM.

 

 

NOTE:

It is normal that the CLI or GUI of the server OS displays a smaller memory size than the actual size if the mirror or memory rank sparing memory mode is enabled. In this situation, you can verify the memory size from HDM.
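
If you use the Linux CLI method above, the following minimal sketch (Python, illustrative only) reads MemTotal from /proc/meminfo and flags a result that is well below the capacity you expect from the installed DIMMs. The expected capacity and tolerance are assumptions to adjust for your configuration; remember that the mirror and memory rank sparing modes legitimately report a smaller size, as noted above.

# Illustrative check of the detected memory size on a Linux OS.
# EXPECTED_GIB and the 10% tolerance are assumptions; adjust them to your
# installed DIMM capacity and configuration.

EXPECTED_GIB = 256  # example only

def memtotal_gib(path="/proc/meminfo"):
    with open(path) as meminfo:
        for line in meminfo:
            if line.startswith("MemTotal:"):
                kib = int(line.split()[1])   # value is reported in kB
                return kib / (1024 * 1024)
    raise RuntimeError("MemTotal not found in /proc/meminfo")

detected = memtotal_gib()
print(f"Detected {detected:.1f} GiB of memory")
if detected < EXPECTED_GIB * 0.9:
    print("Detected size is well below the expected capacity; re-check DIMM "
          "seating and population, or verify the size from HDM.")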

 

Installing and setting up a TCM or TPM

Installation and setup flowchart

Figure 67 TCM/TPM installation and setup flowchart

 

Installing a TCM or TPM

Guidelines

·          Do not remove an installed TCM or TPM. Once installed, the module becomes a permanent part of the management module.

·          When installing or replacing hardware, H3C service providers cannot enable the TCM or TPM or the encryption technology. For security reasons, only the customer can enable these features.

·          When replacing the management module, do not remove the TCM or TPM from the management module. H3C will provide a TCM or TPM with the spare management module for management module, TCM, or TPM replacement.

·          Any attempt to remove an installed TCM or TPM from the management module breaks or disfigures the TCM or TPM security rivet. Upon locating a broken or disfigured rivet on an installed TCM or TPM, administrators should consider the system compromised and take appropriate measures to ensure the integrity of the system data.

·          H3C is not liable for blocked data access caused by improper use of the TCM or TPM. For more information, see the encryption technology feature documentation provided by the operating system.

Procedure

The installation procedure is the same for a TPM and a TCM. The following information uses a TPM to show the procedure.

To install a TPM:

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Remove the management module. For more information, see "Removing the management module."

4.        Install the TPM:

a.    Press the TPM into the TPM connector on the management module, as shown in Figure 68.

Figure 68 Installing a TPM


 

b.    Insert the rivet pin as shown by callout 1 in Figure 69.

c.    Insert the security rivet into the hole in the rivet pin and press the security rivet until it is firmly seated, as shown by callout 2 in Figure 69.

Figure 69 Installing the security rivet

 

5.        Install the management module. For more information, see "Installing the management module."

6.        Reconnect the cables to the management module.

7.        Connect the power cord. For more information, see "Connecting the power cord."

8.        Power on the server. For more information, see "Powering on the server."

Enabling the TCM or TPM in the BIOS

By default, the TCM and TPM are enabled for a server. For more information about configuring the TCM or TPM from the BIOS, see the BIOS user guide for the server.

You can log in to HDM to verify that the TCM or TPM is operating correctly. For more information, see HDM online help.
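
As an optional OS-level spot check, the following minimal sketch (Python, illustrative only, assuming a Linux operating system with the standard kernel TPM driver loaded) tests whether the OS has detected a TPM device. HDM and the BIOS remain the documented places to verify and configure the module.

# Illustrative TPM presence check on a Linux OS.
import os

def tpm_visible():
    # The Linux kernel exposes a detected TPM as /dev/tpm0 and under /sys/class/tpm.
    if os.path.exists("/dev/tpm0"):
        return True
    return os.path.isdir("/sys/class/tpm") and bool(os.listdir("/sys/class/tpm"))

if tpm_visible():
    print("The OS has detected a TPM device.")
else:
    print("No TPM device found; check the BIOS setting or verify from HDM.")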

Configuring encryption in the operating system

For more information about this task, see the encryption technology feature documentation that came with the operating system.

The recovery key/password is generated during BitLocker setup, and can be saved and printed after BitLocker is enabled. When using BitLocker, always retain the recovery key/password. The recovery key/password is required to enter Recovery Mode after BitLocker detects a possible compromise of system integrity or firmware or hardware change.

For security purposes, follow these guidelines when retaining the recovery key/password:

·          Always store the recovery key/password in multiple locations.

·          Always store copies of the recovery key/password away from the server.

·          Do not save the recovery key/password on the encrypted hard drive.

For more information about Microsoft Windows BitLocker drive encryption, visit the Microsoft website at http://technet.microsoft.com/en-us/library/cc732774.aspx.


Replacing hardware options

If you are replacing multiple hardware options, read their replacement procedures and identify similar steps to streamline the entire replacement procedure.

Replacing the security bezel

1.        Insert the key provided with the bezel into the lock on the bezel and unlock the security bezel, as shown by callout 1 in Figure 70.

 


CAUTION:

To avoid damage to the lock, hold down the key while you are turning the key.

 

2.        Press the latch at the left end of the bezel, open the security bezel, and then release the latch, as shown by callouts 2 and 3 in Figure 70.

3.        Pull the right edge of the security bezel out of the groove in the right chassis ear to remove the security bezel, as shown by callout 4 in Figure 70.

Figure 70 Removing the security bezel

 

4.        Install a new security bezel. For more information, see "Installing the security bezel."

Replacing a SAS/SATA drive

The drives are hot swappable.

To configure RAID settings after the drive is replaced, see the storage controller user guide for the server.

Procedure

1.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

2.        Observe the drive LEDs to verify that the drive is not selected by the storage controller and is not performing a RAID migration or rebuilding. For more information about drive LEDs, see "Drive LEDs."

3.        Remove the drive, as shown in Figure 71:

a.    Press the button on the drive panel to release the locking lever, as shown by callout 1.

b.    Hold the locking lever and pull the drive out of the slot, as shown by callout 2.

Figure 71 Removing a drive

 

4.        Install a new drive. For more information, see "Installing SAS/SATA drives."

5.        Install the removed security bezel, if any. For more information, see "Installing the security bezel."

Verifying the replacement

For information about the verification, see "Installing SAS/SATA drives."

Replacing an NVMe drive

The drives support hot insertion and managed hot removal.

To configure RAID settings after the drive is replaced, see the storage controller user guide for the server.

Procedure

1.        Identify the NVMe drive to be removed and perform managed hot removal for the drive. For more information about managed hot removal, see "Appendix D  Managed hot removal of NVMe drives."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the drive. For more information, see "Replacing a SAS/SATA drive."

4.        Install a new drive. For more information, see "Installing SAS/SATA drives."

5.        Install the removed security bezel, if any. For more information, see "Installing the security bezel."

Verifying the replacement

For information about the verification, see "Installing NVMe drives."

Replacing a compute module and its main board


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 


CAUTION:

To prevent electrostatic discharge, place the removed parts on an antistatic surface or in antistatic bags.

 

Removing a compute module

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module:

a.    Press the clips at both ends of the compute module inward to release the locking levers.

Figure 72 Unlocking a compute module

 

b.    As shown by callout 1 in Figure 73, press the locking levers downward to disengage the compute module from the midplane.

c.    As shown by callout 2 in Figure 73, pull the compute module out of the server.

Figure 73 Removing a compute module

 

Removing the main board of a compute module

1.        Remove the compute module. For more information, see "Removing a compute module."

2.        Remove the components in the compute module:

a.    Remove the drives. For more information, see "Replacing a SAS/SATA drive."

b.    Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

c.    Disconnect all the cables from the main board.

d.    Remove the riser card and PCIe module, if any. For more information, see "Replacing the riser card and PCIe module in a compute module."

e.    Remove the air baffles. For more information, see "Replacing air baffles in a compute module."

f.     Remove the DIMMs. For more information, see "Replacing a DIMM."

g.    Remove the processors and heatsinks. For more information, see "Removing a processor."

h.    Remove the drive backplanes. For more information, see "Replacing drive backplanes."

3.        Remove the main board:

a.    Remove the 16 screws on the main board, as shown in Figure 74.

Figure 74 Removing the screws on a main board

 

b.    Lift the cable clamp and the riser card bracket from the main board, as shown by callouts 1 and 2 in Figure 75.

c.    Lift the main board slowly out of the compute module, as shown by callout 3 in Figure 75.

Figure 75 Removing the cable clamp, riser card bracket, and main board

 

Installing a compute module and its main board

1.        Install the main board in the compute module:

a.    Align the installation holes on the main board with the screw holes on the compute module, as shown by callout 1 in Figure 76. Then, place the main board slowly in the compute module.

b.    Put the cable clamp and the riser card bracket onto the main board, as shown by callouts 2 and 3 in Figure 76. Make sure the installation holes on the cable clamp and the riser card bracket are aligned with the screw holes on the main board.

Figure 76 Installing the main board, cable clamp, and riser card bracket

 

c.    Fasten the 16 screws on the main board, as shown in Figure 77.

Figure 77 Securing the screws on a main board

 

2.        Install the components in the compute module:

a.    Install drive backplanes. For more information, see "Installing a drive backplane."

b.    Install processors and heatsinks. For more information, see "Installing processors."

c.    Install DIMMs. For more information, see "Installing DIMMs."

d.    Install air baffles in the compute module. For more information, see "Replacing air baffles in a compute module."

e.    (Optional.) Install the riser card and PCIe module. For more information, see "Installing a riser card and a PCIe module in a compute module."

f.     Connect cables to the main board.

g.    Install drives. For more information, see "Installing SAS/SATA drives."

h.    Install the compute module access panel. For more information, see "Replacing a compute module access panel."

3.        Install the compute module. For more information, see "Installing a compute module."

4.        Install the removed security bezel. For more information, see "Installing the security bezel."

5.        Connect the power cord. For more information, see "Connecting the power cord."

6.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that each component in the compute module is operating correctly and no alert is generated. For more information, see HDM online help.

Replacing access panels


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 


CAUTION:

To avoid thermal damage to the server, do not operate the server for long periods with the access panel open or uninstalled.

 

Replacing a compute module access panel

The procedure is the same for 8SFF and 24SFF compute modules. This section uses an 8SFF compute module as an example.

To replace the access panel of a compute module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel as shown in Figure 78:

a.    Press the locking tabs on both sides of the access panel and slide the access panel backward, as shown by callout 1.

b.    Lift the access panel to remove it, as shown by callout 2.

Figure 78 Removing the access panel

 

5.        Install a new compute module access panel, as shown in Figure 79:

a.    Place the access panel on top of the compute module. Make sure the pegs inside the access panel are aligned with the grooves on both sides of the compute module.

b.    Slide the access panel toward the front of the compute module until it snaps into place.

Figure 79 Installing a compute module access panel

 

6.        Install the compute module. For more information, see "Installing a compute module."

7.        Install the security bezel. For more information, see "Installing the security bezel."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing the chassis access panel

To replace the chassis access panel:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack, if the space over the server is insufficient. For more information, see "Removing the server from a rack."

3.        Remove the chassis access panel. The removal process is the same for the compute module access panel and chassis access panel. For more information, see "Replacing a compute module access panel."

4.        Install a new access panel, as shown in Figure 80.

a.    Place the access panel on top of the server, with the standouts on the inner side of the access panel aligned with the square holes at the server rear.

b.    Slide the access panel toward the server front until it snaps into place.

Figure 80 Installing the chassis access panel

 

5.        Rack-mount the server. For more information, see "Installing the server."

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Replacing a power supply

The power supplies are hot swappable.

If more than one operating power supply is present, you can replace a power supply without powering off the server.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the power cord from the power supply, as shown in Figure 81:

a.    Press the tab to disengage the ratchet from the tie mount, slide the cable clamp outward, and then release the tab, as shown by callouts 1 and 2.

b.    Open the cable clamp and remove the power cord out of the clamp, as shown by callouts 3 and 4.

c.    Unplug the power cord, as shown by callout 5.

Figure 81 Removing the power cord

 

3.        Holding the power supply by its handle and pressing the retaining latch with your thumb, pull the power supply slowly out of the slot, as shown in Figure 82.

Figure 82 Removing the power supply

 

4.        Install a new power supply. For more information, see "Installing power supplies."

5.        Connect the power cord. For more information, see "Connecting the power cord."

6.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Use the following methods to verify that the power supply has been replaced correctly:

·          Observe the power supply LED to verify that the LED is steady green or flashing green. For more information about the power supply LED, see LEDs in "Rear panel."

·          Log in to HDM to verify that the power supply status is correct. For more information, see HDM online help.
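
If your HDM firmware also exposes a standard DMTF Redfish service (not covered in this guide), a scripted check similar to the following minimal sketch (Python, illustrative only) can report power supply health. The management IP, credentials, and chassis ID are assumptions that you must adapt to your environment; the HDM web interface described above remains the documented verification method.

# Illustrative Redfish query for power supply health. The URL, chassis ID,
# and credentials are placeholders; adapt them to your environment and
# handle certificates properly in production.
import requests

HDM_URL = "https://192.168.1.100"     # HDM management IP (example)
AUTH = ("admin", "password")          # replace with real credentials

response = requests.get(f"{HDM_URL}/redfish/v1/Chassis/1/Power",
                        auth=AUTH, verify=False, timeout=10)
response.raise_for_status()
for psu in response.json().get("PowerSupplies", []):
    name = psu.get("Name", "PowerSupply")
    health = psu.get("Status", {}).get("Health", "Unknown")
    print(f"{name}: {health}")        # a healthy power supply reports "OK"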

Replacing air baffles


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Replacing air baffles in a compute module

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        If you are to remove the low mid air baffle, remove the riser card. If you are to remove a left or right air baffle, remove the supercapacitor and the supercapacitor holder attached to the air baffle.

For information about removing a riser card, see "Replacing a riser card and a PCIe module." For information about removing a supercapacitor, see "Replacing the power fail safeguard module for a storage controller."

6.        Remove the air baffle. Hold and lift the air baffle out of the compute module, as shown in Figure 83.

The removal procedure is the same for the high mid air baffle, low mid air baffle, and bilateral air baffles. The following figure uses the high mid air baffle for illustration.

Figure 83 Removing the high mid air baffle in a compute module

 

7.        Install new air baffles:

¡  To install the high mid air baffle, place the air baffle in the compute module. Make sure the two pin holes on the air baffle align with the guide pins on the main board and the cable clamp, as shown in Figure 84.

Figure 84 Installing the high mid air baffle

 

¡  To install the low mid air baffle, see "Installing the low mid air baffle or GPU module air baffle to a compute module" for more information.

¡  To install a left or right air baffle, place the air baffle in the compute module, as shown in Figure 85, with the standouts on the air baffle aligned with the notches on the side of the compute module.

The installation procedure is the same for the bilateral air baffles. The following figure uses the right air baffle for illustration.

Figure 85 Installing the right air baffle

 

8.        Install the removed riser cards, if any. For more information, see "Replacing a riser card and a PCIe module."

9.        Install the removed supercapacitor and supercapacitor holder. For more information, see "Installing storage controllers and power fail safeguard modules."

10.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

11.     Install the compute module. For more information, see "Installing a compute module."

12.     Install the removed security bezel. For more information, see "Installing the security bezel."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Replacing the power supply air baffle

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack if the space over the server is insufficient. For more information, see "Removing the server from a rack."

3.        Remove the chassis access panel. For more information, see "Replacing the chassis access panel."

4.        Remove the power supply air baffle. Hold the notches on the air baffle and lift the air baffle out of the chassis, as shown in Figure 86.

Figure 86 Removing the power supply air baffle

 

5.        Install the power supply air baffle. Place the air baffle in the chassis, as shown in Figure 87. Make sure the four standouts on the air baffle are aligned with the notches on the chassis.

Figure 87 Installing the power supply air baffle

 

6.        Install the chassis access panel. For more information, see "Replacing the chassis access panel."

7.        Mount the server in a rack. For more information, see "Installing the server."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing a riser card air baffle

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card at the server rear if the cables hinder air baffle replacement.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Remove the air baffle in the riser card. Squeeze the clips at both sides of the air baffle, and lift the air baffle out of the riser card, as shown in Figure 88.

Figure 88 Removing the riser card air baffle

 

5.        Install a new air baffle. Squeeze the clips at both sides of the air baffle, lower the air baffle into place, and then release the clips, as shown in Figure 89.

Make sure the standouts at both ends of the air baffle are aligned with the notches inside the riser card.

Figure 89 Installing a riser card air baffle

 

6.        Install the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Reconnect the external cables to the riser card.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing a riser card and a PCIe module


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Replacing the riser card and PCIe module in a compute module

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the riser card in the compute module:

a.    Disconnect all PCIe cables from the riser card.

b.    Loosen the captive screw on the riser card, and lift the riser card slowly out of the compute module, as shown in Figure 90.

Figure 90 Removing the RS-FHHL-G3 riser card

 

6.        Hold and rotate the latch upward to unlock the riser card, and then pull the PCIe module out of the slot, as shown in Figure 91.

Figure 91 Removing a PCIe module

 

7.        Install a new riser card and PCIe module. For more information, see "Installing riser cards and PCIe modules."

8.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

9.        Install the compute module. For more information, see "Installing a compute module."

10.     Install the removed security bezel. For more information, see "Installing the security bezel."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Replacing a riser card and PCIe module at the server rear

The procedure is the same for all the riser cards at the server rear. This section uses the RS-6*FHHL-G3-1 riser card as an example.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card if the cables hinder riser card replacement.

3.        Remove the riser card, as shown in Figure 92:

a.    As shown by callout 1, press the latch upward to release the ejector lever on the riser card.

b.    As shown by callout 2, rotate the ejector lever down to disengage the riser card from the midplane, and then pull the riser card out of the riser bay.

Figure 92 Removing the riser card

 

4.        Remove the PCIe module in the rear riser card:

a.    If the PCIe module has any cables, remove the riser card air baffle first. For more information, see "Replacing a riser card air baffle."

b.    Disconnect the cable between the PCIe module and the riser card, if any.

c.    Open the riser card cover. For more information, see "Installing riser cards and PCIe modules at the server rear."

d.    Remove the screw on the PCIe module, if any. Then, pull the module out of the riser card, as shown in Figure 93.

Figure 93 Removing a PCIe module in the riser card

 

5.        Install a new PCIe module to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

6.        Install the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Reconnect the external cables to the riser card, if any were disconnected.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the PCIe module in the riser card is operating correctly. For more information, see HDM online help.

Replacing a storage controller


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Guidelines

To replace the storage controller with a controller of a different model, reconfigure RAID after the replacement. For more information, see the storage controller user guide for the server.

To replace the storage controller with a controller of the same model, make sure the following configurations remain the same after replacement:

·          Storage controller operating mode.

·          Storage controller firmware version.

·          BIOS boot mode.

·          First boot option in Legacy mode.

For more information, see the storage controller user guide and the BIOS user guide for the server.

Preparing for replacement

To replace the storage controller with a controller of the same model, identify the following information before the replacement (a scripted check of the controller model and firmware version is sketched at the end of this section):

·          Storage controller location and cabling.

·          Storage controller model, operating mode, and firmware version.

·          BIOS boot mode.

·          First boot option in Legacy mode.

To replace the storage controller with a controller of a different model, back up data in drives and then clear RAID information before the replacement.
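
To record the controller model and firmware version before the replacement, you can also query HDM, assuming it implements the standard Redfish Storage schema (an assumption to confirm against your HDM version). The sketch below prints the model and firmware version of each storage controller that HDM reports; the operating mode, BIOS boot mode, and first boot option still need to be checked in the storage controller and BIOS setup utilities. The address, credentials, and system ID are placeholders.

import requests

# Minimal sketch: list storage controller models and firmware versions through
# a Redfish-style interface (assumes HDM implements the standard Storage schema).
HDM_ADDR = "https://192.168.0.100"   # placeholder HDM management address
AUTH = ("admin", "password")          # placeholder credentials

storage = requests.get(f"{HDM_ADDR}/redfish/v1/Systems/1/Storage",
                       auth=AUTH, verify=False, timeout=10).json()

for member in storage.get("Members", []):
    detail = requests.get(f"{HDM_ADDR}{member['@odata.id']}",
                          auth=AUTH, verify=False, timeout=10).json()
    for ctrl in detail.get("StorageControllers", []):
        print(f"{detail.get('Id')}: Model={ctrl.get('Model')}, "
              f"Firmware={ctrl.get('FirmwareVersion')}")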

Procedure

The replacement procedure is the same for storage controllers of different models. This section uses the RAID-LSI-9361-8i(1G)-A1-X storage controller as an example.

To replace a storage controller:

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card that holds the storage controller.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Disconnect cables from the storage controller.

5.        Remove the storage controller. For more information, see "Replacing a riser card and PCIe module at the server rear."

6.        Remove the power fail safeguard module as needed. For more information, see "Replacing the power fail safeguard module for a storage controller."

7.        Install the removed power fail safeguard module to a new storage controller as needed. For more information, see "Replacing the power fail safeguard module for a storage controller."

8.        Install the new storage controller. For more information, see "Installing storage controllers and power fail safeguard modules."

9.        Install the riser card that holds the storage controller. For more information, see "Installing riser cards and PCIe modules at the server rear."

10.     Connect the external cables to the riser card.

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the storage controller is in a correct state. For more information, see HDM online help.

Replacing the power fail safeguard module for a storage controller


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 


CAUTION:

To avoid server errors, do not replace the power fail safeguard module when a drive is performing RAID migration or rebuilding. The Fault/UID LED is off and the Present/Active LED is flashing green on a drive if the drive is performing migration or rebuilding.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card that holds the storage controller.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Disconnect cables that might hinder the replacement.

5.        Remove the storage controller. For more information, see "Replacing a riser card and PCIe module at the server rear."

6.        Remove the screws that secure the flash card on the storage controller, and then remove the flash card, as shown in Figure 94.

Figure 94 Removing the flash card on the storage controller

 

7.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

8.        Remove the compute module. For more information, see "Removing a compute module."

9.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

10.     Remove the supercapacitor, as shown in Figure 95.

a.    Disconnect the cable between the main board and the supercapacitor, as shown by callout 1.

b.    Pull the clip on the supercapacitor holder, take the supercapacitor out of the holder, and then release the clip, as shown by callouts 2 and 3.

Figure 95 Removing the supercapacitor

 

11.     Lift the retaining latch at the bottom of the supercapacitor holder, slide the holder to remove it, and then release the retaining latch, as shown in Figure 96.

Figure 96 Removing the supercapacitor holder

 

12.     Install a new power fail safeguard module. For more information, see "Installing storage controllers and power fail safeguard modules."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the flash card and the supercapacitor are in a correct state. For more information, see HDM online help.

Replacing a GPU module


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Replacing the GPU module in a compute module

The replacement procedure is the same for the GPU-P40-X, GPU-P100, and GPU-V100 GPU modules. This section uses the GPU-P100 as an example.

To replace the GPU module in a compute module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the riser card in the compute module. Loosen the captive screw on the riser card, and lift the riser card slowly out of the compute module, as shown in Figure 97.

Figure 97 Removing the riser card

 

6.        Remove the GPU module from the riser card, as shown in Figure 98:

a.    Hold and rotate the latch to the left to unlock the riser card, as shown by callout 1.

b.    Disconnect the GPU power cord from the GPU module, as shown by callout 2.

c.    Pull the GPU module out of the slot and then disconnect the GPU power cord from the riser card, as shown by callouts 3 and 4.

Figure 98 Removing a GPU module

 

7.        Install a new GPU module. For more information, see "Installing a GPU module in a compute module."

8.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

9.        Install the compute module. For more information, see "Installing a compute module."

10.     Install the removed security bezel. For more information, see "Installing the security bezel."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Replacing a GPU module at the server rear

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card, if any.

3.        Remove the rear riser card and the GPU module. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Install a new GPU module to the riser card and install the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

5.        Reconnect the external cables to the riser card.

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Replacing an Ethernet adapter


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Replacing an mLOM Ethernet adapter

The procedure is the same for mLOM Ethernet adapters of different models. This section uses the NIC-GE-4P-360T-L3-M mLOM Ethernet adapter as an example.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card that holds the mLOM Ethernet adapter.

3.        Remove the riser card. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Remove the mLOM Ethernet adapter, as shown in Figure 99:

a.    Open the riser card cover. For more information, see "Installing riser cards and PCIe modules at the server rear."

b.    Remove the screw that secures the mLOM Ethernet adapter, and then pull the Ethernet adapter out of the slot.

Figure 99 Removing an mLOM Ethernet adapter

 

5.        Install a new mLOM Ethernet adapter. For more information, see "Installing an mLOM Ethernet adapter."

6.        Install the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Reconnect the external cables to the riser card.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the mLOM Ethernet adapter is in a correct state. For more information, see HDM online help.

Replacing a PCIe Ethernet adapter


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card that holds the PCIe Ethernet adapter.

3.        Remove the riser card and the PCIe Ethernet adapter. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Install a new PCIe Ethernet adapter. For more information, see "Installing a PCIe Ethernet adapter."

5.        If you removed the riser card air baffle, reinstall it. For more information, see "Replacing a riser card air baffle."

6.        Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Reconnect the external cables to the riser card.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the PCIe Ethernet adapter is in a correct state. For more information, see HDM online help.

Replacing an M.2 transfer module and a PCIe M.2 SSD


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Replacing the M.2 transfer module and a PCIe M.2 SSD in a compute module

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the M.2 transfer module from the riser card. For more information, see "Replacing the riser card and PCIe module in a compute module."

6.        Remove the PCIe M.2 SSD:

a.    Remove the screw that secures the SSD on the transfer module. Tilt the SSD by the screw-side edge, and then pull the SSD out of the connector, as shown in Figure 100.

Figure 100 Removing a PCIe M.2 SSD

 

b.    Remove the internal threaded stud on the transfer module, as shown in Figure 101.

Figure 101 Removing the internal threaded stud

 

7.        Install a new M.2 transfer module and a new PCIe M.2 SSD. For more information, see "Installing a PCIe M.2 SSD in a compute module."

8.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

9.        Install the compute module. For more information, see "Installing a compute module."

10.     Install the removed security bezel. For more information, see "Installing the security bezel."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Replacing an M.2 transfer module and a PCIe M.2 SSD at the server rear

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect external cables from the riser card that holds the M.2 transfer module.

3.        Remove the riser card and the M.2 transfer module. For more information, see "Replacing a riser card and PCIe module at the server rear."

4.        Remove the PCIe M.2 SSD. For more information, see "Replacing the M.2 transfer module and a PCIe M.2 SSD in a compute module."

5.        Install a new M.2 transfer module and a new PCIe M.2 SSD to the riser card. For more information, see "Installing a PCIe M.2 SSD at the server rear."

6.        Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."

7.        Reconnect the external cables to the riser card.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing an SD card


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace an SD card:

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Remove the management module. For more information, see "Removing the management module."

4.        Press the SD card to release it and then pull the SD card out of the slot, as shown in Figure 102.

Figure 102 Removing an SD card

 

5.        Install a new SD card. For more information, see "Installing SD cards."

6.        Install the management module. For more information, see "Installing the management module."

7.        Reconnect the cables to the management module.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing the dual SD card extended module


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace the dual SD card extended module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Remove the management module. For more information, see "Removing the management module."

4.        Press the clip on the dual SD card extended module, as shown by callout 1 in Figure 103. Pull the module out of the connector, and then release the clip.

Figure 103 Removing the dual SD card extended module

 

5.        Remove the SD cards installed on the extended module, as shown in Figure 102.

6.        Install a new dual SD card extended module to the management module and install the removed SD cards. For more information, see "Installing SD cards."

7.        Install the management module. For more information, see "Installing the management module."

8.        Reconnect the cables to the management module.

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Replacing an NVMe SSD expander module


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The procedure is the same for replacing a 4-port NVMe SSD expander module and an 8-port NVMe SSD expander module. This section uses a 4-port NVMe SSD expander module as an example.

To replace an NVMe SSD expander module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the NVMe SSD expander module:

a.    Disconnect the cables between the expander module and the front drive backplanes.

b.    Remove the PCIe riser card that holds the NVMe SSD expander module. For more information, see "Replacing the riser card and PCIe module in a compute module."

c.    Disconnect cables from the NVMe SSD expander module, as shown in Figure 104.

Figure 104 Disconnecting cables from an NVMe SSD expander module

 

6.        Install a new NVMe SSD expander module. For more information, see "Installing an NVMe SSD expander module."

7.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

8.        Install the compute module. For more information, see "Installing a compute module."

9.        Install the removed security bezel. For more information, see "Installing the security bezel."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the NVMe expander module is in a correct state. For more information, see HDM online help.

Replacing the NVMe VROC module


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace the NVMe VROC module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Remove the management module. For more information, see "Removing the management module."

4.        Hold the ring part of the NVMe VROC module and pull the module out of the management module, as shown in Figure 105.

Figure 105 Removing the NVMe VROC module

 

5.        Install a new NVMe VROC module. For more information, see "Installing the NVMe VROC module."

6.        Install the management module. For more information, see "Installing the management module."

7.        Reconnect the cables to the management module.

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing a fan module


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 


CAUTION:

To avoid thermal damage to the server, do not operate the server for long periods with the access panel open or uninstalled.

 

The fan modules are hot swappable. If sufficient space is available for replacement, you can replace a fan module without powering off the server or removing the server from the rack. The following procedure assumes that sufficient space is not available.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the chassis access panel. For more information, see "Replacing the chassis access panel."

4.        Pinch the latches on both sides of the fan module to pull the fan module out of the slot, as shown in Figure 106.

Figure 106 Removing a fan module

 

5.        Install a new fan module. Insert the fan module into the slot, as shown in Figure 107.

Figure 107 Installing a fan module

 

6.        Install the chassis access panel. For more information, see "Replacing the chassis access panel."

7.        Rack-mount the server. For more information, see "Rack-mounting the server."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the fan module is in a correct state. For more information, see HDM online help.

Replacing a processor


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Guidelines

·          To avoid damage to a processor or a compute module main board, only H3C authorized or professional server engineers can install, replace, or remove a processor.

·          Make sure the processors on the server are the same model.

·          The pins in the processor sockets are very fragile and prone to damage. Make sure a processor socket cover is installed on an empty processor socket.

·          For the server to operate correctly, make sure processor 1 of compute module 1 is in position. For more information about processor locations, see "Main board components."

Prerequisites

To avoid ESD damage, wear an ESD wrist strap before performing this task, and make sure the wrist strap is reliably grounded.

Removing a processor

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the riser card and PCIe module in the compute module, if they hinder processor replacement. For more information, see "Replacing the riser card and PCIe module in a compute module."

6.        Remove air baffles that might hinder the replacement in the compute module. For more information, see "Replacing air baffles in a compute module."

7.        Remove the processor heatsink, as shown in Figure 108:

 


CAUTION:

Be careful not to touch the pins on the processor socket, which are very fragile. Damaged pins will require main board replacement.

 

a.    Loosen the captive screws in the same sequence as shown by callouts 1 to 4.

b.    Lift the heatsink slowly to remove it, as shown by callout 5.

Figure 108 Removing a processor heatsink

 

8.        Remove the processor retaining bracket from the heatsink, as shown in Figure 109:

a.    Insert a flat-head tool (such as a flat-head screwdriver) into the notch marked with TIM BREAKER to pry open the retaining bracket, as shown by callout 1.

b.    Press the four clips in the four corners of the bracket to release the retaining bracket.

Press the clip shown by callout 2 and the clip diagonally opposite to it outward, and press the other two clips inward, as shown by callout 3.

 

 

NOTE:

Callouts 2 and 3 show the upside-down views of the retaining bracket and the heatsink.

 

c.    Lift the retaining bracket to remove it from the heatsink, as shown by callout 4.

Figure 109 Removing the processor retaining bracket

 

9.        Separate the processor from the retaining bracket with one hand pushing down and the other hand tilting the processor, as shown in Figure 110.

Figure 110 Separating the processor from the retaining bracket

 

Installing a processor

1.        Install the processor onto the retaining bracket. For more information, see "Installing processors."

2.        Smear thermal grease onto the processor:

a.    Clean the processor and heatsink with isopropanol wipes. Allow the isopropanol to evaporate before continuing.

b.    Use the thermal grease injector to inject 0.6 ml of thermal grease onto the processor in five dots (0.12 ml per dot), as shown in Figure 111.

Figure 111 Smearing thermal grease onto the processor


 

3.        Install the retaining bracket onto the heatsink. For more information, see "Installing processors."

4.        Install the heatsink onto the server. For more information, see "Installing processors."

5.        Paste the bar code label supplied with the processor over the original processor label on the heatsink.

 


IMPORTANT:

This step is required for obtaining processor servicing from H3C.

 

6.        Install the removed air baffles in the compute module. For more information, see "Replacing air baffles in a compute module."

7.        Install the removed riser card and PCIe module in the compute module. For more information, see "Installing a riser card and a PCIe module in a compute module."

8.        Install the compute module access panel. For more information, see "Replacing a compute module access panel."

9.        Install the compute module. For more information, see "Installing a compute module."

10.     Install the removed security bezel. For more information, see "Installing the security bezel."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the processor is in a correct state. For more information, see HDM online help.

Replacing a DIMM


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

5.        Remove the riser card and PCIe module in the compute module. For more information, see "Replacing the riser card and PCIe module in a compute module."

6.        Remove air baffles in the compute module. For more information, see "Replacing air baffles in a compute module."

7.        Open the DIMM slot latches and pull the DIMM out of the slot, as shown in Figure 112.

Figure 112 Removing a DIMM

 

8.        Install a new DIMM. For more information, see "Installing DIMMs."

9.        Install the removed air baffles in the compute module. For more information, see "Replacing air baffles in a compute module."

10.     Install the removed riser card and PCIe module in the compute module. For more information, see "Installing a riser card and a PCIe module in a compute module."

11.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

12.     Install the compute module. For more information, see "Installing a compute module."

13.     Install the removed security bezel. For more information, see "Installing the security bezel."

14.     Connect the power cord. For more information, see "Connecting the power cord."

15.     Power on the server. For more information, see "Powering on the server."

During server startup, you can access the BIOS to configure the memory mode of the newly installed DIMM. For more information, see the BIOS user guide for the server.

Verifying the replacement

For information about the verification method, see "Installing DIMMs."

Replacing the system battery


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The server comes with a system battery (Panasonic BR2032) installed on the management module, which supplies power to the real-time clock and has a lifespan of 5 to 10 years. If the server no longer automatically displays the correct date and time, you might need to replace the battery. As a best practice, use the Panasonic BR2032 battery to replace the old one.

 

 

NOTE:

The BIOS will restore its default settings after the replacement. You must reconfigure the BIOS to obtain the desired settings, including the system date and time. For more information, see the BIOS user guide for the server.

 

Removing the system battery

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Remove the management module. For more information, see "Removing the management module."

4.        Pinch the system battery by its top edge and lift the battery out of the battery holder slowly, as shown in Figure 113.

Figure 113 Removing the system battery

 

 

NOTE:

To protect the environment, dispose of the used-up system battery at a designated site.

 

Installing the system battery

1.        Insert the system battery into the system battery holder on the management module, as shown in Figure 114.

Figure 114 Installing the system battery

 

2.        Install the management module. For more information, see "Installing the management module."

3.        Connect cables to the management module.

4.        Connect the power cord. For more information, see "Connecting the power cord."

5.        Power on the server. For more information, see "Powering on the server."

6.        Access the BIOS to reconfigure the system date and time. For more information, see the BIOS user guide for the server.

Verifying the replacement

Verify that the system date and time are displayed correctly on HDM or the connected monitor.

Replacing drive backplanes


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The procedure is the same for 4SFF, 4SFF NVMe, and 24SFF drive backplanes. This section uses a 4SFF drive backplane as an example.

To replace the drive backplane:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the compute module. For more information, see "Removing a compute module."

4.        Remove the drives attached to the backplane. For more information, see "Replacing a SAS/SATA drive."

5.        Remove the compute module access panel. For more information, see "Replacing a compute module access panel."

6.        Remove the air baffles that might hinder the replacement in the compute module. For more information, see "Replacing air baffles in a compute module."

7.        Disconnect all the cables from the backplane.

8.        Loosen the captive screw on the backplane, slowly lift the backplane, and then pull it out of the compute module, as shown in Figure 115.

Figure 115 Removing a 4SFF SAS/SATA drive backplane

 

9.        Install a new drive backplane. For more information, see "Installing a drive backplane."

10.     Reconnect the cables to the backplane. For more information, see "Connecting drive cables."

11.     Install the removed air baffles in the compute module. For more information, see "Replacing air baffles in a compute module."

12.     Install the compute module access panel. For more information, see "Replacing a compute module access panel."

13.     Install the removed drives. For more information, see "Installing SAS/SATA drives."

14.     Install the compute module. For more information, see "Installing a compute module."

15.     Install the removed security bezel. For more information, see "Installing the security bezel."

16.     Connect the power cord. For more information, see "Connecting the power cord."

17.     Power on the server. For more information, see "Powering on the server."

Replacing the management module


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Removing the management module

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect all the cables from the management module.

3.        Loosen the captive screw on the ejector lever of the management module, rotate the ejector lever outward, and then pull the module out of the server, as shown in Figure 116.

Figure 116 Removing the management module

 

4.        Remove the dual SD card extended module. For more information, see "Replacing the dual SD card extended module."

5.        Remove the NVMe VROC module. For more information, see "Replacing the NVMe VROC module."

Installing the management module

1.        Install the NVMe VROC module. For more information, see "Installing the NVMe VROC module."

2.        Install the dual SD card extended module. For more information, see "Installing SD cards."

3.        Install the TPM or TCM. For more information, see "Installing and setting up a TCM or TPM."

4.        Push the management module into the slot, rotate the ejector lever inward, and then fasten the captive screw on the ejector lever to secure the module into place, as shown in Figure 117.

Figure 117 Installing the management module

 

5.        Reconnect the cables to the management module.

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Replacing the PDB


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Removing the PDB

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack if the space over the server is insufficient. For more information, see "Removing the server from a rack."

3.        Remove the chassis access panel. For more information, see "Replacing the chassis access panel."

4.        Remove the power supply air baffle. For more information, see "Replacing the power supply air baffle."

5.        Remove all fan modules. For more information, see "Replacing a fan module."

6.        Remove all power supplies. For more information, see "Replacing a power supply."

7.        Remove the management module. For more information, see "Removing the management module."

8.        Remove the PDB:

a.    Disconnect cables from the PDB, as shown in Figure 118.

Figure 118 Disconnecting cables from the PDB

 

b.    Press the locking tabs on the PDB to release the ejector levers of the PDB, as shown in Figure 119.

Figure 119 Unlocking the PDB

 

c.    Pull up the extension handles on the ejector levers. Hold the handles and rotate the ejector levers downward, as shown by callouts 1 and 2 in Figure 120.

d.    Pull the PDB out of the slot, as shown by callout 3 in Figure 120.

Figure 120 Removing the PDB

 

Installing the PDB

1.        Install a new PDB:

a.    Unlock the PDB. For more information, see "Removing the PDB."

b.    Push the PDB into the slot until you cannot push it further, as shown by callout 1 in Figure 121.

c.    Close the ejector levers, as shown by callout 2 in Figure 121.

Figure 121 Installing the PDB

 

d.    Connect cables to the PDB, as shown in Figure 122.

Figure 122 Connecting cables to the PDB

 

2.        Install the management module. For more information, see "Installing the management module."

3.        Install the removed power supplies. For more information, see "Installing power supplies."

4.        Install the removed fan modules. For more information, see "Replacing a fan module."

5.        Install the power supply air baffle. For more information, see "Replacing the power supply air baffle."

6.        Install the chassis access panel. For more information, see "Replacing the chassis access panel."

7.        Rack-mount the server. For more information, see "Rack-mounting the server."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing the midplane


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Removing the midplane

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

4.        Remove compute modules. For more information, see "Removing a compute module."

5.        Remove the chassis access panel. For more information, see "Replacing the chassis access panel."

6.        Remove the power supply air baffle. For more information, see "Replacing the power supply air baffle."

7.        Remove fan modules. For more information, see "Replacing a fan module."

8.        Remove power supplies. For more information, see "Replacing a power supply."

9.        Remove the management module. For more information, see "Removing the management module."

10.     Remove the PDB. For more information, see "Removing the PDB."

11.     Remove riser cards at the server rear. For more information, see "Replacing a riser card and PCIe module at the server rear."

12.     Remove the midplane, as shown in Figure 123:

a.    Remove the eight screws that secure the midplane to the server, as shown by callout 1.

b.    Pull the midplane outward and lift the midplane out of the chassis, as shown by callout 2.

Figure 123 Removing the midplane

 

Installing the midplane

Procedure

1.        Install a midplane, as shown in Figure 124:

a.    Insert the midplane into the server along the slide rails, and push the midplane toward the server front until you cannot push it further, as shown by callout 1.

b.    Fasten the eight screws to secure the midplane into place, as shown by callout 2.

Figure 124 Installing the midplane

 

2.        Install the removed riser cards at the server rear. For more information, see "Installing riser cards and PCIe modules at the server rear."

3.        Install the PDB. For more information, see "Installing the PDB."

4.        Install the management module. For more information, see "Installing the management module."

5.        Install the removed power supplies. For more information, see "Installing power supplies."

6.        Install the removed fan modules. For more information, see "Replacing a fan module."

7.        Install the power supply air baffle. For more information, see "Replacing the power supply air baffle."

8.        Install the chassis access panel. For more information, see "Replacing the chassis access panel."

9.        Install the removed compute modules. For more information, see "Installing a compute module."

10.     Install the removed security bezel. For more information, see "Installing the security bezel."

11.     Rack-mount the server. For more information, see "Rack-mounting the server."

12.     Connect external cables to the rear riser cards as needed.

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Replacing the diagnostic panel


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace the diagnostic panel:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the diagnostic panel, as shown in Figure 125:

a.    Press the release button on the diagnostic panel, as shown by callout 1. The diagnostic panel pops out.

b.    Hold the diagnostic panel by its front edge to pull it out of the slot, as shown by callout 2.

Figure 125 Removing the diagnostic panel

 

4.        Install a new diagnostic panel. For more information, see "Installing a diagnostic panel."

5.        Install the removed security bezel. For more information, see "Replacing the security bezel."

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Replacing chassis ears


WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The procedure is the same for the left and right chassis ears. This section uses the left chassis ear as an example.

To replace the left chassis ear:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack if the space over the server is insufficient. For more information, see "Removing the server from a rack."

3.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

4.        Remove compute modules. For more information, see "Removing a compute module."

5.        Remove the chassis access panel. For more information, see "Replacing the chassis access panel."

6.        Remove the power supply air baffle. For more information, see "Replacing the power supply air baffle."

7.        Disconnect the front VGA and USB 2.0 cable from the PDB, and pull the cable out of the cable cutout as shown in Figure 126.

Figure 126 Removing the front VGA and USB 2.0 cable

 

8.        Remove the screws that secure the left chassis ear, and then pull the chassis ear until it is removed, as shown in Figure 127.

Figure 127 Removing the left chassis ear

 

9.        Install a new left chassis ear. Attach the left chassis ear to the left side of the server, and use screws to secure it into place.

10.     Insert the front VGA and USB 2.0 cable into the cable cutout in the chassis and connect the cable to the PDB.

11.     Install the power supply air baffle. For more information, see "Replacing the power supply air baffle."

12.     Install the chassis access panel. For more information, see "Replacing the chassis access panel."

13.     Install the compute modules. For more information, see "Installing a compute module."

14.     Install the removed security bezel. For more information, see "Installing the security bezel."

15.     Rack-mount the server. For more information, see "Rack-mounting the server."

16.     Connect the power cord. For more information, see "Connecting the power cord."

17.     Power on the server. For more information, see "Powering on the server."

Replacing the TPM/TCM

To avoid system damage, do not remove the installed TPM/TCM.

If the installed TPM/TCM is faulty, remove the management module, and contact H3C Support for management module and TPM/TCM replacement.


Connecting internal cables

Properly route the internal cables and make sure they are not squeezed.

Connecting drive cables

Connecting drive cables in compute modules

24SFF SAS/SATA drive cabling

Connect SAS port 1 on the 24SFF drive backplane to SAS port A1 on the main board and connect SAS port 2 to SAS port A2, as shown in Figure 128. For information about the main board layout, see "Main board components."

Figure 128 24SFF SAS/SATA drive backplane connected to the main board

(1) and (3) Power cords

(2) AUX signal cable

(4) and (5) SAS/SATA data cables

 

8SFF SAS/SATA drive cabling

Figure 129 8SFF SAS/SATA drive backplane connected to the main board

(1) and (3) AUX signal cables

(2) and (4) Power cords

(5) SAS/SATA data cable 1 (for drive cage bay 2/4)

(6) SAS/SATA data cable 2 (for drive cage bay 1/3)

 

8SFF NVMe drive cabling

To install 8SFF NVMe drives, you must install an 8-port NVMe SSD expander module to riser card 0 in the compute module.

When connecting NVMe data cables, make sure you connect the corresponding peer ports with the correct NVMe data cable, as shown in Figure 130.

Figure 130 8SFF NVMe drive backplane connected to the main board

(1) and (5) AUX signal cables

(2) and (6) Power cords

(3) and (4) NVMe data cables

 

 

NOTE:

In the figure, A1 to A4 and B1 to B4 represent data ports NVMe A1 to NVMe A4 and NVMe B1 to NVMe B4 on the NVMe SSD expander module. NVMe 1 to NVMe 4 represent the labels on the NVMe data cables.

 

Hybrid 4SFF SAS/SATA and 4SFF NVMe drive cabling

To install 4SFF NVMe drives, you must install a 4-port NVMe SSD expander module to riser card 0 in the compute module.

Connect 4SFF SAS/SATA drives to the main board and connect 4SFF NVMe drives to the 4-port NVMe SSD expander module. When connecting NVMe data cables, make sure you connect the corresponding peer ports with the correct NVMe data cable, as shown in Figure 131.

Figure 131 Hybrid 4SFF SAS/SATA and 4SFF NVMe drive cabling

(1) and (5) AUX signal cables

(2) and (6) Power cords

(3) SATA data cable

(4) NVMe data cables

 

 

NOTE:

In the figure, 1 to 4 represent data ports NVMe 1 to NVMe 4 on the NVMe SSD expander module. NVMe 1 to NVMe 4 represent the labels on the NVMe data cables.

 

4SFF SAS/SATA drive cabling

Figure 132 4SFF SAS/SATA drive backplane connected to the main board

(1) AUX signal cable

(2) Power cord

(3) SAS/SATA data cable

 

4SFF NVMe drive cabling

To install 4SFF NVMe drives, you must install a 4-port NVMe SSD expander module to riser card 0 in the compute module.

When connecting NVMe data cables, make sure you connect the corresponding peer ports with the correct NVMe data cable, as shown in Figure 131.

Figure 133 4SFF NVMe drive cabling

(1) NVMe data cables

(2) AUX signal cable

(5) Power cord

 

 

NOTE:

In the figure, 1 to 4 represent data ports NVMe 1 to NVMe 4 on the NVMe SSD expander module. NVMe 1 to NVMe 4 represent the labels on the NVMe data cables.

 

Storage controller cabling in riser cards at the server rear

When connecting storage controller data cables, make sure you connect the corresponding peer ports with the correct storage controller data cable. Use Table 9 and Table 10 to determine the ports to be connected and the cable to use.

Table 9 Storage controller cabling method (for all storage controllers except for the RAID-LSI-9460-16i(4G))

Location of drives | Location of the storage controller | Storage controller data cable | Riser card SAS port | Cabling method
Compute module 1 | Slot 1, 2, or 3 | SAS PORT A1/B1 | SAS port A1 | See Figure 134.
Compute module 1 | Slot 1, 2, or 3 | SAS PORT A2/B2 | SAS port A2 | See Figure 134.
Compute module 2 | Slot 4, 5, or 6 | SAS PORT A1/B1 | SAS port B1 | See Figure 135.
Compute module 2 | Slot 4, 5, or 6 | SAS PORT A2/B2 | SAS port B2 | See Figure 135.

 

Table 10 Storage controller cabling method (for the RAID-LSI-9460-16i(4G) storage controller)

Location of drives | Location of the storage controller | Storage controller data connector | Storage controller data cable | Riser card SAS port | Cabling method
Any compute module | Any slot | C0 | SAS PORT A1/B1 | SAS port A1 | See Figure 136 and Figure 137.
Any compute module | Any slot | C1 | SAS PORT A2/B2 | SAS port A2 | See Figure 136 and Figure 137.
Any compute module | Any slot | C2 | SAS PORT A1/B1 | SAS port B1 | See Figure 136 and Figure 137.
Any compute module | Any slot | C3 | SAS PORT A2/B2 | SAS port B2 | See Figure 136 and Figure 137.

 

Figure 134 Connecting the storage controller cable in slot 3 (for all storage controllers except for the RAID-LSI-9460-16i(4G))

 

Figure 135 Connecting the storage controller cable in slot 6 (for all storage controllers except for the RAID-LSI-9460-16i(4G))

 

Figure 136 Connecting the storage controller cables in slot 3 (for the RAID-LSI-9460-16i(4G))

 

Figure 137 Connecting the storage controller cables in slot 6 (for the RAID-LSI-9460-16i(4G))

 

Connecting the flash card on a storage controller

When connecting a flash card cable and a supercapacitor cable, make sure you connect the correct supercapacitor connectors on the riser card and on the main board. Use Table 11 to determine the method for flash card and supercapacitor cabling.

Table 11 Flash card and supercapacitor cabling method

Location of the storage controller

Supercapacitor connector on the riser card

Supercapacitor connector on the main board

Slot 1, 2, or 3

Supercapacitor connector 1 (See Figure 138)

Supercapacitor connector 1 in compute module 1

Slot 4, 5, or 6

Supercapacitor connector 2 (See Figure 139)

Supercapacitor connector 1 in compute

module 2

 

Figure 138 Connecting the flash card on a storage controller in slot 3

 

Figure 139 Connecting the flash card on a storage controller in slot 6

 

Connecting the GPU power cord

The method for connecting the GPU power cord is the same for different GPU models. This section uses the GPU-P100 as an example.

Figure 140 Connecting the GPU power cord

 

Connecting the NCSI cable for a PCIe Ethernet adapter

Figure 141 Connecting the NCSI cable for a PCIe Ethernet adapter

 

Connecting the front I/O component cable from the right chassis ear

Figure 142 Connecting the front I/O component cable

 

Connecting the cable for the front VGA and USB 2.0 connectors on the left chassis ear

Figure 143 Connecting the cable for the front VGA and USB 2.0 connectors on the left chassis ear

 


Maintenance

The following information describes the guidelines and tasks for daily server maintenance.

Guidelines

·          Keep the equipment room clean and tidy. Remove unnecessary devices and objects from the equipment room.

·          Make sure the temperature and humidity in the equipment room meet the server operating requirements.

·          Regularly check the server from HDM for operating health issues.

·          Keep the operating system and software up to date as required.

·          Make a reliable backup plan:

¡  Back up data regularly.

¡  If data operations on the server are frequent, back up data as needed at shorter intervals than the regular backup interval.

¡  Check the backup data regularly for data corruption.

·          Stock spare components on site in case replacements are needed. After a spare component is used, prepare a new one.

·          Keep the network topology up to date to facilitate network troubleshooting.

Maintenance tools

The following are major tools for server maintenance:

·          Hygrothermograph—Monitors the operating environment of the server.

·          HDM, FIST, and iFIST—Monitor the operating status of the server.
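In addition to checking these tools interactively, basic health information can be polled in a script. The following is a minimal sketch that assumes HDM exposes a standard Redfish service at /redfish/v1 over HTTPS; the address and credentials are placeholders, and whether your HDM version provides Redfish is an assumption rather than a statement of this document.

```python
import requests  # third-party package: pip install requests

HDM_ADDRESS = "https://192.168.1.2"  # HDM dedicated network port default IP; adjust as needed
CREDENTIALS = ("admin", "password")  # placeholder credentials

def print_system_health():
    """Print the overall health reported for each system resource over standard Redfish paths."""
    base = f"{HDM_ADDRESS}/redfish/v1"
    # verify=False skips certificate validation for a factory self-signed certificate (assumption)
    systems = requests.get(f"{base}/Systems", auth=CREDENTIALS, verify=False, timeout=10).json()
    for member in systems.get("Members", []):
        system = requests.get(f"{HDM_ADDRESS}{member['@odata.id']}",
                              auth=CREDENTIALS, verify=False, timeout=10).json()
        print(system.get("Id"), system.get("Status", {}).get("Health"))

if __name__ == "__main__":
    print_system_health()
```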

Maintenance tasks

Observing LED status

Observe the LED status on the front and rear panels of the server to verify that the server modules are operating correctly. For more information about the status of the front and rear panel LEDs, see "Front panel" and "Rear panel."

Monitoring the temperature and humidity in the equipment room

Use a hygrothermograph to monitor the temperature and humidity in the equipment room.

The temperature and humidity in the equipment room must meet the server requirements described in "Environment requirements."

Examining cable connections

Verify that the cables and power cords are correctly connected.

Guidelines

·          Do not use excessive force when connecting or disconnecting cables.

·          Do not twist or stretch the cables.

·          Organize the cables appropriately. Make sure components to be installed or replaced do not touch any cables, either now or after installation.

Checklist

·          The cable type is correct.

·          The cables are correctly and firmly connected and the cable length is appropriate.

·          The cables are in good condition and are not twisted or corroded at the connection point.

Technical support

If you encounter any complicated problems during daily maintenance or troubleshooting, contact H3C Support.

Before contacting H3C Support, collect the following server information to facilitate troubleshooting:

·          Log and sensor information:

¡  Log information:

-      Event logs.

-      HDM audit logs and update logs.

-      SDS logs.

-      Diagnostics logs.

For information about how to collect log information, see HDM and iFIST online help.

¡  Sensor information.

To collect the sensor information, you must log in to the HDM Web interface. For more information, see HDM online help.

·          Product serial number.

·          Product model and name.

·          Snapshots of error messages and descriptions.

·          Hardware change history, including installation, replacement, insertion, and removal of hardware.

·          Third-party software installed on the server.

·          Operating system type and version.
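If the operating system is still accessible, part of this information can be collected with a short script before you open a case. The sketch below targets a Linux system and assumes the dmidecode utility is installed and the script runs with root privileges; HDM and iFIST remain the tools for collecting logs and sensor data.

```python
import platform
import subprocess

def collect_basic_info() -> dict:
    """Collect the OS type/version plus the product name and serial number reported by SMBIOS."""
    info = {"os": f"{platform.system()} {platform.release()}"}
    for key, dmi_field in (("product_name", "system-product-name"),
                           ("serial_number", "system-serial-number")):
        try:
            info[key] = subprocess.check_output(
                ["dmidecode", "-s", dmi_field], text=True).strip()
        except (OSError, subprocess.CalledProcessError):
            info[key] = "unavailable"  # dmidecode missing or not run as root
    return info

if __name__ == "__main__":
    for name, value in collect_basic_info().items():
        print(f"{name}: {value}")
```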

 


Appendix A  Server specifications

A UIS-Cell 6000 G3 system contains a server hardware platform, an HCI core, and a management platform called UIS Manager. The following information provides only specifications of the hardware platform.

 

 

NOTE:

The information in this document might differ from your product if your product contains custom configuration options or features.

 

Figure 144 Chassis view

 

The servers come in the models listed in Table 12. These models support different drive configurations.

Table 12 Server models

Model

Maximum drive configuration

48SFF

48 SFF SAS/SATA drives.

32SFF

·         24 SFF SAS/SATA drives and 8 SFF NVMe drives.

·         28 SFF SAS/SATA drives and 4 SFF NVMe drives.

·         32 SFF SAS/SATA drives.

16SFF

·         4 SFF SAS/SATA drives and 12 SFF NVMe drives.

·         8 SFF SAS/SATA drives and 8 SFF NVMe drives.

·         12 SFF SAS/SATA drives and 4 SFF NVMe drives.

·         16 SFF SAS/SATA drives.

·         16 SFF NVMe drives.

 

Technical specifications

Item

48SFF

32SFF

16SFF

Dimensions (H × W × D)

·         Without a security bezel and chassis ears: 174.8 × 444 × 807.4 mm (6.88 × 17.48 × 31.79 in)

·         With a security bezel: 174.8 × 444 × 829.7 mm (6.88 × 17.48 × 32.67 in)

Max. weight

67 kg (147.71 lb)

Processors

4 × Intel Purley processors

(Up to 3.8 GHz base frequency, maximum 205 W power consumption, 38.5 MB L3 cache, and a maximum of 28 cores per processor)

Memory

A maximum of 48 DIMMs

Supports a mixture of DCPMMs and DRAM DIMMs

Max storage (see the calculation example after this table)

·         SAS drives: 115.2 TB

·         SATA drives: 184.32 TB

·         SAS drives: 76.8 TB

·         SATA drives: 122.88 TB

·         SAS + NVMe drives: 57.6 + 64 TB

·         SATA + NVMe drives: 92.16 + 64 TB

·         SAS drives: 38.4 TB

·         SATA drives: 61.44 TB

·         NVMe drives: 128 TB

·         SAS + NVMe drives: 9.6 + 96 TB

·         SATA + NVMe drives: 15.36 + 96 TB

Chipset

Intel C622 Lewisburg chipset

I/O connectors

·         6 × USB connectors:

¡  4 × USB 3.0 connectors (one on the right chassis ear, two at the server rear, and one in the management module)

¡  2 × USB 2.0 connectors (provided by the left chassis ear)

·         8 × SATA connectors (four on the main board of each compute module; the connectors must be used with storage controllers)

·         1 × RJ-45 HDM dedicated port at the server rear

·         2 × VGA connectors (one at the server rear and one at the server front)

·         1 × BIOS serial port at the server rear

Expansion slots

·         20 × PCIe 3.0 slots

·         1 × mLOM Ethernet adapter connector

·         1 × dual SD card extended module connector

Optical drives

External USB optical drives

Power supplies

4 × hot-swappable power supplies, N + N redundancy

800 W, 800 W 336V high-voltage, 850 W high-efficiency Platinum, 800 W 48VDC, 1200 W, and 1600 W power supplies

Standards

CE EMC

CE RoHS

CCC

FCC

ICES-003

VCCI
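The maximum storage figures in this table correspond to the maximum drive count of each server model multiplied by per-drive capacities of 2.4 TB for SAS, 3.84 TB for SATA, and 8 TB for NVMe. Mapping these capacities to specific drive models is an inference from the drive tables in Appendix B; the arithmetic itself is sketched below.

```python
# Per-drive capacities (TB) inferred from the drive tables in Appendix B
SAS, SATA, NVME = 2.4, 3.84, 8.0

print(round(48 * SAS, 2))                       # 115.2      -> 48SFF, SAS drives
print(round(48 * SATA, 2))                      # 184.32     -> 48SFF, SATA drives
print(round(24 * SAS, 2), round(8 * NVME, 2))   # 57.6 64.0  -> 32SFF, SAS + NVMe drives
print(round(16 * NVME, 2))                      # 128.0      -> 16SFF, NVMe drives
print(round(4 * SATA, 2), round(12 * NVME, 2))  # 15.36 96.0 -> 16SFF, SATA + NVMe drives
```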

 

Components

Figure 145 Server components

 

Table 13 Server components

Item

Description

(1) Chassis access panel

N/A

(2) Power supply air baffle

Provides ventilation aisles for power supplies.

(3) Dual SD card extended module

Provides two SD card slots.

(4) System battery

Supplies power to the system clock.

(5) Management module

Provides system management and monitoring features, as well as a management network port, a VGA connector, and USB connectors.

(6) NVMe VROC module

Works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

(7) PDB

Power distribution board. Used for installing power supplies and the management module, and provides cable connectors for the front I/O component and the front VGA and USB 2.0 connectors.

(8) Midplane

Provides data and power channels in the server.

(9) Fan module

Supports hot swapping. Fans in the fan modules support N+1 redundancy.

(10) Power supply

Supplies power to the server and supports hot swapping and N+N redundancy.

(11) Riser card blank

Installed in an empty PCIe riser bay to ensure good ventilation.

(12) Rear riser card 2

Installed on PCIe riser bay 2 at the server rear.

(13) Rear riser card 1 or 3

Installed on PCIe riser bay 1 or 3 at the server rear.

(14) Riser card air baffle

Provides ventilation aisles for PCIe modules in riser cards at the server rear.

(15) Chassis

N/A

(16) Chassis ears

Attach the server to the rack. The right ear is integrated with the front I/O component, and the left ear is integrated with VGA and USB 2.0 connectors. The serial label pull tab on the left ear provides the HDM default login settings and document QR code.

(17) LCD

Displays basic server information and operating status, and allows users to perform basic server settings.

The LCD is not supported currently.

(18) Diagnostic panel

Displays information about faulty components for quick diagnosis.

(19) Drive

Drive for data storage.

(20) Drive backplane

Provides power and data channels for drives.

(21) Right air baffle

Provides ventilation aisles for processor heatsinks and DIMMs, and an installation location for a supercapacitor.

(22) Compute module and its main board

Integrates all compute module parts and components.

(23) Supercapacitor

Supplies power to the flash card of the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when power outage occurs.

(24) Supercapacitor holder

Secures a supercapacitor in the chassis.

(25) Memory

Stores computing data and data exchanged with external storage.

(26) Low mid air baffle

Used in a compute module when the RS-FHHL-G3 riser card is installed to provide ventilation aisles for DIMMs.

(27) Riser card 0

Installed on the PCIe riser connector in a compute module.

(28) Processor

Integrates a memory controller and a PCIe controller to provide data processing capabilities for the server.

(29) Processor retaining bracket

Attaches a processor to the heatsink.

(30) Processor heatsink

Cools the processor.

(31) Compute module access panel

N/A

 

Front panel

Front panel view of the server

Figure 146, Figure 147, and Figure 148 show the front panel views of 48SFF, 32SFF, and 16SFF servers, respectively.

Figure 146 48SFF front panel

(1) Serial label pull tab module

(2) USB 2.0 connectors

(3) VGA connector

(4) Compute module 1

(5) SAS/SATA drive or diagnostic panel (optional)

(6) USB 3.0 connector

(7) Compute module 2

 

Figure 147 32SFF front panel

(1) Serial label pull tab module

(2) USB 2.0 connectors

(3) VGA connector

(4) Drive cage bay 1

(5) Compute module 1

(6) Drive cage bay 2

(7) SAS/SATA drive, NVMe drive, or diagnostic panel (optional)

(8) USB 3.0 connector

(9) Compute module 2

 

Figure 148 16SFF front panel

(1) Serial label pull tab module

(2) USB 2.0 connectors

(3) VGA connector

(4) Drive cage bay 1

(5) Compute module 1

(6) Drive cage bay 2

(7) SAS/SATA drive, NVMe drive, or diagnostic panel (optional)

(8) USB 3.0 connector

(9) Drive cage bay 4

(10) Compute module 2

(11) Drive cage bay 3

 

Front panel view of a compute module

Figure 149 and Figure 150 show the front panel views of 24SFF and 8SFF compute modules, respectively.

Figure 149 24SFF compute module front panel

(1) 24SFF SAS/SATA drives

(2) Diagnostic panel (optional)

 

Figure 150 8SFF compute module front panel

(1) Drive cage bay 1/3 for 4SFF SAS/SATA or NVMe drives (optional)

(2) Diagnostic panel (optional)

(3) Drive cage bay 2/4 for 4SFF SAS/SATA or NVMe drives (optional)

(4) Diagnostic panel (optional)

 

 

NOTE:

Drive cage bays 1 and 2 are for compute module 1, and drive cage bays 3 and 4 are for compute module 2.

 

LEDs and buttons

The LEDs and buttons are the same on all server models. Figure 151 shows the front panel LEDs and buttons. Table 14 describes the status of the front panel LEDs.

Figure 151 Front panel LEDs and buttons

(1) Power on/standby button and system power LED

(2) UID button LED

(3) Health LED

(4) mLOM Ethernet adapter Ethernet port LED

 

Table 14 LEDs and buttons on the front panel

Button/LED

Status

Power on/standby button and system power LED

·         Steady green—The system has started.

·         Flashing green (1 Hz)—The system is starting.

·         Steady amber—The system is in Standby state.

·         Off—No power is present. Possible reasons:

¡  No power source is connected.

¡  No power supplies are present.

¡  The installed power supplies are faulty.

¡  The system power LED is not connected correctly.

UID button LED

·         Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡  Press the UID button LED.

¡  Activate the UID LED from HDM.

·         Flashing blue:

¡  1 Hz—The firmware is being upgraded or the system is being managed from HDM.

¡  4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds.

·         Off—UID LED is not activated.

Health LED

·         Steady green—The system is operating correctly.

·         Flashing green (4 Hz)—HDM is initializing.

·         Flashing amber (0.5 Hz)—A predictive alarm has occurred.

·         Flashing amber (1 Hz)—A general alarm has occurred.

·         Flashing red (1 Hz)—A severe alarm has occurred.

If a system alarm is present, log in to HDM to obtain more information about the system running status.

mLOM Ethernet adapter Ethernet port LED

·         Steady green—A link is present on the port.

·         Flashing green (1 Hz)—The port is receiving or sending data.

·         Off—No link is present on the port.

 

Ports

Table 15 Ports on the front panel

Port

Type

Description

USB connector

USB 3.0/2.0

Connects the following devices:

·         USB flash drive.

·         USB keyboard or mouse.

·         USB optical drive for operating system installation.

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

 

Rear panel

Rear panel view

Figure 152 shows the rear panel view.

Figure 152 Rear panel components

(1) Power supply 1

(2) Power supply 2

(3) VGA connector

(4) BIOS serial port

(5) HDM dedicated network port (1 Gbps, RJ-45, default IP address 192.168.1.2/24)

(6) USB 3.0 connectors

(7) Power supply 3

(8) Power supply 4

(9) PCIe riser bay 3:

·         PCIe slots 1 through 3 (processor 2 in compute module 1)

·         PCIe slots 4 through 6 (processor 2 in compute module 2)

(10) PCIe riser bay 2:

·         PCIe slot 1 (processor 2 in compute module 1)

·         PCIe slot 2 (processor 1 in compute module 1)

·         PCIe slots 3, 4, and 6 (processor 1 in compute module 2)

·         PCIe slot 5 (processor 2 in compute module 2)

(11) PCIe riser bay 1:

·         PCIe slots 1 through 3 (processor 1 in compute module 1)

·         PCIe slots 4 through 6 (processor 1 in compute module 2)

 

 

NOTE:

·      If a processor is not present, the corresponding PCIe slots are unavailable.

·      Some PCIe modules require PCIe I/O resources. Make sure the number of PCIe modules requiring PCIe I/O resources does not exceed eleven. For more information, see "PCIe modules."

 

LEDs

Figure 153 shows the rear panel LEDs. Table 16 describes the status of the rear panel LEDs.

Figure 153 Rear panel LEDs

(1) Power supply LED for power supply 1

(2) Power supply LED for power supply 2

(3) UID LED

(4) Link LED of the Ethernet port

(5) Activity LED of the Ethernet port

(6) Power supply LED for power supply 3

(7) Power supply LED for power supply 4

 

Table 16 LEDs on the rear panel

LED

Status

Power supply LED

·         Steady green—The power supply is operating correctly.

·         Flashing green (1 Hz)—Power is being input correctly but the system is not powered on.

·         Flashing green (0.33 Hz)—The power supply is in standby state and does not output power.

·         Flashing green (2 Hz)—The power supply is updating its firmware.

·         Steady amber—Either of the following conditions exists:

¡  The power supply is faulty.

¡  The power supply does not have power input, but another power supply has correct power input.

·         Flashing amber (1 Hz)—An alarm has occurred on the power supply.

·         Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown.

UID LED

·         Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡  Press the UID button LED.

¡  Enable UID LED from HDM.

·         Flashing blue:

¡  1 Hz—The firmware is being updated or the system is being managed by HDM.

¡  4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds.

·         Off—UID LED is not activated.

Link LED of the Ethernet port

·         Steady green—A link is present on the port.

·         Off—No link is present on the port.

Activity LED of the Ethernet port

·         Flashing green (1 Hz)—The port is receiving or sending data.

·         Off—The port is not receiving or sending data.

 

Ports

Table 17 Ports on the rear panel

Port

Type

Description

HDM dedicated network port

RJ-45

Establishes a network connection to manage HDM from its Web interface. For a reachability check example, see the sketch after this table.

USB connector

USB 3.0

Connects the following devices:

·         USB flash drive.

·         USB keyboard or mouse.

·         USB optical drive for operating system installation.

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

BIOS serial port

RJ-45

The BIOS serial port is used for the following purposes:

·         Logging in to the server when the remote network connection to the server has failed.

·         Establishing a GSM modem or encryption lock connection.

Power receptacle

Standard single-phase

Connects the power supply to the power source.
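Before the first login over the HDM dedicated network port, assign the management PC an address in the default HDM subnet (192.168.1.0/24) and verify that the default HDM address is reachable. The following is a minimal reachability sketch; the default IP address is taken from "Rear panel view," while the use of TCP port 443 for the Web interface is an assumption.

```python
import socket

HDM_DEFAULT_IP = "192.168.1.2"  # documented default; change if HDM has been reconfigured
HTTPS_PORT = 443                # assumption: the HDM Web interface listens on standard HTTPS

def hdm_reachable(host: str = HDM_DEFAULT_IP, port: int = HTTPS_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the HDM Web interface can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("HDM reachable" if hdm_reachable() else "HDM not reachable: check cabling and PC IP settings")
```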

 

Main board of a compute module

Main board components

8SFF and 24SFF compute modules have the same main board layout.

Figure 154 Main board components

(1) SAS port B2 (×4 SAS ports) for PCIe riser bay 3

(2) SAS port B1 (×4 SAS ports) for PCIe riser bay 3

(3) Supercapacitor connector 2 for PCIe riser bay 3

(4) PCIe riser connector 0 for processor 2

(5) Supercapacitor connector 1 for PCIe riser bay 1

(6) SAS port A1 (×4 SAS ports) for PCIe riser bay 1

(7) SAS port A2 (×4 SAS ports) for PCIe riser bay 1

(8) LCD connector

(9) Drive backplane power connector 1

(10) Drive backplane AUX connector 1

(11) Drive backplane power connector 2

(12) Drive backplane AUX connector 2

 

For information about the supported PCIe riser cards and their installation locations, see "Riser cards."

DIMM slots

The main board of a compute module provides six DIMM channels per processor, 12 channels per compute module. Each channel contains one white-coded slot and one black-coded slot, as shown in Table 18.

Table 18 DIMM slot numbering and color-coding scheme

Processor

DIMM slots

Processor 1

A1 through A6 (white coded)

A7 through A12 (black coded)

Processor 2

B1 through B6 (white coded)

B7 through B12 (black coded)

 

8SFF and 24SFF compute modules have the same physical layout of the DIMM slots on the main board, as shown in Figure 155. For more information about the DIMM slot population rules, see the guidelines in "Installing DIMMs."

Figure 155 DIMM physical layout

 

Management module

Management module components

Figure 156 Management module components

(1) Dual SD card extended module slot

(2) TPM/TCM connector

(3) System maintenance switches

(4) NVMe VROC module connector

(5) System battery

(6) Internal USB 3.0 connector

 

System maintenance switches

Use the system maintenance switches if you forget the HDM username, HDM password, or BIOS password, or need to restore the default BIOS settings, as described in Table 19. To identify the location of the switches, see Figure 156.

Table 19 System maintenance switch

Item

Description

Remarks

Switch 1

·         OFF (default)—HDM login requires the username and password of a valid HDM user account.

·         ON—HDM login requires the default username and password.

For security purposes, turn off the switch after you complete tasks with the default username and password as a best practice.

Switch 5

·         OFF (default)—Normal server startup.

·         ON—Restores the default BIOS settings.

To restore the default BIOS settings, turn on the switch and then start the server. The default BIOS settings will be restored. Before the next server startup, power off the server and then turn off the switch to perform a normal server startup.

Switch 6

·         OFF (default)—Normal server startup.

·         ON—Clears all passwords from the BIOS at server startup.

To clear all passwords from the BIOS, turn on the switch and then start the server. All the passwords will be cleared from the BIOS. Before the next server startup, turn off the switch to perform a normal server startup.

Switches 2, 3, 4, 7, and 8

Reserved.

N/A

 

PDB

Figure 157 PDB components

(1) Front I/O connector

(2) Front VGA and USB 2.0 connector

 


Appendix B  Component specifications

This appendix provides information about hardware options available for the server at the time of this writing. The hardware options available for the server are subject to change over time. For more information about hardware options supported by the server, visit the query tool at http://www.h3c.com/cn/home/qr/default.htm?id=367.

About component model names

The model name of a hardware option in this document might differ slightly from its model name label.

A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR4-2666-8G-1Rx8-R memory model represents memory module labels including DDR4-2666-8G-1Rx8-R, DDR4-2666-8G-1Rx8-R-F, and DDR4-2666-8G-1Rx8-R-S, which have different suffixes.

Software compatibility

All the components are compatible with the following software versions:

·          HDM-1.10.28P01 and later versions.

·          BIOS-1.00.10P01 and later versions.

Processors

Table 20 SkyLake processors

Model

Base frequency

Power

Number of cores

Cache (L3)

Supported max. data rate of DIMMs

8180

2.5 GHz

205 W

28

38.50 MB

2666 MHz

8176

2.1 GHz

165 W

28

38.50 MB

2666 MHz

8170

2.1 GHz

165 W

26

35.75 MB

2666 MHz

8168

2.7 GHz

205 W

24

33.00 MB

2666 MHz

8164

2.0 GHz

150 W

26

35.75 MB

2666 MHz

8160

2.1 GHz

150 W

24

33.00 MB

2666 MHz

8158

3.0 GHz

150 W

12

25.00 MB

2666 MHz

8156

3.6 GHz

105 W

4

17.00 MB

2666 MHz

8153

2.0 GHz

125 W

16

22.00 MB

2666 MHz

6154

3.0 GHz

200 W

18

24.75 MB

2666 MHz

6152

2.1 GHz

140 W

22

30.25 MB

2666 MHz

6150

2.7 GHz

165 W

18

24.75 MB

2666 MHz

6148

2.4 GHz

150 W

20

27.50 MB

2666 MHz

6146

3.2 GHz

165 W

12

24.75 MB

2666 MHz

6144

3.5 GHz

150 W

8

24.75 MB

2666 MHz

6142

2.6 GHz

150 W

16

22.00 MB

2666 MHz

6140

2.3 GHz

140 W

18

24.75 MB

2666 MHz

6138

2.0 GHz

125 W

20

27.50 MB

2666 MHz

6136

3.0 GHz

150 W

12

24.75 MB

2666 MHz

6134

3.2 GHz

130 W

8

24.75 MB

2666 MHz

6132

2.6 GHz

140 W

14

19.25 MB

2666 MHz

6130

2.1 GHz

125 W

16

22.00 MB

2666 MHz

6128

3.4 GHz

115 W

6

19.25 MB

2666 MHz

6126

2.6 GHz

125 W

12

19.25 MB

2666 MHz

5122

3.6 GHz

105 W

4

16.50 MB

2666 MHz

5120

2.2 GHz

105 W

14

19.25 MB

2400 MHz

5118

2.3 GHz

105 W

12

16.5 MB

2400 MHz

5117

2.0 GHz

105 W

14

19.25 MB

2400 MHz

5115

2.4 GHz

85 W

10

13.75 MB

2400 MHz

8180M

2.5 GHz

205 W

28

38.50 MB

2666 MHz

8176M

2.1 GHz

165 W

28

38.50 MB

2666 MHz

8170M

2.1 GHz

165 W

26

35.75 MB

2666 MHz

8160M

2.1 GHz

150 W

24

33.00 MB

2666 MHz

6142M

2.6 GHz

150 W

16

22.00 MB

2666 MHz

6140M

2.3 GHz

140 W

18

24.75 MB

2666 MHz

6134M

3.2 GHz

130 W

8

24.75 MB

2666 MHz

 

Table 21 Cascade Lake processors

Model

Base frequency

Power (W)

Number of cores

Cache (L3)

Supported max. data rate of DIMMs

8276

2.2 GHz

165

28

38.5 MB

2933 MHz

8260

2.4 GHz

165

24

35.75 MB

2933 MHz

6252

2.1 GHz

150

24

35.75 MB

2933 MHz

6248

2.5 GHz

150

20

27.5 MB

2933 MHz

6244

3.6 GHz

150

8

24.75 MB

2933 MHz

6242

2.8 GHz

150

16

22 MB

2933 MHz

6230

2.1 GHz

125

20

27.5 MB

2933 MHz

5220

2.2 GHz

125

18

24.75 MB

2666 MHz

5218

2.3 GHz

125

16

22 MB

2666 MHz

6240C

2.6 GHz

150

18

24.75 MB

2933 MHz

6246

3.3 GHz

165

12

24.75 MB

2933 MHz

8253

2.2 GHz

125

16

22 MB

2933 MHz

8268

2.9 GHz

205

24

35.75 MB

2933 MHz

6234

3.3 GHz

130

8

24.75 MB

2933 MHz

5222

3.8 GHz

105

4

16.5 MB

2933 MHz

8280

2.7 GHz

205

28

38.5 MB

2933 MHz

6238

2.1 GHz

140

22

30.25 MB

2933 MHz

6254

3.1 GHz

200

18

24.75 MB

2933 MHz

5215

2.5 GHz

85

10

13.75 MB

2666 MHz

5217

3.0 GHz

115

8

24.75 MB

2666 MHz

6226

2.7 GHz

125

12

19.25 MB

2933 MHz

8276L

2.2 GHz

165

28

38.5 MB

2933 MHz

8255C

2.5 GHz

165

24

35.75 MB

2933 MHz

8276M

2.2 GHz

165

28

38.5 MB

2933 MHz

6254

3.1 GHz

200

18

24.75 MB

2933 MHz

5218N

2.3 GHz

110

16

22 MB

2666 MHz

6230N

2.3 GHz

125

20

27.5 MB

2933 MHz

6262V

1.9 GHz

135

24

33 MB

2400 MHz

5215L

2.5 GHz

85

10

13.75 MB

2666 MHz

6222V

1.8 GHz

125

20

27.5 MB

2400 MHz

5215M

2.5 GHz

85

10

13.75 MB

2666 MHz

6240M

2.6 GHz

150

18

24.75 MB

2933 MHz

 

DIMMs

The server provides 6 DIMM channels per processor, 24 channels in total. Each DIMM channel has two DIMM slots and supports a maximum of eight ranks. For the physical layout of DIMM slots, see "DIMM slots."
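As a cross-check, these totals follow from the four processors listed in "Technical specifications"; a minimal sketch of the arithmetic:

```python
processors, channels_per_cpu, slots_per_channel = 4, 6, 2

channels = processors * channels_per_cpu   # 24 DIMM channels in total
dimm_slots = channels * slots_per_channel  # 48 DIMM slots, matching "A maximum of 48 DIMMs"

# The 8-rank-per-channel limit is reached by, for example, two quad-rank
# DDR4-2666-64G-4Rx4-L LRDIMMs (2 x 4 ranks = 8 ranks) in one channel.
print(channels, dimm_slots)
```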

DRAM specifications

Product code

Model

Type

Capacity

Data rate

Rank

0231A6SP

DDR4-2666-16G-1Rx4-R

RDIMM

16 GB

2666 MHz

Single-rank

0231A6SS

DDR4-2666-32G-2Rx4-R

RDIMM

32 GB

2666 MHz

Dual-rank

0231A8QJ

DDR4-2666-64G-4Rx4-L

LRDIMM

64 GB

2666 MHz

Quad-rank

0231AC4S

DDR4-2933P-16G-1Rx4-R

RDIMM

16 GB

2933 MHz

Single-rank

0231AC4V

DDR4-2933P-16G-2Rx8-R

RDIMM

16 GB

2933 MHz

Dual-rank

0231AC4T

DDR4-2933P-32G-2Rx4-R

RDIMM

32 GB

2933 MHz

Dual-rank

0231AC4N

DDR4-2933P-64G-2Rx4-R

RDIMM

64 GB

2933 MHz

Dual-rank

 

DCPMM specifications

Product code

Model

Type

Capacity

Data rate

0231AC5R

AP-128G-NMA1XBD128GQSE

Apache Pass

128 GB

2666 MHz

0231AC65

AP-512G-NMA1XBD512GQSE

Apache Pass

512 GB

2666 MHz

 

DRAM DIMM rank classification label

A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.

To determine the rank classification of a DRAM DIMM, use the label attached to the DIMM, as shown in Figure 158.

Figure 158 DRAM DIMM rank classification label

 

Table 22 DIMM rank classification label description

Callout

Description

Remarks

1

Capacity

·         8GB.

·         16GB.

·         32GB.

2

Number of ranks

·         1R— One rank.

·         2R—Two ranks.

·         4R—Four ranks.

·         8R—Eight ranks.

3

Data width

·         ×4—4 bits.

·         ×8—8 bits.

4

DIMM generation

Only DDR4 is supported.

5

Data rate

·         2133P—2133 MHz.

·         2400T—2400 MHz.

·         2666V—2666 MHz.

·         2933Y—2933 MHz.

6

DIMM type

·         L—LRDIMM.

·         R—RDIMM.
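As an example of reading a label with the fields in Table 22, a hypothetical label of 16GB 2Rx8 DDR4-2933Y-R identifies a 16 GB, dual-rank, x8-width DDR4 RDIMM rated at 2933 MHz. The sample label and the exact label layout are for illustration only; a minimal decoding sketch based on the legend above:

```python
import re

# Lookup tables taken from Table 22; the label layout assumed here is illustrative.
SPEED = {"2133P": 2133, "2400T": 2400, "2666V": 2666, "2933Y": 2933}
DIMM_TYPE = {"L": "LRDIMM", "R": "RDIMM"}

def decode_label(label: str) -> dict:
    """Decode a label such as '16GB 2Rx8 DDR4-2933Y-R' using the Table 22 legend."""
    match = re.match(r"(\d+)GB (\d)Rx(\d) DDR4-(\w+)-([LR])$", label)
    if not match:
        raise ValueError(f"Unrecognized label format: {label}")
    capacity, ranks, width, speed, dimm_type = match.groups()
    return {
        "capacity_gb": int(capacity),
        "ranks": int(ranks),
        "data_width_bits": int(width),
        "data_rate_mhz": SPEED[speed],
        "type": DIMM_TYPE[dimm_type],
    }

if __name__ == "__main__":
    print(decode_label("16GB 2Rx8 DDR4-2933Y-R"))
```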

 

HDDs and SSDs

Drive specifications

SAS HDDs

Model

Form factor

Capacity

Rate

Rotating speed

HDD-300G-SAS-12G-15K-SFF

SFF

300 GB

12 Gbps

15000 RPM

HDD-300G-SAS-12G-10K-SFF-EP

SFF

300 GB

12 Gbps

10000 RPM

HDD-600G-SAS-12G-15K-SFF-1

SFF

600 GB

12 Gbps

15000 RPM

HDD-900G-SAS-12G-10K-SFF

SFF

900 GB

12 Gbps

10000 RPM

HDD-900G-SAS-12G-15K-SFF

SFF

900 GB

12 Gbps

15000 RPM

HDD-1.2T-SAS-12G-10K-SFF

SFF

1.2 TB

12 Gbps

10000 RPM

HDD-1.8T-SAS-12G-10K-SFF

SFF

1.8 TB

12 Gbps

10000 RPM

HDD-2.4T-SAS-12G-10K-SFF

SFF

2.4 TB

12 Gbps

10000 RPM

 

SATA HDDs

Model

Form factor

Capacity

Rate

Rotating speed

HDD-1T-SATA-6G-7.2K-SFF-1

SFF

1 TB

6 Gbps

7200 RPM

HDD-2T-SATA-6G-7.2K-SFF

SFF

2 TB

6 Gbps

7200 RPM

 

SATA SSDs

Model

Vendor

Form factor

Capacity

Rate

SSD-240G-SATA-6G-EV-SFF-i-1

Intel

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-EM-SFF-i-2

Intel

SFF

240 GB

6 Gbps

SSD-480G-SATA-6G-EV-SFF-i-2

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-EM-SFF-i-3

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-EV-sa

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-2

Micron

SFF

480 GB

6 Gbps

SSD-960G-SATA-6G-SFF-2

Micron

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EM-SFF-i-2

Intel

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EV-SFF-i

Intel

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EM-SFF-m

Micron

SFF

960 GB

6 Gbps

SSD-1.92T-SATA-6G-EM-SFF-i-1

Intel

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EV-SFF-i

Intel

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-SFF-3

Micron

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EM-SFF-m

Micron

SFF

1.92 TB

6 Gbps

SSD-3.84T-SATA-6G-SFF

Micron

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-EM-SFF-i

Intel

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-EV-SFF-i

Intel

SFF

3.84 TB

6 Gbps

SSD-480G-SATA-6G-EV-SFF-sa

Samsung

SFF

480 GB

6 Gbps

SSD-960G-SATA-Ny1351-SFF-7

Seagate

SFF

960 GB

6 Gbps

SSD-480G-SATA-Ny1351-SFF-6

Seagate

SFF

480 GB

6 Gbps

SSD-960G-SATA-PM883-SFF

Samsung

SFF

960 GB

6 Gbps

SSD-3.84T-SATA-PM883-SFF

Samsung

SFF

3.84 TB

6 Gbps

SSD-1.92T-SATA-PM883-SFF

Samsung

SFF

1.92 TB

6 Gbps

 

SAS SSDs

Model

Vendor

Form factor

Capacity

Rate

SSD-3.2T-SAS3-SS530-SFF

WD

SFF

3.2 TB

12 Gbps

SSD-400G-SAS3-SS530-SFF

WD

SFF

400 GB

12 Gbps

SSD-800G-SAS3-SS530-SFF

WD

SFF

800 GB

12 Gbps

SSD-1.6T-SAS3-SS530-SFF

WD

SFF

1.6 TB

12 Gbps

 

NVMe SSDs

Model

Vendor

Form factor

Capacity

Interface

Rate

SSD-375G-NVMe-SFF-i

Intel

SFF

375 GB

PCIe3.0

8 Gbps

SSD-750G-NVMe-SFF-i

Intel

SFF

750 GB

PCIe3.0

8 Gbps

SSD-1T-NVMe-SFF-i-1

Intel

SFF

1 TB

PCIe3.0

8 Gbps

SSD-1.6T-NVMe-EM-SFF-i

Intel

SFF

1.6 TB

PCIe3.0

8 Gbps

SSD-2T-NVMe-SFF-i-1

Intel

SFF

2 TB

PCIe3.0

8 Gbps

SSD-3.2T-NVMe-EM-SFF-i

Intel

SFF

3.2 TB

PCIe3.0

8 Gbps

SSD-4T-NVMe-SFF-i-2

Intel

SFF

4 TB

PCIe3.0

8 Gbps

SSD-6.4T-NVMe-EM-SFF-i

Intel

SFF

6.4 TB

PCIe3.0

8 Gbps

SSD-7.68T-NVMe-EM-SFF-i

Intel

SFF

7.68 TB

PCIe3.0

8 Gbps

SSD-8T-NVMe-SFF-i

Intel

SFF

8 TB

PCIe3.0

8 Gbps

SSD-6.4T-NVMe-EM-SFF-mbl

Memblaze

SFF

6.4 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-EM-SFF-mbl

Memblaze

SFF

3.2 TB

PCIe

8 Gbps

 

NVMe SSD PCIe accelerator modules

Model

Form factor

Capacity

Interface

Rate

Link width

SSD-NVME-375G-P4800X

HHHL

375 GB

PCIe

8 Gbps

×4

SSD-NVME-750G-P4800X

HHHL

750 GB

PCIe

8 Gbps

×4

SSD-NVME-3.2T-PBlaze5

HHHL

3.2 TB

PCIe

8 Gbps

×8

SSD-NVME-6.4T-PBlaze5

HHHL

6.4 TB

PCIe

8 Gbps

×8

SSD-1.6T-NVME-PB516

HHHL

1.6 TB

PCIe

8 Gbps

×8

 

Drive LEDs

The server supports SAS, SATA, and NVMe drives, of which SAS and SATA drives support hot swapping. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.

Figure 159 shows the location of the LEDs on a drive.

Figure 159 Drive LEDs


(1) Fault/UID LED

(2) Present/Active LED

 

To identify the status of a SAS or SATA drive, use Table 23. To identify the status of an NVMe drive, use Table 24.

Table 23 SAS/SATA drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Steady green/Flashing green (4.0 Hz)

A drive failure is predicted. As a best practice, replace the drive before it fails.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and is selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Table 24 NVMe drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Off

The managed hot removal process is completed and the drive is ready for removal.

Flashing amber (4 Hz)

Off

The drive is in the hot insertion process.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Drive configurations and numbering

The storage controller quantities in Table 25, Table 27, and Table 29 apply to all storage controller models.

48SFF server

Table 25 presents the drive configurations available for the 48SFF server and their compatible types of storage controllers and NVMe SSD expander modules. Table 26 shows drive population for the 48SFF server.

These drive configurations use the same drive numbering scheme, and drives with the same number are distinguished by the compute module they reside in, as shown in Figure 160.

Table 25 Drive, storage controller, and NVMe SSD expander configurations (48SFF server)

Drive configuration

Storage controller

NVMe SSD expander

48SFF (48 SFF SAS/SATA drives)

2 × storage controllers

N/A

24SFF (24 SFF SAS/SATA drives)

1 × storage controller

N/A

 

Table 26 Drive population (48SFF server)

Drive configuration

Compute module 1

Compute module 2

48SFF

(48 SFF SAS/SATA drives)

24 SFF SAS/SATA drives

24 SFF SAS/SATA drives

24SFF

(24 SFF SAS/SATA drives)

24 SFF SAS/SATA drives

N/A

 

 

NOTE:

"N/A" indicates that the compute module is not required but a compute module blank must be installed.

 

Figure 160 Drive numbering for 48SFF drive configurations (48SFF server)

 

 

NOTE:

For the location of the compute modules, see "Front panel view of the server."

 

32SFF server

Table 27 presents the drive configurations available for the 32SFF server and their compatible types of storage controllers and NVMe SSD expander modules. Table 28 shows drive population for the 32SFF server.

These drive configurations use the same drive numbering scheme, and drives with the same number are distinguished by the compute module they reside in, as shown in Figure 161.

Table 27 Drive, storage controller, and NVMe SSD expander configurations (32SFF server)

Drive configuration

Storage controller

NVMe SSD expander

32SFF

(32 SFF SAS/SATA drives)

2 × storage controllers

N/A

32SFF

(28 SFF SAS/SATA drives and 4 SFF NVMe drives)

2 × storage controllers

1 × 4-port NVMe SSD expander module

32SFF

(24 SFF SAS/SATA drives and 8 SFF NVMe drives)

1 × storage controller

1 × 8-port NVMe SSD expander module

28SFF

(28 SFF SAS/SATA drives)

2 × storage controllers

N/A

28SFF (24 SFF SAS/SATA drives and 4 SFF NVMe drives)

1 × storage controller

1 × 4-port NVMe SSD expander module

 

Table 28 Drive population (32SFF server)

Drive configuration

Drive cage bay 1 in compute module 1

Drive cage bay 2 in compute module 1

Compute module 2

32SFF

(32 SFF SAS/SATA drives)

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

24 SFF SAS/SATA drives

32SFF

(28 SFF SAS/SATA drives and 4 SFF NVMe drives)

4 SFF SAS/SATA drives

4 SFF NVMe drives

24 SFF SAS/SATA drives

32SFF

(24 SFF SAS/SATA drives and 8 SFF NVMe drives)

4 SFF NVMe drives

4 SFF NVMe drives

24 SFF SAS/SATA drives

28SFF

(28 SFF SAS/SATA drives)

4 SFF SAS/SATA drives

/

24 SFF SAS/SATA drives

28SFF

(24 SFF SAS/SATA drives and 4 SFF NVMe drives)

/

4 SFF NVMe drives

24 SFF SAS/SATA drives

 

 

NOTE:

·      To install 4 SFF SAS/SATA drives, a 4SFF SAS/SATA drive backplane is required. To install 4 SFF NVMe drives, a 4SFF NVMe drive backplane is required.

·      "/" indicates no drives are required but drive blanks must be installed.

 

Figure 161 Drive numbering for the 32SFF configuration (32SFF server)

 

 

NOTE:

For the location of the compute modules, see "Front panel view of the server." For the location of the drive cage bays, see "Front panel view of a compute module."

 

16SFF server

Table 29 presents the drive configurations available for the 16SFF server and their compatible types of storage controllers and NVMe SSD expander modules. Table 30 shows drive population for the 16SFF server.

These drive configurations use the same drive numbering scheme, and drives with the same number are distinguished by the compute module they reside in, as shown in Figure 162.

Table 29 Drive, storage controller, and NVMe SSD expander configurations (16SFF server)

Drive configuration

Storage controller

NVMe SSD expander

16SFF

(16 SFF SAS/SATA drives)

2 × storage controllers

N/A

16SFF

(12 SFF SAS/SATA drives and 4 SFF NVMe drives)

2 × storage controllers

1 × 4-port NVMe SSD expander module

16SFF

(8 SFF SAS/SATA drives and 8 SFF NVMe drives)

1 × storage controller

1 × 8-port NVMe SSD expander module

16SFF

(4 SFF SAS/SATA drives and 12 SFF NVMe drives)

1 × storage controller

·         1 × 4-port NVMe SSD expander module

·         1 × 8-port NVMe SSD expander module

16SFF

(16 SFF NVMe drives)

N/A

2 × 8-port NVMe SSD expander modules

12SFF (12 SFF SAS/SATA drives)

2 × storage controllers

N/A

12SFF

(8 SFF SAS/SATA drives and 4 SFF NVMe drives)

1 × storage controller

1 × 4-port NVMe SSD expander module

12SFF

(4 SFF SAS/SATA drives and 8 SFF NVMe drives)

1 × storage controller

2 × 4-port NVMe SSD expander modules

12SFF

(4 SFF SAS/SATA drives and 8 SFF NVMe drives)

1 × storage controller

1 × 8-port NVMe SSD expander module

12SFF

(12 SFF NVMe drives)

N/A

·         1 × 4-port NVMe SSD expander module

·         1 × 8-port NVMe SSD expander module

8SFF

(8 SFF SAS/SATA drives)

1 × storage controller

N/A

8SFF

(4 SFF SAS/SATA drives and 4 SFF NVMe drives)

1 × storage controller

1 × 4-port NVMe SSD expander module

8SFF

(8 SFF NVMe drives)

N/A

2 × 4-port NVMe SSD expander modules

8SFF

(8 SFF NVMe drives)

N/A

1 × 8-port NVMe SSD expander module

4SFF

(4 SFF SAS/SATA drives)

1 × storage controller

N/A

4SFF

(4 SFF NVMe drives)

N/A

1 × 4-port NVMe SSD expander module

 

Table 30 Drive population (16SFF server)

Drive configuration

Drive cage bay 1 in compute module 1

Drive cage bay 2 in compute module 1

Drive cage bay 3 in compute module 2

Drive cage bay 4 in compute module 2

16SFF

(16 SFF SAS/SATA drives)

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

16SFF

(12 SFF SAS/SATA drives and 4 SFF NVMe drives)

4 SFF SAS/SATA drives

4 SFF NVMe drives

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

16SFF

(8 SFF SAS/SATA drives and 8 SFF NVMe drives)

4 SFF NVMe drives

4 SFF NVMe drives

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

16SFF

(4 SFF SAS/SATA drives and 12 SFF NVMe drives)

4 SFF SAS/SATA drives

4 SFF NVMe drives

4 SFF NVMe drives

4 SFF NVMe drives

16SFF (16 SFF NVMe drives)

4 SFF NVMe drives

4 SFF NVMe drives

4 SFF NVMe drives

4 SFF NVMe drives

12SFF (12 SFF SAS/SATA drives)

4 SFF SAS/SATA drives

/

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

12SFF

(8 SFF SAS/SATA drives and 4 SFF NVMe drives)

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

4 SFF NVMe drives

/

12SFF

(4 SFF SAS/SATA drives and 8 SFF NVMe drives)

/

4 SFF NVMe drives

4 SFF SAS/SATA drives

4 SFF NVMe drives

12SFF

(4 SFF SAS/SATA drives and 8 SFF NVMe drives)

4 SFF NVMe drives

4 SFF NVMe drives

4 SFF SAS/SATA drives

/

12SFF

(12 SFF NVMe drives)

/

4 SFF NVMe drives

4 SFF NVMe drives

4 SFF NVMe drives

8SFF

(8 SFF SAS/SATA drives)

4 SFF SAS/SATA drives

4 SFF SAS/SATA drives

N/A

8SFF

(4 SFF SAS/SATA drives and 4 SFF NVMe drives)

4 SFF SAS/SATA drives

4 SFF NVMe drives

N/A

8SFF

(8 SFF NVMe drives)

/

4 SFF NVMe drives

/

4 SFF NVMe drives

8SFF

(8 SFF NVMe drives)

4 SFF NVMe drives

4 SFF NVMe drives

N/A

4SFF

(4 SFF SAS/SATA drives)

4 SFF SAS/SATA drives

/

N/A

4SFF

(4 SFF NVMe drives)

/

4 SFF NVMe drives

N/A

 

 

NOTE:

·      To install 4 SFF SAS/SATA drives, a 4SFF SAS/SATA drive backplane is required. To install 4 SFF NVMe drives, a 4SFF NVMe drive backplane is required.

·      "/" indicates no drives are required but drive blanks must be installed.

·      "N/A" indicates that the compute module is not required but a compute module blank must be installed.

 

Figure 162 Drive numbering for the 16SFF drive configuration (16SFF server)

 

 

NOTE:

For the location of the compute modules, see "Front panel view of the server." For the location of the drive cage bays, see "Front panel view of a compute module."

 

PCIe modules

Typically, the PCIe modules are available in the following standard form factors:

·          LP—Low profile.

·          FHHL—Full height and half length.

·          FHFL—Full height and full length.

·          HHHL—Half height and half length.

·          HHFL—Half height and full length.

Some PCIe modules require PCIe I/O resources. Make sure the number of such installed PCIe modules does not exceed eleven.

Storage controllers

For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages. This module provides a flash card and a supercapacitor. When a system power failure occurs, the supercapacitor provides power for a minimum of 20 seconds. During this interval, the storage controller can transfer data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data. If the storage controller contains a built-in flash card, you can order only a supercapacitor.

The following storage controllers require PCIe I/O resources.

HBA-LSI-9300-8i-A1-X

Item

Specifications

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 or 6 Gbps SATA 3.0

PCIe interface

PCIe3.0 ×8

RAID levels

Not supported

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

The controller supports a maximum of 24 drives.

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

HBA-LSI-9311-8i

Item

Specifications

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 or 6 Gbps SATA 3.0

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 10, 1E

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

The controller supports a maximum of 24 drives.

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

HBA-LSI-9440-8i

Item

Specifications

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 or 6 Gbps SATA 3.0

PCIe interface

PCIe3.1 ×8

RAID levels

0, 1, 5, 10, 50

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

The controller supports a maximum of 24 drives.

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

RAID-LSI-9361-8i(1G)-A1-X

Item

Specifications

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 or 6 Gbps SATA 3.0

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

1 GB internal cache module (DDR3-1866 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

The controller supports a maximum of 24 drives.

Power fail safeguard module

BAT-LSI-G2-4U-B-X

The power fail safeguard module is optional.

Built-in flash card

Not supported

Supercapacitor connector

Not supported

The supercapacitor connector is on the flash card of the power fail safeguard module.

Firmware upgrade

Online upgrade

 

RAID-LSI-9361-8i(2G)-1-X

Item

Specifications

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 or 6 Gbps SATA 3.0

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

2 GB internal cache module (DDR3-1866 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

The controller supports a maximum of 24 drives.

Power fail safeguard module

BAT-LSI-G2-4U-B-X

The power fail safeguard module is optional.

Built-in flash card

Not supported

Supercapacitor connector

Not supported

The supercapacitor connector is on the flash card of the power fail safeguard module.

Firmware upgrade

Online upgrade

 

RAID-LSI-9460-8i(2G)

Item

Specifications

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 or 6 Gbps SATA 3.0

PCIe interface

PCIe3.1 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

2 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

The controller supports a maximum of 24 drives.

Power fail safeguard module

BAT-LSI-G3-4U-B

The power fail safeguard module is optional.

Built-in flash card

Supported

Supercapacitor connector

Supported

Firmware upgrade

Online upgrade

 

RAID-LSI-9460-8i(4G)

Item

Specifications

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 or 6 Gbps SATA 3.0

PCIe interface

PCIe3.1 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

4 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

The controller supports a maximum of 24 drives.

Power fail safeguard module

BAT-LSI-G3-4U-B

The power fail safeguard module is optional.

Built-in flash card

Supported

Supercapacitor connector

Supported

Firmware upgrade

Online upgrade

 

NVMe SSD expander modules

Model

Specifications

EX-4NVMe-B

4-port NVMe SSD expander module, which supports a maximum of 4 NVMe SSD drives.

EX-8NVMe-B

8-port NVMe SSD expander module, which supports a maximum of 8 NVMe SSD drives.

 

GPU modules

The GPU-V100 and GPU-V100-32G modules require PCIe I/O resources.

GPU-P40-X

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

FH3/4FL, dual-slot wide

Maximum power consumption

250 W

Memory size

24 GB GDDR5

Memory bus width

384 bits

Memory bandwidth

346 GB/s

Power connector

Available

 

GPU-T4

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

LP, single-slot wide

Maximum power consumption

70 W

Memory size

16 GB GDDR6

Memory bus width

256 bits

Memory bandwidth

320 GB/s

Power connector

N/A

 

GPU-V100

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

FH3/4FL, dual-slot wide

Maximum power consumption

250 W

Memory size

16 GB HBM2

Memory bus width

4096 bits

Memory bandwidth

900 GB/s

Power connector

Available

 

GPU-V100-32G

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

FH3/4FL, dual-slot wide

Maximum power consumption

250 W

Memory size

32 GB HBM2

Memory bus width

4096 bits

Memory bandwidth

900 GB/s

Power connector

Available

 

GPU module and riser card compatibility

Riser card

PCIe riser connector or bay

PCIe slot

Available GPU modules

RS-GPU-R6900-G3

Connector 0 in a compute module

Slot 1

·         GPU-P40-X

·         GPU-T4

·         GPU-V100

·         GPU-V100-32G

RS-4*FHHL-G3

Bay 1 or 3 at the server rear

Slot 2

GPU-T4

Slot 3

Not supported

Slot 4

Not supported

Slot 6

GPU-T4

 

PCIe Ethernet adapters

In addition to the PCIe Ethernet adapters, the server also supports mLOM Ethernet adapters (see "mLOM Ethernet adapters").

The NIC-GE-4P-360T-B2-1-X, CNA-10GE-2P-560F-B2-1-X, NIC-X540-T2-T-10Gb-2P, NIC-XXV710-F-B-25Gb-2P, and NIC-957454A4540C-B-100G-1P PCIe Ethernet adapters require PCIe I/O resources.

Table 31 PCIe Ethernet adapter specifications

Model

Form factor

Ports

Connector

Data rate

Bus type

NCSI

CNA-10GE-2P-560F-B2-1-X

LP

2

SFP+

10 Gbps

PCIe2.0 ×8

Not supported

CNA-560T-B2-10Gb-2P-1-X

LP

2

RJ45

10 Gbps

PCIe3.0 ×4

Not supported

IB-MCX354A-FCBT-56/40Gb-2P-X

LP

2

QSFP

40/56 Gbps

PCIe3.0 ×8

Not supported

IB-MCX354A-FCBT-56/40Gb-2P-1

LP

2

QSFP

40/56 Gbps

PCIe3.0 ×8

Not supported

IB-MCX555A-ECAT-100Gb-1P-1

LP

1

QSFP28

100 Gbps

PCIe3.0 ×16

Not supported

IB-MCX453A-FCAT-56/40Gb-1P-1

LP

1

QSFP28

56 Gbps

PCIe3.0 ×8

Not supported

NIC-10GE-2P-520F-B2-1-X

LP

2

SFP+

10 Gbps

PCIe3.0 ×8

Not supported

NIC-10GE-2P-530F-B2-1-X

LP

2

SFP+

10 Gbps

PCIe2.0 ×8

Not supported

NIC-620F-B2-25Gb-2P-1-X

LP

2

SFP28

25 Gbps

PCIe3.0 ×8

Supported

NIC-957454A4540C-B-100G-1P

LP

1

QSFP28

100 Gbps

PCIe3.0 ×16

Not supported

NIC-BCM957302-F-B-10Gb-2P

LP

2

SFP+

10 Gbps

PCIe3.0 ×8

Not supported

NIC-BCM957414-F-B-25Gb-2P

LP

2

SFP28

25 Gbps

PCIe3.0 ×8

Not supported

NIC-BCM957416-T-B-10Gb-2P

LP

2

SFP+

10 Gbps

PCIe3.0 ×8

Not supported

NIC-CAVIUM-F-B-25Gb-2P

LP

2

SFP28

25 Gbps

PCIe3.0 ×8

Not supported

NIC-GE-4P-360T-B2-1-X

LP

4

RJ-45

10/100/1000 Mbps

PCIe2.0 ×4

Not supported

NIC-MCX4121A-F-B-10Gb-2P

LP

2

SFP28

10 Gbps

PCIe3.0 ×8

Not supported

NIC-MCX415A-F-B-100Gb-1P

LP

1

QSFP28

100 Gbps

PCIe3.0 ×16

Not supported

NIC-MCX4121A-F-B-25Gb-2P

LP

2

SFP28

25 Gbps

PCIe3.0 ×8

Not supported

NIC-X540-T2-T-10Gb-2P

LP

2

RJ-45

10 Gbps

PCIe2.0 ×8

Not supported

NIC-X710DA2-F-B-10Gb-2P-2

LP

2

SFP+

10 Gbps

PCIe3.0 ×8

Not supported

NIC-X710DA4-F-B-10Gb-4P

LP

4

SFP+

10 Gbps

PCIe3.0 ×8

Not supported

NIC-XXV710-F-B-25Gb-2P

LP

2

SFP28

25 Gbps

PCIe3.0 ×8

Not supported

IB-MCX354A-FCBT-56Gb/40Gb-2P

LP

2

QSFP

40/56 Gbps

PCIe3.0 ×8

Not supported

NIC-OPA-100Gb-1P

LP

1

QSFP28

100 Gbps

PCIe3.0 ×16

Not supported

 

FC HBAs

FC-HBA-QLE2560-8Gb-1P-1-X, FC-HBA-QLE2562-8Gb-2P-1-X, FC-HBA-QLE2690-16Gb-1P-1-X, and FC-HBA-QLE2692-16Gb-2P-1-X FC HBAs require PCIe I/O resources.

FC-HBA-QLE2560-8Gb-1P-1-X

Item

Specifications

Form factor

LP

Ports

1

Connector

SFP+

Data rate

8 Gbps

 

FC-HBA-QLE2562-8Gb-2P-1-X

Item

Specifications

Form factor

LP

Ports

2

Connector

SFP+

Data rate

8 Gbps

 

FC-HBA-QLE2690-16Gb-1P-1-X

Item

Specifications

Form factor

LP

Ports

1

Connector

SFP+

Data rate

16 Gbps

 

FC-HBA-QLE2692-16Gb-2P-1-X

Item

Specifications

Form factor

LP

Ports

2

Connector

SFP+

Data rate

16 Gbps

 

FC-HBA-QLE2740-32Gb-1P

Item

Specifications

Form factor

LP

Ports

1

Connector

SFP+

Data rate

32 Gbps

 

FC-HBA-QLE2742-32Gb-2P

Item

Specifications

Form factor

LP

Ports

2

Connector

SFP+

Data rate

32 Gbps

 

FC-HBA-LPe32000-32Gb-1P-X

Item

Specifications

Form factor

LP

Ports

1

Connector

SFP+

Data rate

32 Gbps

 

FC-HBA-LPe32002-32Gb-2P-X

Item

Specifications

Form factor

LP

Ports

2

Connector

SFP+

Data rate

32 Gbps

 

HBA-8Gb-LPe12000-1P-1-X

Item

Specifications

Form factor

LP

Ports

1

Connector

SFP+

Data rate

8 Gbps

 

HBA-8Gb-LPe12002-2P-1-X

Item

Specifications

Form factor

LP

Ports

2

Connector

SFP+

Data rate

8 Gbps

 

HBA-16Gb-LPe31000-1P-1-X

Item

Specifications

Form factor

LP

Ports

1

Connector

SFP+

Data rate

16 Gbps

 

HBA-16Gb-LPe31002-2P-1-X

Item

Specifications

Form factor

LP

Ports

2

Connector

SFP+

Data rate

16 Gbps

 

mLOM Ethernet adapters

In addition to mLOM Ethernet adapters, the server also supports PCIe Ethernet adapters (see "PCIe Ethernet adapters").

By default, port 1 on an mLOM Ethernet adapter acts as an HDM shared network port.

NIC-GE-4P-360T-L3-M

Item

Specifications

Form factor

LP

Ports

4

Connector

RJ-45

Data rate

1000 Mbps

Bus type

1000BASE-X ×4

NCSI

Supported

 

NIC-10GE-2P-560T-L2-M

Item

Specifications

Form factor

LP

Ports

2

Connector

RJ-45

Data rate

1/10 Gbps

Bus type

10G-KR ×2

NCSI

Supported

 

NIC-10GE-2P-560F-L2-M

Item

Specifications

Form factor

LP

Ports

2

Connector

SFP+

Data rate

10 Gbps

Bus type

10G-KR ×2

NCSI

Supported

 

Riser cards

To expand the server with PCIe modules, install riser cards on the PCIe riser connectors or in riser bays.

The PCIe slots in a riser card are numbered differently depending on the riser card model and the PCIe riser connector or bay that holds the riser card.

Riser card guidelines

Each PCIe slot in a riser card can supply a maximum of 75 W of power to the PCIe module. You must connect a separate power cord to the PCIe module if it requires more than 75 W of power. For example, the GPU-P40-X (250 W maximum power consumption) requires its GPU power cord, while the GPU-T4 (70 W maximum power consumption) can be powered from the slot alone.

If a processor is faulty or absent, the corresponding PCIe slots are unavailable.

RS-FHHL-G3

Item

Specifications

PCIe riser connector

Connector 0 in a compute module

PCIe slots

Slot 1: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2 of the compute module

NOTE:

The numbers in parentheses represent link widths.

Form factors of PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 163 RS-FHHL-G3 riser card

(1) PCIe slot 1

(2) GPU module power connector

 

RS-GPU-R6900-G3

Item

Specifications

PCIe riser connector

Connector 0 in a compute module

PCIe slots

Slot 1: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2 of the compute module

NOTE:

The numbers in parentheses represent link widths.

Form factors of PCIe modules

FHHL

NOTE:

The riser card supports double-wide GPU modules.

Maximum power supplied per PCIe slot

75 W

 

Figure 164 RS-GPU-R6900-G3 riser card

(1) PCIe slot 1

(2) GPU module power connector

 

RS-4*FHHL-G3

Item

Specifications

PCIe riser bay

Bay 1 or 3 at the server rear

PCIe slots

·         PCIe riser bay 1:

¡  Slot 2: PCIe3.0 ×16 (8, 4, 2, 1) for processor 1 in compute module 1

¡  Slot 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1 in compute module 1

¡  Slot 4: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1 in compute module 2

¡  Slot 6: PCIe3.0 ×16 (8, 4, 2, 1) for processor 1 in compute module 2

·         PCIe riser bay 3:

¡  Slot 2: PCIe3.0 ×16 (8, 4, 2, 1) for processor 2 in compute module 1

¡  Slot 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2 in compute module 1

¡  Slot 4: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2 in compute module 2

¡  Slot 6: PCIe3.0 ×16 (8, 4, 2, 1) for processor 2 in compute module 2

NOTE:

The numbers in parentheses represent link widths.

Form factors of PCIe modules

FHHL

NOTE:

Slots 2 and 6 of the riser card support single-wide GPU modules.

Maximum power supplied per PCIe slot

75 W

 

Figure 165 RS-4*FHHL-G3 riser card

(1) PCIe slot 2

(2) PCIe slot 3

(3) PCIe slot 4

(4) PCIe slot 6

(5) mLOM Ethernet adapter connector

(6) Supercapacitor connector 2

(7) NCSI connector

(8) SAS port B2 (×4 SAS ports)

(9) SAS port B1 (×4 SAS ports)

(10) SAS port A2 (×4 SAS ports)

(11) SAS port A1 (×4 SAS ports)

(12) Supercapacitor connector 1

 

 

NOTE:

PCIe slot 4 is unavailable if an mLOM Ethernet adapter is installed.

 

RS-6*FHHL-G3-1

Item

Specifications

PCIe riser bay

Bay 1 or 3 at the server rear

PCIe slots

·         PCIe riser bay 1:

¡  Slots 1 through 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1 in compute module 1

¡  Slots 4 through 6: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1 in compute module 2

·         PCIe riser bay 3:

¡  Slots 1 through 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2 in compute module 1

¡  Slots 4 through 6: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2 in compute module 2

NOTE:

The numbers in parentheses represent link widths.

Form factors of PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 166 RS-6*FHHL-G3-1 riser card

(1) PCIe slot 1

(2) PCIe slot 2

(3) PCIe slot 3

(4) PCIe slot 4

(5) PCIe slot 5

(6) PCIe slot 6

(7) mLOM Ethernet adapter connector

(8) Supercapacitor connector 2

(9) NCSI connector

(10) SAS port B2 (×4 SAS ports)

(11) SAS port B1 (×4 SAS ports)

(12) SAS port A2 (×4 SAS ports)

(13) SAS port A1 (×4 SAS ports)

(14) Supercapacitor connector 1

 

 

NOTE:

PCIe slot 4 is unavailable if an mLOM Ethernet adapter is installed.

 

RS-6*FHHL-G3-2

Item

Specifications

PCIe riser bay

Bay 2 at the server rear

PCIe slots

·         Slot 1: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2 in compute module 1

·         Slot 2: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1 in compute module 1

·         Slots 3, 4, and 6: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1 in compute module 2

·         Slot 5: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2 in compute module 2

NOTE:

The numbers in parentheses represent link widths.

Form factors of PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 167 RS-6*FHHL-G3-2 riser card

(1) PCIe slot 1

(2) PCIe slot 2

(3) PCIe slot 3

(4) PCIe slot 4

(5) PCIe slot 5

(6) PCIe slot 6

 

Riser card and system board port mapping relationship

Riser card 0: N/A

Riser card 1:

·         SAS port A1 maps to SAS port A1 on compute module 1
·         SAS port A2 maps to SAS port A2 on compute module 1
·         SAS port B1 maps to SAS port A1 on compute module 2
·         SAS port B2 maps to SAS port A2 on compute module 2
·         Supercapacitor port 1 maps to supercapacitor port 1 on compute module 1
·         Supercapacitor port 2 maps to supercapacitor port 1 on compute module 2

Riser card 2: N/A

Riser card 3:

·         SAS port A1 maps to SAS port B1 on compute module 1
·         SAS port A2 maps to SAS port B2 on compute module 1
·         SAS port B1 maps to SAS port B1 on compute module 2
·         SAS port B2 maps to SAS port B2 on compute module 2
·         Supercapacitor port 1 maps to supercapacitor port 2 on compute module 1
·         Supercapacitor port 2 maps to supercapacitor port 2 on compute module 2

 

 

NOTE:

·      For more information about the SAS and supercapacitor ports on a riser card, see "Riser cards."

·      For more information about the SAS and supercapacitor ports on the system board, see "Main board of a compute module."

 

Fan modules

The server must be configured with six hot swappable fan modules, each of which includes two fans. Figure 168 shows the layout of the fan modules in the chassis.

The fans support N+1 redundancy.

During POST and system operation, HDM powers off the server if the temperature detected by any sensor in the server reaches the critical threshold. The server is powered off directly if the temperature of a key component, such as a processor, exceeds its upper threshold.

Figure 168 Fan module layout

 

Air baffles

Compute module air baffles

Each compute module comes with two bilateral air baffles (a right air baffle and a left air baffle). You must install a low mid air baffle, high mid air baffle, or GPU module air baffle as required.

Table 32 lists air baffles available for a compute module and their installation locations and usage scenarios.

Table 32 Compute module air baffles

·         High mid air baffle: Installed above the DIMMs between the two processors. Use it when no riser card is installed in the compute module.

·         Low mid air baffle: Installed above the DIMMs between the two processors. Use it when a riser card is installed in the compute module. If the riser card carries a GPU module, install a GPU module air baffle instead of a low mid air baffle.

·         Bilateral air baffle: Installed above the DIMMs at the right of processor 1 or the DIMMs at the left of processor 2.

·         GPU module air baffle: Installed above the DIMMs between the two processors. Use it when a GPU module is installed in the compute module.

 

 

NOTE:

For more information about the air baffle locations, see "Main board components."

 

Power supply air baffle

The server comes with one power supply air baffle installed over the fan modules for heat dissipation of the power supplies. For more information about the air baffle location, see "Fan modules."

Figure 169 Power supply air baffle

 

Rear riser card air baffles

Each riser card at the server rear comes with a riser card air baffle installed over the four SAS connectors. An RS-4*FHHL-G3 riser card also comes with a GPU module air baffle. For more information about the air baffle location, see "Riser cards."

Table 33 Rear riser card air baffles

·         Riser card air baffle: Installed at the left of the connectors on each rear riser card. Provides ventilation channels for the rear riser cards.

·         GPU module air baffle: Installed between the NCSI connector and the mLOM Ethernet adapter connector. Provides ventilation channels for an RS-4*FHHL-G3 riser card.

 

Power supplies

The power supplies have an overtemperature protection mechanism. A power supply stops working when an overtemperature occurs and automatically recovers when the overtemperature condition is removed.

800 W power supply

Item

Specifications

Model

PSR800-12A

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         10.0 A @ 100 VAC to 240 VAC

·         4.0 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

94%, 80 Plus platinum level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

N+N redundancy

Hot swappable

Yes

Cold backup

Yes

 

800 W high-voltage power supply

Item

Specifications

Model

PSR800-12AHD

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz

·         180 VDC to 400 VDC (240 to 380 HVDC power source)

Maximum rated input current

·         10.0 A @ 100 VAC to 240 VAC

·         3.8 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

94%, 80 Plus platinum level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

N+N redundancy

Hot swappable

Yes

Cold backup

Yes

 

1200 W power supply

Item

Specifications

Model

PSR1200-12A

Rated input voltage range

·         100 VAC to 127 VAC @ 50/60 Hz (1000 W)

·         200 VAC to 240 VAC @ 50/60 Hz (1200 W)

·         192 VDC to 288 VDC (240 HVDC power source) (1200 W)

Maximum rated input current

·         12.0 A @ 100 VAC to 240 VAC

·         6.0 A @ 240 VDC

Maximum rated output power

1200 W

Efficiency at 50 % load

94%, 80 Plus platinum level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

N+N redundancy

Hot swappable

Yes

Cold backup

Yes

 

1600 W power supply

Item

Specifications

Model

PSR850-12A

Rated input voltage range

·         200 VAC to 240 VAC @ 50/60 Hz

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         9.5 A @ 200 VAC to 240 VAC

·         8.0 A @ 240 VDC

Maximum rated output power

1600 W

Efficiency at 50 % load

94%, 80 Plus platinum level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

N+N redundancy

Hot swappable

Yes

Cold backup

Yes

800 W –48VDC power supply

Item

Specifications

Model

DPS-800W-12A-48V

Rated input voltage range

–48 VDC to –60 VDC

Maximum rated input current

20.0 A @ –48 VDC to –60 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

92%

Temperature requirements

·         Operating temperature: 0°C to 55°C (32°F to 131°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

850 W high-efficiency Platinum power supply

Item

Specifications

Model

DPS-850W-12A

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         10.0 A @ 100 VAC to 240 VAC

·         4.4 A @ 240 VDC

Maximum rated output power

850 W

Efficiency at 50 % load

94%, 80 Plus platinum level

Temperature requirements

·         Operating temperature: 0°C to 55°C (32°F to 131°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 85%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

N+N redundancy

Hot swappable

Yes

Cold backup

Yes

 

Expander modules

Model

Specifications

DSD-EX-A-X

Dual SD card extended module (supports RAID level 1)

BP-4SFF

4SFF SAS/SATA drive backplane

BP-4SFF-NVMe

4SFF NVMe drive backplane

 

Diagnostic panels

Diagnostic panels provide diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the diagnostic panels in conjunction with the event log generated in HDM.

 

 

NOTE:

A diagnostic panel displays only one component failure at a time. When multiple component failures exist, the diagnostic panel displays all these failures one by one at intervals of 4 seconds.

 

Diagnostic panel specifications

Model

Specifications

SD-SFF-A

SFF diagnostic panel

 

Diagnostic panel view

Figure 170 shows the error code and LEDs on a diagnostic panel.

Figure 170 Diagnostic panel view

(1) Error code

(2) LEDs

 

For more information about the LEDs and error codes, see "LEDs."

LEDs

The server is operating correctly when the error code is 00 and all LEDs on the diagnostic panel are off.

POST LED

LED status

Error code

Description

Steady green

Code for the current POST phase (in the range of 00 to 99)

The server is performing POST without detecting any error.

Flashing red

Code for the current POST phase (in the range of 00 to 99)

The POST process encountered an error and stopped in the displayed phase.

 

TEMP LED

LED status

Error code

Description

Flashing red

Temperature sensor ID

A severe temperature warning is present on the component monitored by the sensor.

This warning might occur because the temperature of the component has exceeded the upper threshold or dropped below the lower threshold.

 

CAP LED

LED status

Error code

Description

Flashing amber

01

The system power consumption has exceeded the power cap value.

 

Component LEDs

An alarm is present if a component LED has one of the following behaviors:

·          Flashing amber (0.5 Hz)—A predictive alarm has occurred.

·          Flashing amber (1 Hz)—A general alarm has occurred.

·          Flashing red (1 Hz)—A severe alarm has occurred.

Use Table 34 to identify the faulty item if a component LED has one of those behaviors. To obtain records of component status changes, use the event log in HDM. For information about using the event log, see HDM online help.

Table 34 LED, error code, and faulty item matrix

BRD LED:

·         01: PDB
·         02: Management module
·         03: Midplane
·         11: Main board of compute module 1
·         12: Drive backplane for drive cage bay 2 of compute module 1
·         13: Drive backplane for drive cage bay 1 of compute module 1
·         21: Main board of compute module 2
·         22: Drive backplane for drive cage bay 4 of compute module 2
·         23: Drive backplane for drive cage bay 3 of compute module 2
·         91: mLOM Ethernet adapter

NOTE:

If the error code field alternately displays 11 or 21 and another code, first replace the faulty item indicated by the other code. If the issue persists, replace the main board.

CPU (processor) LED:

·         01: Processor 1 in compute module 1
·         02: Processor 2 in compute module 1
·         03: Processor 1 in compute module 2
·         04: Processor 2 in compute module 2

DIMM LED:

·         A1 through A9, AA, Ab, or AC: DIMMs in compute module 1. Codes A1 through A9 indicate the DIMMs in slots A1 through A9, and codes AA, Ab, and AC indicate the DIMMs in slots A10, A11, and A12, respectively.
·         b1 through b9, bA, bb, or bC: DIMMs in compute module 1. Codes b1 through b9 indicate the DIMMs in slots B1 through B9, and codes bA, bb, and bC indicate the DIMMs in slots B10, B11, and B12, respectively.
·         C1 through C9, CA, Cb, or CC: DIMMs in compute module 2. Codes C1 through C9 indicate the DIMMs in slots A1 through A9, and codes CA, Cb, and CC indicate the DIMMs in slots A10, A11, and A12, respectively.
·         d1 through d9, dA, db, or dC: DIMMs in compute module 2. Codes d1 through d9 indicate the DIMMs in slots B1 through B9, and codes dA, db, and dC indicate the DIMMs in slots B10, B11, and B12, respectively.

HDD LED:

·         00 through 23: Relevant drive in compute module 1 (24SFF)
·         30 through 53: Relevant drive in compute module 2 (24SFF)
·         00 through 07: Relevant drive in compute module 1 (8SFF)
·         10 through 17: Relevant drive in compute module 2 (8SFF)

PCIE LED:

·         01: PCIe module in PCIe slot 1 of riser card 0 in compute module 1
·         03: PCIe module in PCIe slot 1 of riser card 0 in compute module 2
·         11 through 16: PCIe modules in PCIe slots 1 through 6 of riser card 1 at the server rear
·         21 through 26: PCIe modules in PCIe slots 1 through 6 of riser card 2 at the server rear
·         31 through 36: PCIe modules in PCIe slots 1 through 6 of riser card 3 at the server rear
·         09: PCIe uplink between the processors and the PCH for the mLOM Ethernet adapter

PSU LED:

·         01 through 04: Power supplies 1 through 4

FAN LED:

·         01 through 06: Fan modules 1 through 6

VRD LED:

·         01: PDB P5V voltage
·         02: PDB P3V3_STBY voltage
·         03: Management module P1V05_PCH_STBY voltage
·         04: Management module PVNN_PCH_STBY voltage
·         05: Management module P1V8_PCH_STBY voltage
·         60: Compute module 1 HPMOS voltage
·         61: Compute module 1 PVCCIO_CPU1 voltage
·         62: Compute module 1 PVCCIN_CPU1 voltage
·         63: Compute module 1 PVCCSA_CPU1 voltage
·         64: Compute module 1 PVCCIO_CPU2 voltage
·         65: Compute module 1 PVCCIN_CPU2 voltage
·         66: Compute module 1 PVCCSA_CPU2 voltage
·         67: Compute module 1 VDDQ/VPP_CPU1_ABC voltage
·         68: Compute module 1 VDDQ/VPP_CPU1_DEF voltage
·         69: Compute module 1 VTT_CPU1_ABC voltage
·         6A: Compute module 1 VTT_CPU1_DEF voltage
·         6b: Compute module 1 VDDQ/VPP_CPU2_ABC voltage
·         6C: Compute module 1 VDDQ/VPP_CPU2_DEF voltage
·         6d: Compute module 1 VTT_CPU2_ABC voltage
·         6E: Compute module 1 VTT_CPU2_DEF voltage
·         70: Compute module 2 HPMOS voltage
·         71: Compute module 2 PVCCIO_CPU1 voltage
·         72: Compute module 2 PVCCIN_CPU1 voltage
·         73: Compute module 2 PVCCSA_CPU1 voltage
·         74: Compute module 2 PVCCIO_CPU2 voltage
·         75: Compute module 2 PVCCIN_CPU2 voltage
·         76: Compute module 2 PVCCSA_CPU2 voltage
·         77: Compute module 2 VDDQ/VPP_CPU1_ABC voltage
·         78: Compute module 2 VDDQ/VPP_CPU1_DEF voltage
·         79: Compute module 2 VTT_CPU1_ABC voltage
·         7A: Compute module 2 VTT_CPU1_DEF voltage
·         7b: Compute module 2 VDDQ/VPP_CPU2_ABC voltage
·         7C: Compute module 2 VDDQ/VPP_CPU2_DEF voltage
·         7d: Compute module 2 VTT_CPU2_ABC voltage
·         7E: Compute module 2 VTT_CPU2_DEF voltage

 

 

NOTE:

·      The term "CPU" in this table refers to processors.

·      For the location of riser cards at the server rear, see "Riser cards."
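
The DIMM error codes in Table 34 encode both the compute module and the DIMM slot. The following shell sketch is an illustration only and is not part of this guide; the function name decode_dimm_code is hypothetical. It shows how a two-character code maps to a slot according to the table:

decode_dimm_code() {
    # $1 is the two-character code shown next to the DIMM LED, for example Ab.
    local bank=${1:0:1} pos=${1:1:1} module slot
    case "$bank" in
        A) module=1; slot=A ;;
        b) module=1; slot=B ;;
        C) module=2; slot=A ;;
        d) module=2; slot=B ;;
        *) echo "unknown code: $1"; return 1 ;;
    esac
    case "$pos" in
        [1-9]) echo "Compute module $module, DIMM slot $slot$pos" ;;
        A)     echo "Compute module $module, DIMM slot ${slot}10" ;;
        b)     echo "Compute module $module, DIMM slot ${slot}11" ;;
        C)     echo "Compute module $module, DIMM slot ${slot}12" ;;
        *)     echo "unknown code: $1"; return 1 ;;
    esac
}
decode_dimm_code Ab    # prints: Compute module 1, DIMM slot A11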

 

Fiber transceiver modules

·         SFP-25G-SR-MM850-1-X: 850 nm central wavelength, LC connector, 100 m (328.08 ft) max transmission distance
·         SFP-XG-SX-MM850-A1-X: 850 nm central wavelength, LC connector, 300 m (984.25 ft) max transmission distance
·         SFP-XG-SX-MM850-E1-X: 850 nm central wavelength, LC connector, 300 m (984.25 ft) max transmission distance
·         QSFP-100G-SR4-MM850: 850 nm central wavelength, MPO connector, 100 m (328.08 ft) max transmission distance
·         SFP-XG-LX-SM1310-E: 1310 nm central wavelength, LC connector, 10 km (32808.40 ft) max transmission distance
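
To confirm which transceiver module is installed in an Ethernet adapter port and check its type and wavelength from a Linux operating system, you can read the module EEPROM with ethtool. This is an illustration only and is not part of this guide; eth0 is a hypothetical port name, and the command requires a driver that supports module information.

# Dump the transceiver module information for port eth0.
sudo ethtool -m eth0 | grep -i -E "identifier|vendor|wavelength"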

 

Storage options other than HDDs and SSDs

Model

Specifications

DVD-RW-Mobile-USB-A

Removable USB DVDRW drive module

IMPORTANT:

For this module to work correctly, you must connect it to a USB 3.0 connector.

 

NVMe VROC modules

·         NVMe-VROC-Key-S: RAID 0, 1, and 10; compatible with all NVMe drives and PCIe M.2 SSDs
·         NVMe-VROC-Key-P: RAID 0, 1, 5, and 10; compatible with all NVMe drives and PCIe M.2 SSDs
·         NVMe-VROC-Key-I: RAID 0, 1, 5, and 10; compatible with Intel NVMe drives and Intel PCIe M.2 SSDs

 

TPM/TCM modules

Trusted platform module (TPM) is a microchip embedded in the management module. It stores encryption information (such as encryption keys) for authenticating server hardware and software. The TPM operates with drive encryption programs such as Microsoft Windows BitLocker to provide operating system security and data protection. For information about Microsoft Windows BitLocker, visit the Microsoft website at http://www.microsoft.com.

Trusted cryptography module (TCM) is a trusted computing platform-based hardware module with protected storage space, which enables the platform to implement password calculation.

Table 35 describes the TPM and TCM modules supported by the server.

Table 35 TPM/TCM specifications

Model

Specifications

TPM-2-X

Trusted Platform Module 2.0

TCM-1-X

Trusted Cryptography Module 1.0
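
Before configuring encryption in the operating system, you can confirm that the OS detects the installed TPM. The following Linux check is an illustration only and is not described in this guide:

# A tpm0 entry indicates that the operating system has detected an enabled TPM.
ls /sys/class/tpm/
# Kernel messages also report TPM detection.
sudo dmesg | grep -i tpm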

 

Security bezels, slide rail kits, and CMA

Model

Description

CMA-2U-A

2U CMA

SL-4U-BB

4U ball bearing rail

RS-CH-A

Chassis handle

SEC-Panel-4U

4U security bezel

CAB-GPU-PWR-M

GPU power cable module

BAT-LSI-G2-4U-B-X

LSI G2 supercapacitor B (compute module air baffle)

BAT-LSI-G3-4U-B

LSI G3 supercapacitor B (compute module air baffle)

 


Appendix C  Managed hot removal of NVMe drives

Managed hot removal of NVMe drives enables you to remove NVMe drives safely while the server is operating.

Use Table 36 to determine the managed hot removal method depending on the VMD status and the operating system. For more information about VMD, see the BIOS user guide for the server.
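
Before using Table 36 below, you can check from a Linux operating system whether the NVMe drives are attached behind Intel VMD. This check is an illustration only and is not part of the procedures in this appendix:

# Intel VMD controllers are listed only when VMD is enabled in the BIOS.
lspci | grep -i "Volume Management Device"
# List the NVMe drives that the operating system currently sees.
lsblk | grep nvme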

Table 36 Managed hot removal methods

·         VMD enabled, Windows: See "Performing a managed hot removal in Windows."
·         VMD enabled, Linux: See "Performing a managed hot removal in Linux."
·         VMD disabled (default status): Contact H3C Support.

 

Performing a managed hot removal in Windows

Prerequisites

Install Intel® Rapid Storage Technology enterprise (Intel® RSTe).

To obtain Intel® RSTe, use one of the following methods:

·          Go to https://platformsw.intel.com/KitSearch.aspx to download the software.

·          Contact Intel Support.

Procedure

1.        Stop reading data from or writing data to the NVMe drive to be removed.

2.        Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.        Run Intel® RSTe.

4.        Unmount the NVMe drive from the operating system, as shown in Figure 171:

¡  Select the NVMe drive to be removed from the Devices list.

¡  Click Activate LED to turn on the Fault/UID LED on the drive.

¡  Click Remove Disk.

Figure 171 Removing an NVMe drive

 

5.        Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady blue and the drive is removed from the Devices list, remove the drive from the server.

For more information about the removal procedure, see "Replacing an NVMe drive."

Performing a managed hot removal in Linux

In Linux, you can perform a managed hot removal of NVMe drives from the CLI or by using Intel® Accelerated Storage Manager.

Prerequisites

·          Verify that your operating system is a non-SLES Linux operating system. SLES operating systems do not support managed hot removal of NVMe drives.

·          To perform a managed hot removal by using Intel® ASM, install Intel® ASM.

To obtain Intel® ASM, use one of the following methods:

¡  Go to https://platformsw.intel.com/KitSearch.aspx to download the software.

¡  Contact Intel Support.

Performing a managed hot removal from the CLI

1.        Stop reading data from or writing data to the NVMe drive to be removed.

2.        Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.        Access the CLI of the server.

4.        Execute the lsblk | grep nvme command to identify the drive letter of the NVMe drive, as shown in Figure 172.

Figure 172 Identifying the drive letter of the NVMe drive to be removed

 

5.        Execute the ledctl locate=/dev/drive_letter command to turn on the Fault/UID LED on the drive. The drive_letter argument represents the drive letter, for example, nvme0n1.

6.        Execute the echo 1 > /sys/block/drive_letter/device/device/remove command to unmount the drive from the operating system. The drive_letter argument represents the drive letter, for example, nvme0n1.

7.        Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady blue, remove the drive from the server.

For more information about the removal procedure, see "Replacing an NVMe drive."
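
The CLI steps above can also be run back to back from a shell. The following sketch assumes that the drive identified in step 4 is nvme0n1 and that you run the commands as root; adjust the drive letter to match your output.

# Step 4: identify the drive letter of the NVMe drive to be removed.
lsblk | grep nvme
# Step 5: turn on the Fault/UID LED on the drive (nvme0n1 is an example).
ledctl locate=/dev/nvme0n1
# Step 6: unmount the drive from the operating system.
echo 1 > /sys/block/nvme0n1/device/device/remove
# Step 7: when the Fault/UID LED turns steady blue, physically remove the drive.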

Performing a managed hot removal from the Intel® ASM Web interface

1.        Stop reading data from or writing data to the NVMe drive to be removed.

2.        Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.        Run Intel® ASM.

4.        Click RSTe Management.

Figure 173 Accessing RSTe Management

 

5.        Expand the Intel(R) VROC(in pass-thru mode) menu to view operating NVMe drives, as shown in Figure 174.

Figure 174 Viewing operating NVMe drives

 

6.        Click the light bulb icon to turn on the Fault/UID LED on the drive, as shown in Figure 175.

Figure 175 Turning on the drive Fault/UID LED

 

7.        Click the removal icon, as shown in Figure 176.

Figure 176 Removing an NVMe drive

 

8.        In the dialog box that opens, click Yes.

Figure 177 Confirming the removal

 

9.        Remove the drive from the server. For more information about the removal procedure, see "Replacing an NVMe drive."


Appendix D  Environment requirements

About environment requirements

The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.

Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance. In a real data center, the server cooling performance might decrease because of adverse external factors, including poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.

General environment requirements

Item

Specifications

Operating temperature

Minimum: 5°C (41°F)

Maximum:

·         For 32SFF and 48SFF drive configurations: 40°C (104°F)

·         For 16SFF drive configuration: 45°C (113°F)

CAUTION:

The maximum temperature varies by hardware option presence. For more information, see "Operating temperature requirements."

Storage temperature

–40°C to +70°C (–40°F to +158°F)

Operating humidity

8% to 90%, noncondensing

Storage humidity

5% to 95%, noncondensing

Operating altitude

–60 m to +3000 m (–196.85 ft to +9842.52 ft)

The allowed maximum temperature decreases by 0.33°C (0.59°F) as the altitude increases by 100 m (328.08 ft) from 900 m (2952.76 ft)

Storage altitude

–60 m to +5000 m (–196.85 ft to +16404.20 ft)
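
As a worked example of the altitude derating stated above, the following shell sketch (an illustration only, not part of this guide) computes the derated maximum operating temperature for a given altitude, assuming a 40°C (104°F) baseline:

# Derate 0.33 C per 100 m above 900 m; alt is the site altitude in meters.
awk -v base=40 -v alt=3000 'BEGIN {
  derate = (alt > 900) ? (alt - 900) / 100 * 0.33 : 0
  printf "Derating: %.2f C, derated maximum: %.2f C\n", derate, base - derate
}'
# For 3000 m (9842.52 ft), this prints a derating of 6.93 C and a derated maximum of 33.07 C.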

 

Operating temperature requirements

Use Table 37 and Table 38 to determine the maximum operating temperature of the servers with 8 SFF drives in each compute module or 24 SFF drives in either compute module. A maximum server operating temperature applies if the server contains any options in its matching hardware option list.

 

 

NOTE:

All maximum server operating temperature values are provided on the basis that the fan modules are installed as needed and operating correctly. For more information about fan modules, see "Fan modules."

 

Table 37 Temperature requirements for the server with 8 SFF drives in each compute module

Maximum server operating temperature

Hardware options

30°C (86°F)

GPU module GPU-V100 or GPU-V100-32G

35°C (95°F)

·         A faulty fan

·         NVMe drives

·         GPU module GPU-P4-X, GPU-P40-X, GPU-P100, or GPU-T4

NOTE:

With GPU-P4-X, GPU-P40-X, GPU-P100, or GPU-T4 installed, the server performance might degrade if a fan fails.

40°C (104°F)

·         PCIe M.2 SSD

·         Any of the following processor models:

¡  8180

¡  8180M

¡  8168

¡  6154

¡  6146

¡  6144

¡  6244

45°C (113°F)

None of the above hardware options or operating conditions exists

 

Table 38 Temperature requirements for the server with 24 SFF drives in either compute module

Maximum server operating temperature

Hardware options

30°C (86°F)

·         Any of the following processor models with a faulty fan:

¡  8180

¡  8180M

¡  8168

¡  6154

¡  6146

¡  6144

¡  6244

35°C (95°F)

·         GPU module GPU-P4-X or GPU-T4

·         Any of the following processor models:

¡  8180

¡  8180M

¡  8168

¡  6154

¡  6146

¡  6144

¡  6244

NOTE:

With GPU-P4-X or GPU-T4 installed, the server performance might degrade if a fan fails.

40°C (104°F)

None of the above hardware options or operating conditions exists

 


Appendix E  Product recycling

New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualifications are contracted by New H3C to process the recycled hardware in an environmentally responsible way.

For product recycling services, contact New H3C at

·          Tel: 400-810-0504

·          E-mail: service@h3c.com

·          Website: http://www.h3c.com


Appendix F  Glossary

Item

Description

B

BIOS

Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's management module. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality.

C

CPLD

Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits.

F

FIST

Fast Intelligent Scalable Toolkit provided by H3C for easy and extensible server management. It can guide users to configure a server quickly with ease and provide an API interface to allow users to develop their own management tools.

G

GPU module

Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance.

H

HDM

H3C Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server.

Hot swapping

A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting the system operation.

I

iFIST

Integrated Fast Intelligent Scalable Toolkit is a management tool embedded in an H3C server. It allows users to manage the server it resides in and provides features such as RAID configuration, OS and driver installation, and health status monitoring.

K

KVM

KVM is a management method that allows remote users to use their local video display, keyboard, and mouse to monitor and control the server.

N

NCSI

NCSI enables a port on a PCIe or mLOM Ethernet adapter to function as a management network port and an Ethernet port at the same time.

Network adapter

A network adapter, also called a network interface card (NIC), connects the server to the network.

NVMe SSD expander module

An expander module that facilitates communication between the main board of a compute module and the front NVMe drives. The module is required if a front NVMe drive is installed.

NVMe VROC module

A module that works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

R

RAID

Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage and security performance.

Redundancy

A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails.

S

Security bezel

A locking bezel mounted to the front of a server to prevent unauthorized access to modules such as hard drives.

U

U

A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks.

UPI

Ultra Path Interconnect is a high-speed interconnection method for multiprocessor systems. It can provide a transfer speed of up to 10.4 GT/s.

V

VMD

VMD provides hot removal, management and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability.

 

 


Appendix G  Acronyms

Acronym

Full name

B

BIOS

Basic Input/Output System

C

CMA

Cable Management Arm

CPLD

Complex Programmable Logic Device

D

DCPMM

Data Center Persistent Memory Module

DDR

Double Data Rate

DIMM

Dual In-Line Memory Module

DRAM

Dynamic Random Access Memory

F

FIST

Fast Intelligent Scalable Toolkit

G

GPU

Graphics Processing Unit

H

HBA

Host Bus Adapter

HDD

Hard Disk Drive

HDM

H3C Device Management

I

IDC

Internet Data Center

iFIST

integrated Fast Intelligent Scalable Toolkit

K

KVM

Keyboard, Video, Mouse

L

LRDIMM

Load Reduced Dual Inline Memory Module

M

mLOM

Modular LAN-on-Motherboard

N

NCSI

Network Controller Sideband Interface

NVMe

Non-Volatile Memory Express

P

PCIe

Peripheral Component Interconnect Express

PDB

Power Distribution Board

PDU

Power Distribution Unit

POST

Power-On Self-Test

R

RAID

Redundant Array of Independent Disks

RDIMM

Registered Dual Inline Memory Module

S

SAS

Serial Attached Small Computer System Interface

SATA

Serial ATA

SD

Secure Digital

SDS

Secure Diagnosis System

SFF

Small Form Factor

SSD

Solid State Drive

T

TCM

Trusted Cryptography Module

TPM

Trusted Platform Module

U

UID

Unit Identification

UPI

Ultra Path Interconnect

UPS

Uninterruptible Power Supply

USB

Universal Serial Bus

V

VROC

Virtual RAID on CPU

VMD

Volume Management Device

 

 

 
