H3C UniServer R4900 G6 Ultra Server User Guide-6W100

02-Appendix

Contents

Appendix A  Server specifications

Server models and chassis view

Technical specifications

Components

Front panel

Front panel view of the server

LEDs and buttons

Ports

Rear panel

Rear panel view

LEDs

Ports

System board

System board components

System maintenance switch

DIMM slots

Appendix B  Component specifications

About component model names

DIMMs

DRAM DIMM rank classification label

HDDs and SSDs

Drive numbering

Drive LEDs

Drive configurations

Drive backplanes

Front 4LFF SAS/SATA drive backplane

Front 8SFF UniBay drive backplane

Front 2SFF UniBay drive backplane

Rear 2SFF UniBay drive backplane

Drive backplanes

Front 8SFF SAS/SATA drive backplane

Front 8SFF UniBay drive backplane

Front 8LFF SAS/SATA drive backplane

Front 12LFF SAS/SATA drive backplane

Front 8SAS/SATA+4UniBay drive backplane

Front 4SAS/SATA+8UniBay drive backplane

Front 12LFF UniBay drive backplane

Front 17SAS/SATA+8UniBay drive backplane

Front 16E1.S drive backplane

Mid 4LFF SAS/SATA backplane

Mid 8SFF UniBay drive backplane

Rear 2LFF SAS/SATA drive backplane

Rear 4LFF SAS/SATA drive backplane

Rear 2SFF SAS/SATA drive backplane

Rear 2SFF UniBay drive backplane

Rear 4SFF SAS/SATA drive backplane

Rear 4SFF UniBay drive backplane

Rear 2UniBay drive backplane (compatible with OCP riser card)

Riser cards

Riser card guidelines

RC-FHFL-2U-G6

RC-1FHFL-R3-2U-G6

RC-2FHFL-2U-LC-G6

RC-2HHHL-R3-2U-G6

RC-2FHFL-R3-2U-G6

RC-2HHHL-R4-2U-G6

RC-3FHFL-2U-G6

RC-3FHHL-2U-G6

RC-5HHHL-R5-2U-G5

PCA-R4900-4GPU-G6

OCP riser card

LCD smart management module

Fan modules

PCIe slot

PCIe modules

Storage controllers

NVMe VROC modules

Front SATA M.2 expander module

Rear NVMe M.2 expander module

Server management module

Serial & DSD module

B/D/F information

Appendix C  Managed removal of OCP network adapters

Before you begin

Performing a hot removal

Appendix D  Environment requirements

About environment requirements

General environment requirements

Operating temperature requirements

General guidelines

Operating temperature requirements without liquid-cooled module

Operating temperature requirements with liquid-cooled module

Appendix E  Product recycling

Appendix F  Glossary

Appendix G  Acronyms

 


Appendix A  Server specifications

The information in this document might differ from your product if it contains custom configuration options or features.

Figures in this document are for illustration only.

Server models and chassis view

H3C UniServer R4900 G6 Ultra servers are 2U dual-processor servers independently developed by H3C based on Intel's new-generation Eagle Stream platform. The servers can be widely used in general computing scenarios, with a focus on high-performance computing, artificial intelligence, cloud desktop, and other scenarios. They are suitable for typical applications in sectors such as the Internet, service providers, enterprises, and government. The servers feature high computing performance, large storage capacity, low power consumption, strong scalability, high reliability, and ease of management and deployment.

Figure 1 Chassis view

 

The servers come in the models listed in Table 1. These models support different drive configurations. For more information about drive configuration and compatible storage controller configuration, see H3C UniServer R4900 G6 Ultra Server Drive Configurations and Cabling Solutions.

Table 1 R4900 G6 Ultra server models

Model

Maximum drive configuration

LFF

12LFF drives at the front + drives (4LFF+2SFF) at the rear + 4LFF drives in the middle

SFF

25SFF drives at the front + drives (2SFF+2SFF+4SFF) at the rear + 8SFF drives in the middle

E1.S

32E1.S drives at the front

 

Technical specifications

Item

Specifications

Dimensions (H × W × D)

·     Without a security bezel:

¡     LFF&SFF models: 87.5 × 445.4 × 780 mm (3.44 × 17.54 × 30.71 in)

¡     E1.S model: 87.5 × 445.4 × 820 mm (3.44 × 17.54 × 32.28 in)

·     With a security bezel:

¡     LFF&SFF models: 87.5 × 445.4 × 808 mm (3.44 × 17.54 × 31.81 in)

¡     E1.S model: 87.5 × 445.4 × 848 mm (3.44 × 17.54 × 33.39 in)

Max. weight

46.5 kg (102.51 lb)

Processors

2 × Intel Eagle Stream processors, maximum 350 W power consumption per processor

Supports HBM

Supports Montage Jintide processors

Memory

Up to 32 DIMMs (DDR5 supported)

Storage controllers

·     High-performance standard storage controller

·     NVMe VROC module

·     Serial & DSD module (supports RAID 1)

Chipset

Intel Emmitsburg

Integrated graphics

The graphics chip (model AST2600) is integrated in the BMC management chip to provide a maximum resolution of 1920 × 1200@60Hz (32bpp), where:

·     1920 × 1200: 1920 horizontal pixels and 1200 vertical pixels.

·     60Hz: Screen refresh rate, 60 times per second.

·     32bpp: Color depth. The higher the value, the more colors can be displayed.

If you attach monitors to both the front and rear VGA connectors, only the monitor connected to the front VGA connector is available.

Network connectors

·     1 × embedded 1 Gbps HDM dedicated management port

·     Up to three OCP 3.0 network adapter connectors (OCP 3.0 network adapters support NCSI)

I/O connectors

·     5 × USB connectors

¡     3 × USB3.0 connectors (one on the right chassis ear, and two on the rear panel)

¡     2 × USB2.0 connectors (one on the system board, and one available on the left chassis ear only when the multifunctional rack mount kit is used)

·     12 × built-in SATA connectors (provided as 3 × x4 SlimSAS connectors)

·     12 × MCIO connectors (PCIe5.0 x8)

·     1 × RJ-45 HDM dedicated network port (at the server rear)

·     2 × VGA connectors (one on the front panel, and one available on the left chassis ear only when the multifunctional rack mount kit is used)

·     1 × serial port (available only when the Serial & DSD module is used)

·     1 × HDM dedicated management port (available on the left chassis ear only when the multifunctional rack mount kit is used)

Expansion slots

14 × PCIe standard slots (10 × PCIe 5.0 slots and 4 × PCIe 4.0 slots)

Optical drives

External USB optical drives

Power supplies

2 × hot-swappable power supplies, 1 + 1 redundancy

Standards

CCC, UL, CE

 

Components

Figure 2 R4900 G6 Ultra server components

 

Item

Description

(1) Chassis access panel

N/A

(2) Mid drive cage

Installed in the server to accommodate drives.

(3) Standard PCIe network adapter

Installed in a standard PCIe slot to provide network ports.

(4) System battery

Supplies power to the system clock to ensure system time correctness.

(5) Storage controller

Provides RAID capability to SAS/SATA drives, including RAID configuration and RAID scale-up. It supports online upgrade of the controller firmware and remote configuration.

(6) NVMe VROC module

Works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

(7) GPU module

Provides computing services such as graphics processing and AI.

(8) Riser card

Provides PCIe slots.

(9) OCP riser card

Provides one slot for installing an OCP network adapter and two slots for installing drives at the server rear.

(10) Serial & DSD module

Provides one serial port and two SD card slots.

(11) Memory

Stores computing data and data exchanged with external storage temporarily.

(12) Rear drive backplane

Provides power and data channels for drives at the server rear.

(13) Rear drive cage

Installed at the server rear to accommodate drives.

(14) Server management module

Provides I/O connectors and HDM out-of-band management features.

(15) OCP network adapter

Network adapter installed onto the OCP network adapter connector on the system board.

(16) Riser card blank

Installed on an empty PCIe riser connector to ensure good ventilation.

(17) Power supply

Supplies power to the server. The power supplies support hot swapping and 1+1 redundancy.

(18) Chassis

N/A

(19) Chassis ears

Attach the server to the rack. The right ear is integrated with the front I/O component, and the left ear is integrated with VGA connector, HDM dedicated management connector, and USB 3.0 connector.

(20) Front drive backplane

Provides power and data channels for drives at the server front. The figure uses an 8SFF front drive backplane as an example.

(21) LCD smart management module

Displays basic server information, operating status, and fault information. Used together with HDM event logs, it helps users quickly locate faulty components and troubleshoot the server, ensuring correct server operation.

(22) Drive

Provides data storage space. Drives support hot swapping.

(23) SATA M.2 SSD expander module

Provides M.2 SSD slots.

(24) SATA M.2 SSD drive

Provides data storage space for the server.

(25) Encryption module

Provides encryption services for the server to enhance data security.

(26) System board

One of the most important parts of a server, on which multiple components are installed, such as processor, memory, and fan. It is integrated with basic server components, including the BIOS chip and PCIe connectors.

(27) Processor socket cover

Installed over an empty processor socket to protect pins in the socket.

(28) Fan cage

Accommodates fan modules.

(29) Chassis open-alarm module

Detects if the access panel is removed. The detection result can be displayed from the HDM Web interface.

(30) Air baffle

Provides ventilation aisles for processor heatsinks and memory modules and provides support for the supercapacitor.

(31) Fan

Helps server ventilation. Fans support hot swapping and N+1 redundancy.

(32) Processor

Integrates memory and PCIe controllers to provide data processing capabilities for the server.

(33) Processor retaining bracket

Attaches a processor to the heatsink.

(34) Supercapacitor holder

Secures a supercapacitor in the chassis.

(35) Supercapacitor

Supplies power to the flash card on the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when a power outage occurs.

(36) Mid GPU module

Installed in the server to expand GPU modules to meet graphics computing and AI processing requirements.

(37) Processor heatsink

Cools the processor.

(38) Processor liquid-cooled module

Cools the processor.

 

Front panel

Front panel view of the server

Figure 3 8LFF front panel

 

Table 2 8LFF front panel description

Item

Description

1

USB 3.0 connector

2

Drive or LCD smart management module (optional)

3

Serial label pull tab

4

HDM dedicated management connector

5

USB 2.0 connector

6

VGA connector

 

Figure 4 12LFF front panel

 

Table 3 12LFF front panel description

Item

Description

1

12LFF UniBay drives*

2

USB 3.0 connector

3

Drive or LCD smart management module (optional)

4

Serial label pull tab

5

HDM dedicated management connector

6

USB 2.0 connector

7

VGA connector

*: Drive types supported by the server vary by drive backplane configuration. For more information, see "Drive backplanes."

 

Figure 5 8SFF front panel

 

Table 4 8SFF front panel description

Item

Description

1

Bay 1: 8SFF drives (optional)*

2

Bay 2: 8SFF drives (optional)*

3

Bay 3: 8SFF drives (optional)*

4

USB 3.0 connector

5

LCD smart management module (optional)

6

Serial label pull tab

7

HDM dedicated management connector

8

USB 2.0 connector

9

VGA connector

*: Drive types supported by the server vary by drive backplane configuration. For more information, see "Drive backplanes."

 

Figure 6 25SFF front panel

 

Table 5 25SFF front panel description

Item

Description

1

25SFF drives*

2

USB 3.0 connector

3

Drive or LCD smart management module (optional)

4

Serial label pull tab

5

HDM dedicated management connector

6

USB 2.0 connector

7

VGA connector

*: Drive types supported by the server vary by drive backplane configuration. For more information, see "Drive backplanes."

 

Figure 7 32E1.S front panel

 

Table 6 32E1.S front panel description

Item

Description

1

USB 3.0 connector

2

32E1.S drives

3

Serial label pull tab

4

HDM dedicated management connector

5

USB 2.0 connector

6

VGA connector

 

LEDs and buttons

Figure 8 Front panel LEDs and buttons

 

Table 7 LEDs and buttons on the front panel

Button/LED

Status

Power on/standby button and system power LED

·     Steady green—The system has started.

·     Flashing green (1 Hz)—The system is starting.

·     Steady amber—The system is in standby state.

·     Off—No power is present. Possible reasons:

¡     No power source is connected.

¡     No power supplies are present.

¡     The installed power supplies are faulty.

¡     The system power cords are not connected correctly.

OCP 3.0 network adapter Ethernet port LED

·     Steady green—A link is present on a port of an OCP 3.0 network adapter.

·     Flashing green (1 Hz)—A port on an OCP 3.0 network adapter is receiving or sending data.

·     Off—No link is present on any port of any OCP 3.0 network adapter.

NOTE:

The server supports a maximum of three OCP3.0 network adapters.

Health LED

·     Steady green—The system is operating correctly or a minor alarm is present.

·     Flashing green (4 Hz)—HDM is initializing.

·     Flashing amber (1 Hz)—A major alarm is present.

·     Flashing red (1 Hz)—A critical alarm is present.

If a system alarm is present, log in to HDM to obtain more information about the system running status.

UID button LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Activate the UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds.

·     Off—UID LED is not activated.

 

Security bezel light

The security bezel provides hardened security and uses an effect light to visualize the operation and health status of the server, which helps with inspection and fault location. The default effect light is as shown in Figure 9.

Figure 9 Security bezel

 

Table 8 Security bezel effect light

System status

Light status

Standby

Steady white: The system is in standby state.

Startup

·     Beads light up white in turn from the middle—POST is in progress.

·     Beads light up white from the middle three times—POST has finished.

Running

·     Breathing white (gradient at 0.2 Hz)—Normal state, indicating the system load by the percentage of beads turning on from the middle to the two sides of the security bezel.

¡     No load—Less than 10%.

¡     Light load—10% to 50%.

¡     Middle load—50% to 80%.

¡     Heavy load—More than 80%.

·     Breathing white (gradient at 1 Hz)—A pre-alarm is present (only for drive pre-failure).

·     Flashing amber (1 Hz)—A major alarm is present.

·     Flashing red (1 Hz)—A critical alarm is present (only for power supply errors).

UID

·     All beads flash white (1 Hz)—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server.

·     Some beads flash white (1 Hz)—HDM is restarting.

 

Ports

Table 9 Ports on the front panel

Port

Type

Description

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

USB connector

USB 2.0/3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

HDM dedicated management connector

Type-C

Connects a Type-C to USB adapter cable, which connects to a USB Wi-Fi adapter or USB drive.

 

Rear panel

Rear panel view

Figure 10 Rear panel components

 

Table 10 Rear panel description

Item

Description

1

PCIe riser bay 1: PCIe slots 1 through 3

2

PCIe riser bay 2: PCIe slots 4 through 6

3

PCIe riser bay 3: PCIe slots 7 and 8

4

PCIe riser bay 4: PCIe slots 9 and 10

5

Power supply 2

6

Power supply 1

7

OCP 3.0 network adapter/Serial & DSD module (in slot 17) (optional)

8

VGA connector

9

Two USB 3.0 connectors

10

HDM dedicated network port (1Gbps, RJ-45, default IP address 192.168.1.2/24)

11

OCP 3.0 network adapter (in slot 16) (optional)

For more information about the serial & DSD module, see "Serial & DSD module."

 

When the server is installed with the liquid-cooled module, slot 6 on the rear panel is used for the liquid inlet and outlet ports. The quick connectors come in two types, male connectors and hose connectors, as shown in Figure 11 and Figure 12, respectively.

Figure 11 Rear panel components (Processor liquid-cooled module with male connector)

 

Figure 12 Rear panel components (Processor liquid-cooled module with hose connector)

 

Table 11 Rear panel description (with liquid-cooled module installed)

Item

Description

1

PCIe riser bay 1: PCIe slots 1 through 3

2

PCIe riser bay 2: PCIe slots 4 and 5

3

PCIe riser bay 3: PCIe slots 7 and 8

4

PCIe riser bay 4: PCIe slots 9 and 10

5

Power supply 2

6

Power supply 1

7

OCP 3.0 network adapter or serial & DSD module (in slot 17) (optional)

8

Liquid outlet (red)

9

Liquid inlet (blue)

10

VGA connector

11

Two USB 3.0 connectors

12

HDM dedicated network port (1Gbps, RJ-45, default IP address 192.168.1.2/24)

13

OCP 3.0 network adapter (in slot 16) (optional)

 

LEDs

Figure 13 Rear panel LEDs

(1) Power supply LED for power supply 2

(2) Power supply LED for power supply 1

(3) Activity LED of the Ethernet port

(4) Link LED of the Ethernet port

(5) UID LED

 

Table 12 LEDs on the rear panel

LED

Status

Power supply LED

·     Steady green—The power supply is operating correctly.

·     Flashing green (0.33 Hz)—The power supply is in standby state and does not output power.

·     Flashing green (2 Hz)—The power supply is updating its firmware.

·     Steady amber—Either of the following conditions exists:

¡     The power supply is faulty.

¡     The power supply does not have power input, but another power supply has correct power input.

·     Flashing amber (1 Hz)—An alarm has occurred on the power supply.

·     Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown.

Activity LED of the Ethernet port

·     Flashing green (1 Hz)—The port is receiving or sending data.

·     Off—The port is not receiving or sending data.

Link LED of the Ethernet port

·     Steady green—A link is present on the port.

·     Off—No link is present on the port.

UID LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Enable UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for 8 seconds.

·     Off—UID LED is not activated.

 

Ports

Table 13 Ports on the rear panel

Port

Type

Description

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

BIOS serial port

DB-9

The BIOS serial port is used for the following purposes:

·     Log in to the server when the remote network connection to the server has failed.

·     Establish a GSM modem or encryption lock connection.

NOTE:

The serial port is on the serial & DSD module. For more information, see "Serial & DSD module."

USB connector

USB 3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

HDM dedicated network port

RJ-45

Establishes a network connection to manage HDM from its Web interface.

Power receptacle

Standard single-phase

Connects the power supply to the power source.

 

System board

System board components

Figure 14 System board components

 

Table 14 System board components

No.

Description

Mark

1

PCIe riser connector 3 (for processor 2)

RISER3 PCIe X16

2

PCIe riser connector 2 (for processor 2)

RISER2 PCIe X16

3

Fan connector 2 for the OCP 3.0 network adapter

-

4

OCP 3.0 network adapter connector 2/Serial & DSD module connector

OCP2&DSD&UART CARD

5

PCIe riser connector 1 (for processor 1)

RISER1 PCIe X16

6

Server management module slot

BMC

7

Fan connector 1 for the OCP 3.0 network adapter

OCP1 FAN

8

OCP 3.0 network adapter connector 1

OCP1

9

SlimSAS connector 3 (x4 SATA)

SATA PORT3

10

AUX connector 9 for mid drive backplane

AUX9

11

Mid drive backplane/Riser card AUX connector 7

AUX7

12

SlimSAS connector 2 (x4 SATA)

SATA PORT2

13

SlimSAS connector 1 (x4 SATA or M.2 SSD)

M.2&SATA PORT1

14

Front I/O connector

RIGHT EAR

15

M.2 SSD AUX connector

M.2 AUX

16

Fan connector 4

J245

17

Liquid Leakage Detection module connector

LEAKDET

18

LCD smart management module connector

DIAG LCD

19

MCIO connector C1-P4A (for processor 1)

C1-P4A

20

MCIO connector C1-P4C (for processor 1)

C1-P4C

21

Fan connector 3

J104

22

MCIO connector C1-P3C (for processor 1)

C1-P3C

23

MCIO connector C1-P3A (for processor 1)

C1-P3A

24

Power connector 3 for front drive backplane

PWR3

25

Power connector 2 for front drive backplane

PWR2

26

Connector for the thermal sensor module at the air inlet

-

27

Power connector 1 for front drive backplane

PWR1

28

Fan connector 2

J94

29

MCIO connector C2-P4A (for processor 2)

C2-P4A

30

AUX connector 3 for front drive backplane

AUX3

31

MCIO connector C2-P4C (for processor 2)

C2-P4C

32

MCIO connector C2-P3C (for processor 2)

C2-P3C

33

Fan connector 1

J96

34

AUX connector 2 for front drive backplane

AUX2

35

AUX connector 1 for front drive backplane

AUX1

36

MCIO connector C2-P3A (for processor 2)

C2-P3A

37

Chassis-open alarm module connector

INTRUDER

38

Mid drive backplane/Riser card power connector 6

PWR6

39

Front VGA and USB 2.0 connector

LEFT EAR

40

Power connector 4 for rear drive backplane

PWR4

41

Power connector 5 for rear drive backplane

PWR5

42

MCIO connector C2-P2C (for processor 2)

C2-P2C

43

AUX connector 5 for rear drive backplane

AUX5

44

MCIO connector C2-P2A (for processor 2)

C2-P2A

45

AUX connector 6 for mid drive backplane

AUX6

46

AUX connector 4 for rear drive backplane

AUX4

47

Riser card AUX connector 8

AUX8

48

NVMe VROC module connector

NVMe RAID KEY

49

Embedded USB 2.0 connector

INTERNAL USB2.0

50

TPM/TCM connector

TPM

51

System battery

-

52

MCIO connector C1-P2C (for processor 1)

C1-P2C

53

MCIO connector C1-P2A (for processor 1)

C1-P2A

X

System maintenance switch

MAINTENANCE SW

 

System maintenance switch

Figure 15 shows the system maintenance switch. Table 15 describes how to use the maintenance switch.

Figure 15 System maintenance switch

 

Table 15 System maintenance switch description

Item

Description

Remarks

1

·     Off (default)—HDM login requires the username and password of a valid HDM user account.

·     On—HDM login requires the default username and password.

For security purposes, turn off the switch after you complete tasks with the default username and password as a best practice.

5

·     Off (default)—Normal server startup.

·     On—Restores the default BIOS settings.

To restore the default BIOS settings, turn on and then turn off the switch. The server starts up with the default BIOS settings at the next startup.

CAUTION:

The server cannot start up when the switch is turned on. To avoid service data loss, stop running services and power off the server before turning on the switch.

6

·     Off (default)—Normal server startup.

·     On—Clears all passwords from the BIOS at server startup.

If this switch is on, the server will clear all the passwords at each startup. Make sure you turn off the switch before the next server startup if you do not need to clear all the passwords.

2, 3, 4, 7, and 8

Reserved for future use.

N/A

 

DIMM slots

The system board provides eight DIMM channels per processor, and 16 channels in total, as shown in Figure 16. Each channel contains two DIMM slots.

Figure 16 System board DIMM slot layout

 

 


Appendix B  Component specifications

For components compatible with the server and detailed component information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

About component model names

The model name of a hardware option in this document might differ slightly from its model name label.

A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR5-4800-32G-1Rx4 memory model represents memory module labels including UN-DDR5-4800-32G-1Rx4-R, UN-DDR5-4800-32G-1Rx4-F, and UN-DDR5-4800-32G-1Rx4-S, which have different prefixes and suffixes.

DIMMs

The server provides eight DIMM channels per processor and each channel has two DIMM slots. If the server has one processor, the total number of DIMM slots is 16. If the server has two processors, the total number of DIMM slots is 32. For the physical layout of DIMM slots, see "DIMM slots."

DRAM DIMM rank classification label

A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.

To determine the rank classification of a DIMM, use the label attached to the DIMM, as shown in Figure 17. The meanings of the DDR DIMM rank classification labels are similar. This section uses the label of a DDR5 DIMM as an example.

Figure 17 DDR DIMM rank classification label

 

Table 16 DIMM rank classification label description

Callout

Description

Remarks

1

Capacity

Options include:

·     8GB.

·     16GB.

·     32GB.

2

Number of ranks

Options include:

·     1R—One rank (Single-Rank).

·     2R—Two ranks (Dual-Rank). A 2R DIMM is equivalent to two 1R DIMMs.

·     4R—Four ranks (Quad-Rank). A 4R DIMM is equivalent to two 2R DIMMs.

·     8R—Eight ranks (8-Rank). An 8R DIMM is equivalent to two 4R DIMMs.

3

Data width

Options include:

·     ×4—4 bits.

·     ×8—8 bits.

4

DIMM generation

DDR5

5

Data rate

Options include:

·     2666V—2666 MHz.

·     2933Y—2933 MHz.

·     3200AA—3200 MHz.

·     4800—4800 MHz.

6

DIMM type

Options include:

·     L—LRDIMM.

·     R—RDIMM.
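
For example, a label whose fields read 32GB, 2R, x4, DDR5, 4800, and R identifies a 32 GB dual-rank RDIMM with a 4-bit data width and a 4800 MHz data rate.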

 

HDDs and SSDs

Drive numbering

The server provides different drive numbering schemes for different drive configurations at the server front and rear, as shown in Figure 18 through Figure 26.

Figure 18 Drive numbering for front 25SFF drive configuration

 

Figure 19 Drive numbering for front 12LFF drive configuration

 

Figure 20 Drive numbering for front 8LFF drive configuration

 

Figure 21 Drive numbering for front 32E1.S drive configuration

 

Figure 22 Drive numbering for rear 4LFF+4SFF drive configuration

 

Figure 23 Drive numbering for rear 2SFF+2SFF+4SFF drive configuration

 

Figure 24 Drive numbering for the rear OCP riser card configuration

 

Figure 25 Drive numbering for the mid 4LFF drive configuration

 

Figure 26 Drive numbering for the mid 8SFF drive configuration

 

Drive LEDs

The server supports SAS, SATA, and NVMe (including E1.S) drives, of which SAS and SATA drives support hot swapping and NVMe drives support hot insertion and managed hot removal. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.

For more information about OSs that support hot insertion and managed hot removal of NVMe drives, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

Figure 27 shows the location of the LEDs on a drive.

Figure 27 LFF&SFF drive LEDs

 

(1) Fault/UID LED

(2) Present/Active LED

 

Figure 28 E1.S drive LEDs

 

(1) Fault/UID LED

(2) Present/Active LED

 

To identify the status of a SAS or SATA drive, use Table 17. To identify the status of an NVMe drive, use Table 18.

Table 17 SAS/SATA drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Steady green/Flashing green (4.0 Hz)

A drive failure is predicted. As a best practice, replace the drive before it fails.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and is selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Table 18 NVMe drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (4 Hz)

Off

The drive is in the hot insertion process.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Drive configurations

The server supports multiple drive configurations. For more information about drive configurations and their required storage controller and riser cards, see H3C UniServer R4900 G6 Ultra Server Drive Configurations and Cabling Guide.

Drive backplanes

The server supports the following types of drive backplanes:

·     SAS/SATA drive backplanes—Support only SAS/SATA drives.

·     NVMe drive backplanes—Support only NVMe drives.

·     UniBay drive backplanes—Support both SAS/SATA and NVMe drives. You must connect both SAS/SATA and NVMe data cables. The number of supported drives varies by drive cabling.

Front 4LFF SAS/SATA drive backplane

The PCA-BP-4LFF-1U-G6 4LFF SAS/SATA drive backplane can be installed at the server front to support four 3.5-inch SAS/SATA drives.

Figure 29 4LFF SAS/SATA drive backplane

 

(1) AUX connector (AUX 1)

(2) Power connector (PWR 1)

(3) x4 Mini-SAS-HD connector (SAS PORT 1)

 

 

Front 8SFF UniBay drive backplane

The PCA-BP-8UniBay-1U-G6 8SFF UniBay drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA/NVMe drives.

Figure 30 8SFF UniBay drive backplane

 

(1) AUX connector (AUX1)

(2) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe A1/A2)

(3) Power connector (PWR1)

(4) MCIO connector A3/A4 (PCIe5.0 x8)(NVMe A3/A4)

(5) SAS/SATA connector (SAS PORT)

(6) MCIO connector B1/B2 (PCIe5.0 x8)(NVMe B1/B2)

(7) MCIO connector B3/B4 (PCIe5.0 x8)(NVMe B3/B4)

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 2SFF UniBay drive backplane

The HDDCage-2UniBay-1U-G6 UniBay drive backplane can be installed at the server front to support two 2.5-inch SAS/SATA/NVMe drives.

Figure 31 2SFF UniBay drive backplane

 

(1) SlimSAS connector A1/A2 (PCIe4.0 x8)(NVME-A1/A2)

(2) x4 Mini-SAS-HD connector (SAS PORT)

(3) AUX connector (AUX)

(4) Power connector (PWR)

PCIe4.0 x8 description:

·     PCIe4.0: Fourth-generation signal speed.

·     x8: Bus bandwidth.

 

Rear 2SFF UniBay drive backplane

The HDDCage-2UniBay-R-1U-G6 drive backplane can be installed at the server rear to support two 2.5-inch SAS/SATA/NVMe drives.

Figure 32 2SFF UniBay drive backplane

 

(1) SlimSAS connector A1/A2 (PCIe4.0 x8) (NVME-A1/A2)

(2) x4 Mini-SAS-HD connector (SAS PORT)

(3) AUX connector (AUX)

(4) Power connector (PWR)

PCIe4.0 x8 description:

·     PCIe4.0: Fourth-generation signal speed.

·     x8: Bus bandwidth.

 

Drive backplanes

The server supports the following types of drive backplanes:

·     SAS/SATA drive backplanes—Support only SAS/SATA drives.

·     UniBay drive backplanes—Support both SAS/SATA and NVMe drives. You must connect both SAS/SATA and NVMe data cables. The number of supported drives varies by drive cabling.

·     X SAS/SATA+Y UniBay drive backplanes—Support SAS/SATA drives in all slots and support NVMe drives in certain slots.

¡     X: Number of slots supporting only SAS/SATA drives.

¡     Y: Number of slots supporting both SAS/SATA and NVMe drives.

For UniBay drive backplanes and X SAS/SATA+Y UniBay drive backplanes:

·     The two drive types are supported only when both SAS/SATA and NVMe data cables are connected.

·     The number of supported SAS/SATA drives and the number of supported NVMe drives vary by cable connection.

Front 8SFF SAS/SATA drive backplane

The PCA-BP-8SFF-2U-G6 8SFF SAS/SATA drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA drives.

Figure 33 8SFF SAS/SATA drive backplane

 

(1) x8 SlimSAS connector (SAS PORT1)

(2) AUX connector (AUX)

(3) Power connector (PWR)

 

 

Front 8SFF UniBay drive backplane

The PCA-BP-8UniBay-2U-G6 8SFF UniBay drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA/NVMe drives.

Figure 34 8SFF UniBay drive backplane

 

(1) x8 SlimSAS connector (SAS PORT)

(2) AUX connector (AUX)

(3) MCIO connector B3/B4 (PCIe5.0 x8)(NVMe B3/B4)

(4) Power connector (POWER)

(5) MCIO connector B1/B2 (PCIe5.0 x8)(NVMe B1/B2)

(6) MCIO connector A3/A4 (PCIe5.0 x8)(NVMe A3/A4)

(7) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe A1/A2)

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 8LFF SAS/SATA drive backplane

The PCA-BP-8LFF-2U-G6 8LFF SAS/SATA drive backplane can be installed at the server front to support eight 3.5-inch SAS/SATA drives.

Figure 35 8LFF SAS/SATA drive backplane

 

(1) x8 Mini-SAS-HD connector (SAS PORT)

(2) Power connector (PWR)

(3) AUX connector (AUX)

 

Front 12LFF SAS/SATA drive backplane

The PCA-BP-12LFF-2U-G6 12LFF SAS/SATA drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA drives.

Figure 36 12LFF SAS/SATA drive backplane

 

(1) x4 SlimSAS connector (SAS PORT 2), managing the last four SAS/SATA drives on the backplane

(2) Power connector 2 (PWR 2)

(3) AUX connector (AUX)

(4) Power connector 1 (PWR 1)

(5) x8 SlimSAS connector (SAS PORT 1), managing the first eight SAS/SATA drives on the backplane

 

Front 8SAS/SATA+4UniBay drive backplane

The PCA-BP-12LFF-4NVMe-2U-G6 12LFF drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA/NVMe drives, including eight SAS/SATA drives and four SAS/SATA/NVMe drives.

Figure 37 8SAS/SATA+4UniBay drive backplane

 

(1) MCIO connector A3 (PCIe5.0 x4)(NVMe-A3), supporting NVMe drive 9

(2) x4 SlimSAS connector (SAS PORT 2), managing the last four SAS/SATA drives on the backplane

(3) Power connector 2 (PWR 2)

(4) AUX connector 1 (AUX 1)

(5) Power connector 1 (PWR 1)

(6) x8 SlimSAS connector (SAS PORT 1), managing the first eight SAS/SATA drives on the backplane

(7) MCIO connector A4 (PCIe5.0 x4)(NVMe-A4), supporting NVMe drive 8

(8) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe-A1/A2), supporting NVMe drives 10 and 11

For more information about drive numbering, see "Drive numbering."

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 4SAS/SATA+8UniBay drive backplane

The PCA-BP-12LFF-EXP-2U-G6 12LFF drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA/NVMe drives, including four SAS/SATA drives and eight SAS/SATA/NVMe drives. The drive backplane integrates an Expander chip to manage 12 SAS/SATA drives through an x8 SlimSAS connector. The drive backplane also provides three downlink interfaces to connect to other drive backplanes and support more drives.

Figure 38 4SAS/SATA+8UniBay drive backplane

 

(1) x8 SlimSAS uplink interface (SAS PORT), managing all drives on the backplane

(2) x4 SlimSAS downlink interface 3 (SAS EXP3)

(3) Power connector 2 (PWR2)

(4) MCIO connector B1/B2 (PCIe5.0 x8)(NVMe-B1/B2), supporting NVMe drives 6 and 7

(5) Power connector 1 (PWR1)

(6) x8 SlimSAS downlink interface 2 (SAS EXP2)

(7) x4 SlimSAS downlink interface 1 (SAS EXP1)

(8) AUX connector (AUX)

(9) MCIO connector B3/B4 (PCIe5.0 x8)(NVMe B3/B4), supporting NVMe drives 4 and 5

(10) MCIO connector A3/A4 (PCIe5.0 x8)(NVMe A3/A4), supporting NVMe drives 8 and 9

(11) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe A1/A2), supporting NVMe drives 10 and 11

For more information about drive numbering, see "Drive numbering."

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 12LFF UniBay drive backplane

The PCA-BP-12LFF-UniBay-2U-G6 12LFF UniBay drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA/NVMe drives.

Figure 39 12LFF UniBay drive backplane

 

(1) MCIO connector A3 (PCIe5.0 x4)(NVMe-A3)

(2) x4 SlimSAS connector (SAS PORT 2), managing the last four SAS/SATA drives on the backplane

(3) MCIO connector B1/B2 (PCIe5.0 x8)(NVMe-B1/B2)

(4) Power connector 2 (PWR 2)

(5) AUX connector 1 (AUX 1)

(6) MCIO connector C1 (PCIe5.0 x4)(NVMe-C1)

(7) Power connector 1 (PWR 1)

(8) x8 SlimSAS connector (SAS PORT 1), managing the first eight SAS/SATA drives on the backplane

(9) MCIO connector C3/C4 (PCIe5.0 x8)(NVMe-C3/C4)

(10) MCIO connector C2 (PCIe5.0 x4)(NVMe-C2)

(11) MCIO connector B3/B4 (PCIe5.0 x8)(NVMe-B3/B4)

(12) MCIO connector A4 (PCIe5.0 x4)(NVMe-A4)

(13) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe-A1/A2)

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 17SAS/SATA+8UniBay drive backplane

The PCA-BP-25SFF-2U-G6 25SFF drive backplane can be installed at the server front to support twenty-five 2.5-inch SAS/SATA/NVMe drives, including 17 SAS/SATA drives and 8 SAS/SATA/NVMe drives. The drive backplane can use an x8 SlimSAS connector to manage 25 SAS/SATA drives. The drive backplane also integrates an Expander chip and three downlink interfaces to connect to other drive backplanes and support more drives.

Figure 40 17SAS/SATA+8UniBay drive backplane

 

(1) x4 SlimSAS downlink interface 3 (SAS EXP 3)

(2) x8 SlimSAS uplink interface (SAS PORT), managing all drives on the backplane

(3) x8 SlimSAS downlink interface 2 (SAS EXP 2)

(4) x4 SlimSAS downlink interface 1 (SAS EXP 1)

(5) Power connector 1 (PWR 1)

(6) Power connector 2 (PWR 2)

(7) MCIO connector 4 (PCIe5.0 x8)(NVMe 4), supporting NVMe drives 17 and 18

(8) AUX connector (AUX)

(9) MCIO connector 3 (PCIe5.0 x8)(NVMe 3), supporting NVMe drives 19 and 20

(10) MCIO connector 2 (PCIe5.0 x8)(NVMe 2), supporting NVMe drives 21 and 22

(11) Power connector 3 (PWR 3)

(12) MCIO connector 1 (PCIe5.0 x8)(NVMe 1), supporting NVMe drives 23 and 24

For more information about drive numbering, see "Drive numbering."

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 16E1.S drive backplane

The PCA-BP-16E1S-1U-G6 16E1.S drive backplane can be installed at the server front to support sixteen 9.5 mm E1.S drives.

Figure 41 16E1.S drive backplane

 

(1) AUX connector (AUX 1)

(2) MCIO connector D3/D4 (PCIe5.0 x8)(EDSFF-D3/D4)

(3) Power connector 1 (PWR 1)

(4) MCIO connector D1/D2 (PCIe5.0 x8)(EDSFF-D1/D2)

(5) MCIO connector C3/C4 (PCIe5.0 x8)(EDSFF-C3/C4)

(6) MCIO connector C1/C2 (PCIe5.0 x8)(EDSFF-C1/C2)

(7) MCIO connector B3/B4 (PCIe5.0 x8)(EDSFF-B3/B4)

(8) MCIO connector B1/B2 (PCIe5.0 x8)(EDSFF-B1/B2)

(9) MCIO connector A3/A4 (PCIe5.0 x8)(EDSFF-A3/A4)

(10) Power connector 2 (PWR 2)

(11) MCIO connector A1/A2 (PCIe5.0 x8)(EDSFF-A1/A2)

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Mid 4LFF SAS/SATA backplane

The PCA-BP-4LFF-2U-M-G6 4LFF drive backplane can be installed in the mid 4LFF drive cage on the server to support four 3.5-inch SAS/SATA drives.

Figure 42 4LFF SAS/SATA drive backplane

 

(1) AUX connector (AUX1)

(2) Power connector (PWR 1)

(3) x4 Mini-SAS-HD connector (SAS PORT1)

 

 

Mid 8SFF UniBay drive backplane

The PCA-BP-8Unibay-2U-M-G6 8SFF UniBay drive backplane consists of two 4SFF UniBay drive backplanes and can be installed in the middle of the server. Each 4SFF UniBay drive backplane supports four 2.5-inch SAS/SATA/NVMe drives.

Figure 43 4SFF UniBay drive backplane

 

(1) AUX connector (AUX)

(2) Power connector (PWR)

(3) MCIO connector B1/B2 (PCIe5.0 x8)(NVME-B1/B2)

(4) MCIO connector B3/B4 (PCIe5.0 x8)(NVME-B3/B4)

(5) x4 Mini-SAS-HD connector (SAS PORT)

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Rear 2LFF SAS/SATA drive backplane

The PCA-BP-2LFF-2U-G6 2LFF SAS/SATA drive backplane is installed at the server rear to support two 3.5-inch SAS/SATA drives.

Figure 44 2LFF SAS/SATA drive backplane

 

(1) x4 Mini-SAS-HD connector (SAS PORT1)

(2) AUX connector (AUX1)

(3) Power connector (PWR1)

 

Rear 4LFF SAS/SATA drive backplane

The PCA-BP-4LFF-2U-G6 4LFF SAS/SATA drive backplane is installed at the server rear to support four 3.5-inch SAS/SATA drives.

Figure 45 4LFF SAS/SATA drive backplane

 

(1) AUX connector (AUX1)

(2) Power connector (PWR1)

(3) x4 Mini-SAS-HD connector (SAS PORT1)

 

Rear 2SFF SAS/SATA drive backplane

The PCA-BP-2SFF-2U-G6 2SFF SAS/SATA drive backplane is installed at the server rear to support two 2.5-inch SAS/SATA drives.

Figure 46 2SFF SAS/SATA drive backplane

 

(1) Power connector (PWR)

(2) x4 Mini-SAS-HD connector (SAS PORT)

(3) AUX connector (AUX)

 

Rear 2SFF UniBay drive backplane

The PCA-BP-2SFF-2UniBay-2U-G6 2SFF UniBay drive backplane is installed at the server rear to support two 2.5-inch SAS/SATA/NVMe drives.

Figure 47 2SFF UniBay drive backplane

 

(1) Power connector (PWR)

(2) x4 Mini-SAS-HD connector (SAS PORT)

(3) SlimSAS connector (PCIe4.0 x8)(NVME)

(4) AUX connector (AUX)

PCIe4.0 x8 description:

·     PCIe4.0: Fourth-generation signal speed.

·     x8: Bus bandwidth.

 

Rear 4SFF SAS/SATA drive backplane

The PCA-BP-4SFF-2U-G6 4SFF SAS/SATA drive backplane is installed at the server rear to support four 2.5-inch SAS/SATA drives.

Figure 48 4SFF SAS/SATA drive backplane

 

(1) x4 Mini-SAS-HD connector (SAS PORT)

(2) AUX connector (AUX)

(3) Power connector (PWR)

 

Rear 4SFF UniBay drive backplane

The PCA-BP-4SFF-4UniBay-2U-G6 4SFF UniBay drive backplane is installed at the server rear to support four 2.5-inch SAS/SATA/NVMe drives.

Figure 49 4SFF UniBay drive backplane

 

(1) AUX connector (AUX)

(2) Power connector (PWR)

(3) MCIO connector B1/B2 (PCIe5.0 x8)(NVME-B1/B2)

(4) MCIO connector B3/B4 (PCIe5.0 x8)(NVME-B3/B4)

(5) x4 Mini-SAS-HD connector (SAS PORT)

 

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Rear 2UniBay drive backplane (compatible with OCP riser card)

The PCA-BP-2UniBay-OCP-2U-G6 2SFF UniBay drive backplane is installed together with an OCP riser card at the server rear to support two 2.5-inch SAS/SATA/NVMe drives.

Figure 50 2SFF UniBay drive backplane

 

(1) SlimSAS connector A1/A2 (PCIe4.0 x8)(NVME-A1/A2)

(2) x4 Mini-SAS-HD connector (SAS PORT)

(3) AUX connector (AUX)

(4) Power connector (PWR)

PCIe4.0 x8 description:

·     PCIe4.0: Fourth-generation signal speed.

·     x8: Bus bandwidth.

 

Riser cards

To expand the server with PCIe modules, install riser cards on the PCIe riser connectors.

Riser card guidelines

Each PCIe slot in a riser card can supply a maximum of 75 W of power to the PCIe module. You must connect a separate power cord to the PCIe module if it requires more than 75 W of power.

If a processor is faulty or absent, the PCIe slots connected to it are unavailable.

The slot number of a PCIe slot varies by the PCIe riser connector that holds the riser card. For example, slot 1/4 represents PCIe slot 1 if the riser card is installed on connector 1 and represents PCIe slot 4 if the riser card is installed on connector 2. For information about PCIe riser connector locations, see "Rear panel view."

RC-FHFL-2U-G6

Figure 51 RC-FHFL-2U-G6

 

(1) PCIe5.0 x16 (16,8,4,2,1) slot 3/6*

(2) Rear 2SFF SAS/SATA drive backplane

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

 

NOTE:

·     slot 3/6: When the riser card is installed in PCIe riser card slot 1, this slot corresponds to PCIe slot 3. When the riser card is installed in PCIe riser card slot 2, this slot corresponds to PCIe slot 6. This rule applies to all the other PCIe riser card slots. For information about PCIe slots, see "Rear panel view."

·     For more information about the rear 2SFF SAS/SATA drive backplane, see "Rear 2SFF SAS/SATA drive backplane."

 

RC-1FHFL-R3-2U-G6

Figure 52 RC-1FHFL-R3-2U-G6

 

(1) GPU module power connector

(2) PCIe5.0 x16 (16,8,4,2,1) slot 7

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

 

NOTE:

This type of riser card is not supported when liquid-cooled modules are installed on the server.

 

RC-2FHFL-2U-LC-G6

Figure 53 RC-2FHFL-2U-LC-G6 (1)

 

Figure 54 RC-2FHFL-2U-LC-G6 (2)

 

(1) PCIe5.0 x16 (16,8,4,2,1) slot 5

(2) PCIe5.0 x16 (16,8,4,2,1) slot 4

(3) GPU module power connector

(4) MCIO connector 2-C

(5) MCIO connector 2-A

(6) MCIO connector 1-A

(7) MCIO connector 1-C

 

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

RC-2HHHL-R3-2U-G6

Figure 55 RC-2HHHL-R3-2U-G6

(1) PCIe5.0 x16 (8,4,2,1) slot 8

(2) PCIe5.0 x16 (8,4,2,1) slot 7

PCIe5.0 x16 (8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1.

 

RC-2FHFL-R3-2U-G6

Figure 56 RC-2FHFL-R3-2U-G6

 

(1) PCIe5.0 x16 (8,4,2,1) slot 8

(2) PCIe5.0 x16 (8,4,2,1) slot 7

(3) GPU module power connector

 

PCIe5.0 x16 (8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1.

 

RC-2HHHL-R4-2U-G6

Figure 57 RC-2HHHL-R4-2U-G6

 

(1) SLOT 2 cable

(2) AUX connector

(3) PCIe5.0 x16 (8,4,2,1) slot 8

(4) PCIe5.0 x16 (8,4,2,1) slot 7

(5) Power connector

(6) SLOT 1 cable

PCIe5.0 x16 (8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1.

 

RC-3FHFL-2U-G6

Figure 58 RC-3FHFL-2U-G6 (1)

 

Figure 59 RC-3FHFL-2U-G6 (2)

 

(1) PCIe5.0 x16 (16,8,4,2,1) slot 2/5

(2) PCIe5.0 x16 (16,8,4,2,1) slot 3/6

(3) GPU module power connector

(4) PCIe5.0 x16 (16,8,4,2,1) slot 1/4*

(5) MCIO connector 2-C

(6) MCIO connector 2-A

(7) MCIO connector 1-A

(8) MCIO connector 1-C

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

 

NOTE:

slot 1/4: When the riser card is installed in PCIe riser card slot 1, this slot corresponds to PCIe slot 1. When the riser card is installed in PCIe riser card slot 2, this slot corresponds to PCIe slot 4. This rule applies to all the other PCIe riser card slots. For information about PCIe slots, see "Rear panel view."

 

RC-3FHHL-2U-G6

Figure 60 RC-3FHHL-2U-G6 (1)

 

Figure 61 RC-3FHHL-2U-G6 (2)

 

(1) PCIe5.0 x16 (8,4,2,1) slot 2/5

(2) PCIe5.0 x16 (8,4,2,1) slot 3/6

(3) PCIe5.0 x16 (16,8,4,2,1) slot 1/4*

(4) MCIO connector 1-A

(5) MCIO connector 1-C

 

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

 

NOTE:

slot 1/4: When the riser card is installed in PCIe riser card slot 1, this slot corresponds to PCIe slot 1. When the riser card is installed in PCIe riser card slot 2, this slot corresponds to PCIe slot 4. This rule applies to all the other PCIe riser card slots. For information about PCIe slots, see "Rear panel view."

 

RC-5HHHL-R5-2U-G5

Figure 62 RC-5HHHL-R5-2U-G5

 

(1) PCIe5.0 x16 (8,4,2,1) slot 12

(2) MCIO connector 2

(3) PCIe5.0 x16 (8,4,2,1) slot 13

(4) MCIO connector 3

(5) PCIe5.0 x16 (8,4,2,1) slot 14

(6) MCIO connector 4

(7) PCIe5.0 x16 (8,4,2,1) slot 15

(8) MCIO connector 5

(9) Power connector

(10) AUX connector

PCIe5.0 x16 (8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1.

 

PCA-R4900-4GPU-G6

Figure 63 PCA-R4900-4GPU-G6 (1)

(1) PCIe5.0 x16 (8,4,2,1) slot 14

(2) PCIe5.0 x16 (8,4,2,1) slot 13

(3) PCIe5.0 x16 (8,4,2,1) slot 12

(4) PCIe5.0 x16 (8,4,2,1) slot 11

(5) GPU module power connector

(6) GPU module power connector

PCIe5.0 x16 (8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1.

 

Figure 64 PCA-R4900-4GPU-G6 (2)

(7) GPU module power connector

(8) GPU module power connector

 

Figure 65 PCA-R4900-4GPU-G6 (3)

(9) PCIe5.0 x16 (16,8,4,2,1) slot 6

(10) PCIe5.0 x16 (16,8,4,2,1) slot 3

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

OCP riser card

Figure 66 OCP riser card

 

(1) MCIO connector 4

(2) MCIO connector 3

(3) Power connector

(4) AUX connector

(5) MCIO connector 2

(6) MCIO connector 1

 

LCD smart management module

An LCD smart management module displays basic server information, operating status, and fault information, and provides diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the LCD module in conjunction with the event logs generated in HDM.

Figure 67 LCD smart management module

 

Table 19 LCD smart management module description

No.

Item

Description

1

Mini-USB connector

Used for upgrading the firmware of the LCD module.

2

LCD module cable

Connects the LCD module to the system board of the server. For information about the LCD smart management module connector on the system board, see "System board."

3

LCD module shell

Protects and secures the LCD screen.

4

LCD screen

Displays basic server information, operating status, and fault information.

 

Fan modules

The server supports four hot-swappable fan modules and N+1 fan module redundancy. Figure 68 shows the layout of the fan modules in the chassis.

The server uses intelligent fan energy-saving and noise-reduction technology, which integrates multiple AI algorithms. It monitors the temperature, power, and other status information of the device in real time to obtain the optimal fan adjustment policy, and then dynamically adjusts the fan duty cycle to meet the device's energy-saving and noise-reduction requirements.

Figure 68 Fan module layout

 

PCIe slot

The server supports installing mid GPU adapters in the server middle and supports installing riser cards, rear GPU modules, and OCP 3.0 adapters at the server rear. The PCIe slot numbers vary by configuration.

Figure 69 PCIe slot numbering when riser cards are installed at the server rear

 

Figure 70 PCIe slot numbering when 2UniBay drive backplanes and OCP 3.0 adapters are installed at the server rear

 

Figure 71 PCIe slot numbering when rear GPU modules are installed

 

Figure 72 PCIe slot numbering on a mid GPU adapter

 

IMPORTANT:

You cannot configure both rear GPU modules and mid GPU adapters on the same server.

 

PCIe modules

Typically, the PCIe modules are available in the following standard form factors:

·     LP—Low profile.

·     FHHL—Full height and half length.

·     FHFL—Full height and full length.

·     HHHL—Half height and half length.

·     HHFL—Half height and full length.

The following PCIe modules require PCIe I/O resources: Storage controllers, FC HBAs, and GPU modules. Make sure the number of such PCIe modules installed does not exceed 11.

Storage controllers

The server supports the following types of storage controllers:

·     Embedded VROC controller—Embedded in the server and does not require installation.

·     Standard storage controller—Comes in a standard PCIe form factor and typically requires a riser card for installation.

For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages. This module provides a flash card and a supercapacitor. When a system power failure occurs, the supercapacitor provides power for a minimum of 20 seconds. During this interval, the storage controller can transfer data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data. If the storage controller contains a built-in flash card, you can order only a supercapacitor.

Embedded VROC controller

Item

Specifications

Type

Embedded in PCH of the system board

Number of internal ports

12 internal SAS ports (compatible with SATA)

Connectors

Three onboard ×4 SlimSAS connectors

Drive interface

6 Gbps SATA 3.0

Supports drive hot swapping

PCIe interface

PCIe3.0 ×4

RAID levels

0, 1, 5, 10

Built-in cache memory

N/A

Built-in flash

N/A

Power fail safeguard module

Not supported

Firmware upgrade

Upgrade with the BIOS

 

Standard storage controllers

For more information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

NVMe VROC modules

Model

RAID levels

Compatible NVMe SSDs

NVMe-VROC-Key-S

0, 1, 10

All NVMe drives

NVMe-VROC-Key-P

0, 1, 5, 10

All NVMe drives

 

Front SATA M.2 expander module

Figure 73 Front view of the front SATA M.2 expander module

 

(1) Data cable connector

(2) M.2 SSD drive slot 1

 

Figure 74 Rear view of the front SATA M.2 expander module

 

(1) SATA M.2 SSD drive slot 2

 

Rear NVMe M.2 expander module

Figure 75 Rear NVMe M.2 expander module

 

(1) NVMe M.2 SSD drive slot 1

(2) NVMe M.2 SSD drive slot 2

 

Server management module

The server management module is installed on the system board to provide I/O connectors and HDM out-of-band features for the server.

Figure 76 Server management module

 

(1) VGA connector

(2) Two USB 3.0 connectors

(3) HDM dedicated network interface

(4) UID LED

(5) HDM serial port

(6) iFIST module

(7) NCSI connector

 

 

Serial & DSD module

The serial & DSD module is installed in the slot on the server rear panel. The module provides one serial port and two SD card slots, and the two SD cards form RAID 1 by default.

Figure 77 Serial & DSD module

 

Table 20 Component description

Item

Description

1

SD card slot 1

2

SD card slot 2

3

Serial port

 

B/D/F information

You can obtain B/D/F information by using one of the following methods:

·     BIOS log—Search for the dumpiio keyword in the BIOS log.

·     UEFI shell—Execute the pci command. For information about how to execute the command, execute the help pci command.

·     Operating system—The obtaining method varies by OS.

¡     For Linux, execute the lspci command.

If Linux does not support the lspci command by default, use the software package manager supported by the operating system to obtain and install the pci-utils package.

¡     For Windows, install the pciutils package, and then execute the lspci command.

¡     For VMware, execute the lspci command.
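
For example, on a Linux server you can list all PCIe devices and then query a single device by its B/D/F address. The commands below are a minimal sketch; the address 0000:31:00.0 is only an illustration, so use an address reported on your own server.

lspci                            # list all PCIe devices with their B/D/F addresses
lspci -vvv -s 0000:31:00.0       # show detailed information for one device by its B/D/F address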

Appendix C  Managed removal of OCP network adapters

Before you begin

Before you perform a managed removal of an OCP network adapter, perform the following tasks:

·     Use the OS compatibility query tool at http://www.h3c.com/en/home/qr/default.htm?id=66 to obtain operating systems that support managed removal of OCP network adapters.

·     Make sure the BIOS version is 6.00.15 or higher, the HDM2 version is 1.13 or higher, and the CPLD version is V001 or higher.

Performing a hot removal

This section uses an OCP network adapter in slot 16 as an example.

To perform a hot removal:

1.     Access the operating system.

2.     Execute the dmidecode -t 9 command to search for the bus address of the OCP network adapter. As shown in Figure 78, the bus address of the OCP network adapter in slot 16 is 0000:31:00.0.
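For example, you can filter the dmidecode output to display only the slot designations and their bus addresses. The output shown below is illustrative, and the exact designation strings vary by BIOS version:

dmidecode -t 9 | grep -E "Designation|Bus Address"
# Illustrative output:
#   Designation: PCIe Slot 16
#   Bus Address: 0000:31:00.0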

Figure 78 Searching for the bus address of an OCP network adapter by slot number

 

3.     Execute the echo 0 > /sys/bus/pci/slots/slot number/power command, where slot number represents the number of the slot where the OCP network adapter resides.
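For example, for the OCP network adapter in slot 16, the commands might be as follows. You can list the available slot numbers from sysfs first; slot 16 is used here only as an example:

# List the hot-pluggable PCIe slot numbers known to the kernel
ls /sys/bus/pci/slots/
# Power off the OCP network adapter in slot 16
echo 0 > /sys/bus/pci/slots/16/power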

Figure 79 Executing the echo 0 > /sys/bus/pci/slots/slot number/power command

 

4.     Identify whether the OCP network adapter has been disconnected:

¡     Observe the OCP network adapter LED. If the LED is off, the OCP network adapter has been disconnected.

¡     Execute the lspci -vvv -s 0000:31:00.0 command. If no output is displayed, the OCP network adapter has been disconnected.

Figure 80 Identifying OCP network adapter status

 

5.     Replace the OCP network adapter.

6.     Identify whether the OCP network adapter has been connected:

¡     Observe the OCP network adapter LED. If the LED is on, the OCP network adapter has been connected.

¡     Execute the lspci -vvv -s 0000:31:00.0 command. If an output is displayed, the OCP network adapter has been connected.

Figure 81 Identifying OCP network adapter status

 

7.     Verify that no exceptions exist. If any exception occurs, contact H3C Support.


Appendix D  Environment requirements

About environment requirements

The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.

Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance. In a real data center, the server cooling performance might decrease because of adverse external factors, including poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.

General environment requirements

Operating temperature: Minimum 5°C (41°F), maximum 40°C (104°F).

CAUTION: The maximum temperature varies by hardware option presence. For more information, see "Operating temperature requirements."

Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity: 8% to 90%, noncondensing

Storage humidity: 5% to 95%, noncondensing

Operating altitude: –60 m to +3000 m (–196.85 ft to +9842.52 ft). The allowed maximum operating temperature decreases by 0.33°C (0.59°F) for each 100 m (328.08 ft) increase in altitude above 900 m (2952.76 ft). For a worked example, see the note following this table.

Storage altitude: –60 m to +5000 m (–196.85 ft to +16404.20 ft)
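As a worked example of the altitude-based derating, assume the 40°C (104°F) general maximum and an operating altitude of 2000 m (6561.68 ft). This altitude exceeds 900 m (2952.76 ft) by 1100 m (3608.92 ft), so the allowed maximum operating temperature decreases by 11 × 0.33°C ≈ 3.6°C (6.5°F), to approximately 36.4°C (97.5°F).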

 

Operating temperature requirements

General guidelines

The cooling capability of the server depends on the power density of devices within the cabinet, the cooling capability of the cabinet, and the spacing between the server and other devices. When the server is stacked with other devices, the maximum operating temperature of the server might decrease.

Operating temperature requirements without liquid-cooled module

If a mid drive or mid GPU adapter is installed, you cannot install processors with a TDP of more than 165 W.

When a CRPS-195mm 2000W power supply is installed, the maximum server operating temperature decreases by 3°C (5.4°F).

When an 8038 or 8056 fan fails, the maximum server operating temperature decreases by 5°C (9°F). If the power consumption of GPU modules is greater than 165W, the performance of processors might decrease.

Table 21 shows the operating temperature requirements for the server without the liquid-cooled module.

Table 21 Operating temperature requirements without liquid-cooled module

Drive configurations: 8LFF; 8SFF (see note 1; without TOSHIBA SAS HDD SFF); 16SFF; E1.S

·     Maximum temperature 30°C (86°F): Smart network adapters and GPUs are not supported when mid drives are installed.

·     Maximum temperature 35°C (95°F): The following are not supported: A40 GPUs; mid GPUs; smart network adapters and GPUs when mid drives are installed; processors of more than 270W when the 4GPU module is installed.

·     Maximum temperature 40°C (104°F): The following are not supported: GPUs; U.2 NVMe SSDs; mid drives and rear drives; processors of more than 165W; the Delta DPS-1600AB-13 R 1600W Platinum AC power supply.

·     Remarks: Fan speed unrestricted.

Drive configurations: 8SFF (see note 1; with TOSHIBA SAS HDD SFF); 12LFF; 25SFF; 24SFF

·     Maximum temperature 30°C (86°F): The following are not supported: A40/A16 GPUs; mid GPUs; smart network adapters and GPUs when mid drives are installed.

·     Maximum temperature 35°C (95°F): The following are not supported: A100-80G/A40/A16 GPUs; processors of more than 300W when rear drives (including HDDs and U.2/E1.S NVMe SSDs, excluding SATA/SAS SSDs) are installed; the Delta DPS-1600AB-13 R 1600W Platinum AC power supply; mid GPUs; smart network adapters and GPUs when mid drives are installed; processors of more than 270W when the rear 4GPU module is installed; processors of more than 250W when the rear A2 GPU is installed; processors of more than 250W when E1.S NVMe drives of 3.84 TB are installed.

·     Remarks: With the 8SFF drive configuration that uses TOSHIBA SAS HDD SFF, the 8056 fan module is limited to 90% of its maximum speed.

Note 1: The 8SFF drives are installed in slots 0 to 7. For more information about drive slots, see "Drive numbering."

 

Operating temperature requirements with liquid-cooled module

When an 8038 or 8056 fan fails, the maximum server operating temperature decreases by 5°C (9°F).

Table 22 shows the operating temperature requirements for the server with the liquid-cooled module.

Table 22 Operating temperature requirements with liquid-cooled module

Drive configurations: 8LFF; 8SFF (see note 1; without TOSHIBA SAS HDD SFF); 16SFF; E1.S

·     Maximum temperature 30°C (86°F): All configurations are supported.

·     Maximum temperature 35°C (95°F): All configurations are supported.

·     Maximum temperature 40°C (104°F): The following are not supported: GPUs; U.2/E1.S NVMe SSDs; mid drives and rear drives; the Delta DPS-1600AB-13 R 1600W Platinum AC power supply.

·     Remarks: Fan speed unrestricted.

Drive configurations: 8SFF (see note 1; with TOSHIBA SAS HDD SFF); 12LFF; 25SFF; 24SFF

·     Maximum temperature 30°C (86°F): A40/A16 GPUs are not supported.

·     Maximum temperature 35°C (95°F): A100-80G/A40/A16 GPUs are not supported.

·     Remarks: With the 8SFF drive configuration that uses TOSHIBA SAS HDD SFF, the 8056 fan module is limited to 90% of its maximum speed.

Note 1: The 8SFF drives are installed in slots 0 to 7. For more information about drive slots, see "Drive numbering."


Appendix E  Product recycling

New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualification are contracted to New H3C to process the recycled hardware in an environmentally responsible way.

For product recycling services, contact New H3C at

·     Tel: 400-810-0504

·     E-mail: [email protected]

·     Website: http://www.h3c.com

 


Appendix F  Glossary

Item

Description

B

BIOS

Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's management module. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality.

C

CPLD

Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits.

G

GPU module

Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance.

H

HDM

Hardware Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server.

Hot swapping

A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting the system operation.

K

KVM

KVM is a management method that allows remote users to use their local video display, keyboard, and mouse to monitor and control the server.

N

NVMe VROC module

A module that works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

R

RAID

Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage and security performance.

Redundancy

A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails.

S

Security bezel

A locking bezel mounted to the front of a server to prevent unauthorized access to modules such as hard drives.

U

U

A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks.

UniBay drive backplane

A UniBay drive backplane supports both SAS/SATA and NVMe drives.

UniSystem

UniSystem is management software provided by H3C for easy and extensible server management. It guides users through quick server configuration and provides an API that allows users to develop their own management tools.

V

VMD

VMD provides hot removal, management and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability.

 


Appendix G  Acronyms

B

BIOS: Basic Input/Output System

C

CMA: Cable Management Arm

CPLD: Complex Programmable Logic Device

D

DCPMM: Data Center Persistent Memory Module

DDR: Double Data Rate

DIMM: Dual In-Line Memory Module

DRAM: Dynamic Random Access Memory

DVD: Digital Versatile Disc

G

GPU: Graphics Processing Unit

H

HBA: Host Bus Adapter

HDD: Hard Disk Drive

HDM: Hardware Device Management

I

IDC: Internet Data Center

iFIST: integrated Fast Intelligent Scalable Toolkit

K

KVM: Keyboard, Video, Mouse

L

LRDIMM: Load Reduced Dual Inline Memory Module

N

NCSI: Network Controller Sideband Interface

NVMe: Non-Volatile Memory Express

P

PCIe: Peripheral Component Interconnect Express

POST: Power-On Self-Test

R

RAID: Redundant Array of Independent Disks

RDIMM: Registered Dual Inline Memory Module

S

SAS: Serial Attached Small Computer System Interface

SATA: Serial ATA

SD: Secure Digital

SDS: Secure Diagnosis System

SFF: Small Form Factor

sLOM: Small form factor Local Area Network on Motherboard

SSD: Solid State Drive

T

TCM: Trusted Cryptography Module

TDP: Thermal Design Power

TPM: Trusted Platform Module

U

UID: Unit Identification

UPI: Ultra Path Interconnect

UPS: Uninterruptible Power Supply

USB: Universal Serial Bus

V

VROC: Virtual RAID on CPU

VMD: Volume Management Device

 
