H3C UniServer R6700 G3 Server User Guide-6W101


02-Appendix

Contents

Appendix A  Server specifications

Server models and chassis view

Technical specifications

Components

Front panel

Front panel view

LEDs and buttons

Ports

Rear panel

Rear panel view

LEDs

Ports

System board

System board components

System maintenance switches

Processor mezzanine board components

DIMM slots

Appendix B  Component specifications

About component model names

DIMMs

DRAM DIMM rank classification label

HDDs and SSDs

Drive LEDs

Drive configurations and numbering

PCIe modules

Storage controllers

Riser cards

Riser card guidelines

RC-2*FHHL-G3

RC-2LP-2U-G3-2

RC-2LP-2U-G3-3

RC-3*FHHL-G3

RC-3FHHL-2U-G3-1

RC-4*NVME-3*FHHL-G3

RC-8*NVME-1*FHHL-G3

B/D/F information

Viewing B/D/F information

Obtaining B/D/F information

Fans

Fan layout

Power supplies

800 W Platinum power supply

800 W 336 V high-voltage power supply

850 W Titanium power supply

1200 W Platinum power supply

1600 W Platinum power supply PSR1600-12A

1600 W Platinum power supply DPS-1600AB-13 R

1600 W 336 V high-voltage DC power supply

Expander modules

Diagnostic panels

Diagnostic panel view

LEDs

Storage options other than HDDs and SSDs

NVMe VROC modules

Appendix C  Hot removal and managed hot removal of NVMe drives

Performing a managed hot removal in Windows

Prerequisites

Procedure

Performing a managed hot removal in Linux

Prerequisites

Performing a managed hot removal from the CLI

Appendix D  Environment requirements

About environment requirements

General environment requirements

Operating temperature requirements

General guidelines

8SFF front drive configuration

16SFF front drive configuration

24SFF front drive configuration

Appendix E  Product recycling

Appendix F  Glossary

Appendix G  Acronyms

 


Appendix A  Server specifications

The information in this document might differ from your product if it contains custom configuration options or features.

Server models and chassis view

H3C UniServer R6700 G3 servers are 2U rack servers with four Intel Purley or Jintide-C series processors. The servers are suitable for compute-intensive scenarios, such as virtualization, high-performance computing (HPC), cloud computing, in-memory computing, databases, and SAP HANA. They can be deployed to support the business applications of service providers, finance companies, governments, enterprise clouds, and research institutes. The servers feature high computing performance, low power consumption, strong expandability, and high availability, allowing for simple deployment and management.

Figure 1 Chassis view

 

The servers come in the models listed in Table 1. These models support different drive configurations. For more information about drive configuration and compatible storage controller configuration, see "Drive configurations and numbering."

Table 1 R6700 G3 server models

Model

Maximum drive configuration

8SFF SAS/SATA

24 SFF drives at the front + 2 SFF drives at the rear.

8SFF UniBay

24 SFF drives at the front + 2 SFF drives at the rear.

NOTE:

A UniBay drive slot supports both SAS/SATA drives and NVMe drives. For more information about drive slots, see "Front panel view."

 

Technical specifications

Item

Specifications

Dimensions (H × W × D)

·     Without a security bezel: 87.5 × 445.4 × 748 mm (3.44 × 17.54 × 29.45 in)

·     With a security bezel: 87.5 × 445.4 × 769 mm (3.44 × 17.54 × 30.28 in)

Max. weight

29.5 kg (65.04 lb)

Processors

4 × Intel Purley processors or Jintide-C series processors

(Up to 3.8 GHz base frequency, maximum 205 W power consumption, and 38.5 MB cache per processor)

Memory

A maximum of 48 DIMMs (supports a mixture of DCPMMs and DRAM DIMMs)

Chipset

Intel C621 Lewisburg chipset

Network connection

·     1 × onboard 1 Gbps HDM dedicated network port

·     2 × onboard 1 Gbps network adapter ports

·     1 × sLOM network adapter connector

I/O connectors

·     6 × USB connectors:

¡     5 × USB 3.0 connectors (two on the system board, two at the server rear, and one at the server front)

¡     1 × USB 2.0 connector at the server front

·     9 SATA connectors in total:

¡     1 × onboard mini-SAS-HD connector (×8 SATA connectors)

¡     1 × onboard ×1 SATA connector

·     1 × RJ-45 HDM dedicated port at the server rear

·     2 × 1 Gbps network adapter ports at the server rear

·     2 × VGA connectors (one at the server rear and one at the server front)

·     1 × BIOS serial port at the server rear

Expansion slots

A maximum of 11 × PCIe 3.0 slots (Up to 10 standard PCIe modules and one sLOM network adapter)

Optical drives

External USB optical drives

Power supplies

2 × hot-swappable power supplies in redundancy

800 W Platinum, 800 W 336 V high-voltage DC, 850 W Titanium, 1200 W Platinum, 1600 W Platinum, and 1600 W 336 V high-voltage DC power supplies

 

Components

Figure 2 R6700 G3 server components

 

Table 2 R6700 G3 server components

Item

Description

(1) Power supply

Supplies power to the server. It supports hot swapping and 1+1 redundancy.

(2) sLOM network adapter

Installed on the sLOM network adapter connector of the system board for network expansion.

(3) Riser card

Installed in the server to provide additional slots for PCIe modules.

(4) Processor heatsink

Cools the processor.

(5) Processor retaining bracket

Attaches a processor to the heatsink.

(6) Processor

Integrates a memory controller and a PCIe controller to provide data processing capabilities for the server.

(7) Riser card blank

Installed on an empty riser card connector to ensure good ventilation.

(8) Fan

Supports hot swapping and N+1 redundancy.

(9) Supercapacitor

Supplies power to the flash card of the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when power outage occurs.

(10) Chassis-open alarm module

Generates a chassis open alarm every time the access panel is removed. The alarms can be displayed from the HDM Web interface.

(11) NVMe VROC module

Works with VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

(12) System battery

Supplies power to the system clock.

(13) Multifunctional rack mount ears

Attach the server to the rack. The right ear is integrated with the front I/O component. The left ear is integrated with VGA and USB 2.0 connectors.

(14) Drive cage

Encloses drives.

(15) Drive

Drive for data storage, which is hot swappable.

(16) Diagnostic panel

Displays information about faulty components for quick diagnosis.

(17) Fan cage

Used for holding fans.

(18) Memory

Stores computing data and data exchanged with external storage.

(19) TPM or TCM module

Provides encryption services to enhance data security.

(20) Access panel

N/A

(21) Drive backplane

Provides power and data channels for drives.

(22) System board

One of the most important parts of a server, on which multiple components are installed, such as processors, memory, and fans. It is integrated with basic server components, including the BIOS chip, HDM chip, and PCIe connectors.

(23) Chassis

N/A

(24) Processor mezzanine board

Installed over the system board to provide additional processor connectors and DIMM slots.

 

Front panel

Front panel view

Figure 3 shows the front panel view of the server.

Figure 3 Front panel

(1) VGA connector

(2) USB 2.0 connector

(3) Drive cage bay 1 for 8SFF SAS/SATA drives (optional)

(4) Drive cage bay 2 for 8SFF SAS/SATA drives or 8SFF UniBay drives

(5) Serial label pull tab module

(6) Drive cage bay 3 for 8SFF SAS/SATA drives or 8SFF UniBay drives (optional)

(7) Diagnostic panel or LCD module (optional)

(8) USB 3.0 connector

(9) UniBay drives (for the 4SFF UniBay + 4 SFF SAS/SATA drive configuration)

(10) SAS/SATA drives (for the 4SFF UniBay + 4 SFF SAS/SATA drive configuration)

 

LEDs and buttons

Figure 4 shows the front panel LEDs and buttons. Table 3 describes the status of the front panel LEDs.

Figure 4 Front panel LEDs and buttons

(1) Power on/standby button and system power LED

(2) Health LED

(3) sLOM or embedded network adapter Ethernet port LED

(4) UID button LED

 

Table 3 LEDs and buttons on the front panel

Button/LED

Status

Power on/standby button and system power LED

·     Steady green—The system has started.

·     Flashing green (1 Hz)—The system is starting.

·     Steady amber—The system is in Standby state.

·     Off—No power is present. Possible reasons:

¡     No power source is connected.

¡     No power supplies are present.

¡     The installed power supplies are faulty.

¡     The system power cords are not connected correctly.

Health LED

·     Steady green—The system is operating correctly or a minor alarm has occurred.

·     Flashing green (4 Hz)—HDM is initializing.

·     Flashing amber (1 Hz)—A major alarm has occurred.

·     Flashing red (1 Hz)—A critical alarm has occurred.

If a system alarm is present, log in to HDM to obtain more information about the system running status.

sLOM or embedded network adapter Ethernet port LED

·     Steady green—A link is present on the port.

·     Flashing green (1 Hz)—The port is receiving or sending data.

·     Off—No link is present on the port.

UID button LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Activate the UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being upgraded or the system is being managed from HDM.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for a minimum of 8 seconds.

·     Off—UID LED is not activated.

 

Ports

The server provides fixed USB 3.0/2.0 and VGA connectors on its front panel.

Table 4 Ports on the front panel

Port

Type

Description

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

USB connector

USB 3.0/2.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

 

Rear panel

Rear panel view

Figure 5 shows the rear panel view.

Figure 5 Rear panel components

(1) PCIe slots 1 through 3

(2) PCIe slots 4 through 6

(3) PCIe slots 7 and 8

(4*) 2SFF UniBay drive cage or PCIe riser card (optional)

(5) Power supply 2

(6) Power supply 1

(7) VGA connector

(8) BIOS serial port

(9) USB 3.0 connectors

(10) Ethernet ports (1 Gbps, RJ-45)

(11) HDM dedicated network port (1 Gbps, RJ-45, default IP address 192.168.1.2/24)

(12) sLOM network adapter in slot 11 (optional)

 

 

NOTE:

·     The asterisk (*) indicates that if a PCIe riser card is installed, PCIe slots are numbered 9 to 10 from the top down.

·     A UniBay drive slot supports both SAS/SATA drives and NVMe drives.

 

LEDs

Figure 6 shows the rear panel LEDs. Table 5 describes the status of the rear panel LEDs.

Figure 6 Rear panel LEDs

(1, 4, and 6) Link LEDs of the Ethernet ports

(2, 5, and 7) Activity LEDs of the Ethernet ports

(3) UID LED

(8) Power supply 1 LED

(9) Power supply 2 LED

 

Table 5 LEDs on the rear panel

LED

Status

Link LED of an Ethernet port

·     Steady green—A link is present on the port.

·     Off—No link is present on the port.

Activity LED of an Ethernet port

·     Flashing green (1 Hz)—The port is receiving or sending data.

·     Off—The port is not receiving or sending data.

UID LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Enable UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being updated or the system is being managed by HDM.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for a minimum of 8 seconds.

·     Off—UID LED is not activated.

Power supply LED

·     Steady green—The power supply is operating correctly.

·     Flashing green (1 Hz)—Power is being input correctly but the system is not powered on.

·     Flashing green (0.33 Hz)—The power supply is in standby state and does not output power.

·     Flashing green (2 Hz)—The power supply is updating its firmware.

·     Steady amber—Either of the following conditions exists:

¡     The power supply is faulty.

¡     The power supply does not have power input, but the other power supply has correct power input.

·     Flashing amber (1 Hz)—An alarm has occurred on the power supply.

·     Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown.

 

Ports

Table 6 Ports on the rear panel

Port

Type

Description

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

BIOS serial port

RJ-45

The BIOS serial port is used for the following purposes:

·     Log in to the server when the remote network connection to the server has failed.

·     Establish a GSM modem or encryption lock connection.

USB connector

USB 3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

Ethernet port

RJ-45

Establishes a network connection for interaction with external devices.

HDM dedicated network port

RJ-45

Establishes a network connection to manage HDM from its Web interface.

Power receptacle

Standard single-phase

Connects the power supply to the power source.

 

System board

System board components

Figure 7 System board components

(1) TPM/TCM connector

(2) System maintenance switches (below the riser card support bracket)

(3) PCIe riser connector 1

(4) sLOM network adapter connector (slot 11)

(5) System battery

(6) Mini-SAS-HD port (×8 SATA ports)

(7) Front I/O connector

(8) NVMe VROC module connector

(9) SATA port

(10) LCD module connector

(11) Diagnostic panel connector

(12) Front drive backplane power connector 1

(13) Front drive backplane AUX connector 1

(14) Front drive backplane power connector 2

(15) Front drive backplane AUX connector 2

(16) Front drive backplane power connector 3

(17) Front drive backplane AUX connector 3

(18) Rear drive backplane power connector

(19) Chassis-open alarm module, front VGA, and USB 2.0 connector

(20) Rear drive backplane AUX connector

(21) Dual internal USB 3.0 connectors

(22) PCIe riser connector 4

(23) Processor mezzanine board connector

(24) PCIe riser connector 3

(25) Dual SD card extended module connector

(26) PCIe riser connector 2

 

 

System maintenance switches

Use the system maintenance switches shown in Figure 8 if you forget the HDM username, HDM password, or BIOS password, or if you need to restore the default BIOS settings, as described in Table 7. To identify the location of the switches on the system board, see Figure 7.

Figure 8 System maintenance switches

 

To use the system maintenance switches, remove the riser card support bracket installed over the switches.

Table 7 System maintenance switches

Item

Description

Remarks

Switch 1

·     OFF (default)—HDM login requires the username and password of a valid HDM user account.

·     ON—HDM login requires the default username and password.

For security purposes, turn off the switch after you complete tasks with the default username and password as a best practice.

Switch 5

·     OFF (default)—Normal server startup.

·     ON—Restores the default BIOS settings.

To restore the default BIOS settings, turn on and then turn off the switch. The server starts up with the default BIOS settings at the next startup.

CAUTION:

The server cannot start up when the switch is turned on.

Switch 6

·     OFF (default)—Normal server startup.

·     ON—Clears all passwords from the BIOS at server startup.

To clear all passwords from the BIOS, turn on the switch and then start the server. All the passwords will be cleared from the BIOS. Before the next server startup, turn off the switch to perform a normal server startup.

Switches 2, 3, 4, 7, and 8

Reserved.

N/A

 

Processor mezzanine board components

Figure 9 Processor mezzanine board components

(1) SlimSAS connector 4A (×8 SlimSAS port, for processor 4)

(2) SlimSAS connector 4B (×8 SlimSAS port, for processor 4)

(3) SlimSAS connector 3B (×8 SlimSAS port, for processor 4)

(4) SlimSAS connector 3A (×8 SlimSAS port, for processor 4)

(5) SlimSAS connector 2B (×8 SlimSAS port, for processor 3)

(6) SlimSAS connector 2A (×8 SlimSAS port, for processor 3)

(7) SlimSAS connector 1A (×8 SlimSAS port, for processor 3)

(8) SlimSAS connector 1B (×8 SlimSAS port, for processor 3)

 

DIMM slots

The system board and the processor mezzanine board each host two processors and provide six DIMM channels per processor, 12 channels per board in total. Each channel contains one white-coded slot and one black-coded slot, as shown in Table 8.

Table 8 DIMM slot numbering and color-coding scheme

Processor

DIMM slots

Processor 1

A1 through A6 (white coded)

A7 through A12 (black coded)

Processor 2

B1 through B6 (white coded)

B7 through B12 (black coded)

Processor 3

A1 through A6 (white coded)

A7 through A12 (black coded)

Processor 4

B1 through B6 (white coded)

B7 through B12 (black coded)

 

Figure 10 and Figure 11 show the physical layout of the DIMM slots on the system board and the processor mezzanine board, respectively. For more information about the DIMM slot population rules, see the guidelines in "Installing DIMMs."

Figure 10 DIMM physical layout on the system board

 

Figure 11 DIMM physical layout on the processor mezzanine board

 


Appendix B  Component specifications

For components compatible with the server and detailed component information, visit the query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.

About component model names

The model name of a hardware option in this document might differ slightly from its model name label.

A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR4-2666-8G-1Rx8-R memory model represents memory module labels including DDR4-2666-8G-1Rx8-R, DDR4-2666-8G-1Rx8-R-F, and DDR4-2666-8G-1Rx8-R-S, which have different suffixes.

DIMMs

DRAM DIMM rank classification label

A DIMM rank is a set of memory chips that the system accesses while writing data to or reading data from the memory. On a multi-rank DIMM, only one rank is accessible at a time.

To determine the rank classification of a DRAM DIMM, use the label attached to the DIMM, as shown in Figure 12.

Figure 12 DRAM DIMM rank classification label

 

Table 9 DIMM rank classification label description

Callout

Description

Remarks

1

Capacity

·     8GB.

·     16GB.

·     32GB.

2

Number of ranks

·     1R—One rank.

·     2R—Two ranks.

·     4R—Four ranks.

·     8R—Eight ranks.

3

Data width

·     ×4—4 bits.

·     ×8—8 bits.

4

DIMM generation

Only DDR4 is supported.

5

Data rate

·     2133P—2133 MHz.

·     2400T—2400 MHz.

·     2666V—2666 MHz.

·     2933Y—2933 MHz.

6

DIMM type

·     L—LRDIMM.

·     R—RDIMM.
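
To illustrate how the label fields map to DIMM attributes, the following minimal Python sketch decodes a label such as 16GB 2Rx4 DDR4-2933Y-R according to Table 9. The parsing pattern and function name are illustrative assumptions, not an H3C utility.

```python
import re

# Illustrative decoder for DRAM DIMM rank classification labels such as
# "16GB 2Rx4 DDR4-2933Y-R" (format assumed from Table 9; not an official tool).
LABEL_PATTERN = re.compile(
    r"(?P<capacity>\d+GB)\s+"      # callout 1: capacity
    r"(?P<ranks>[1248])R"          # callout 2: number of ranks
    r"x(?P<width>[48])\s+"         # callout 3: data width in bits
    r"(?P<generation>DDR4)-"       # callout 4: DIMM generation
    r"(?P<rate>\d{4}[PTVY])-"      # callout 5: data rate code
    r"(?P<type>[LR])"              # callout 6: DIMM type
)

RATE_MHZ = {"2133P": 2133, "2400T": 2400, "2666V": 2666, "2933Y": 2933}
TYPE_NAME = {"L": "LRDIMM", "R": "RDIMM"}

def decode_dimm_label(label: str) -> dict:
    """Decode a DIMM label string into its rank classification fields."""
    match = LABEL_PATTERN.search(label)
    if not match:
        raise ValueError(f"Unrecognized DIMM label: {label!r}")
    fields = match.groupdict()
    return {
        "capacity": fields["capacity"],
        "ranks": int(fields["ranks"]),
        "data_width_bits": int(fields["width"]),
        "generation": fields["generation"],
        "data_rate_mhz": RATE_MHZ[fields["rate"]],
        "dimm_type": TYPE_NAME[fields["type"]],
    }

print(decode_dimm_label("16GB 2Rx4 DDR4-2933Y-R"))
# {'capacity': '16GB', 'ranks': 2, 'data_width_bits': 4, ...}
```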

 

HDDs and SSDs

Drive LEDs

The server supports SAS, SATA, and NVMe drives, of which SAS and SATA drives support hot swapping. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.

Figure 13 shows the location of the LEDs on a drive.

Figure 13 Drive LEDs

(1) Fault/UID LED

(2) Present/Active LED

 

To identify the status of a SAS or SATA drive, use Table 10. To identify the status of an NVMe drive, use Table 11.

Table 10 SAS/SATA drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Steady green/Flashing green (4.0 Hz)

A drive failure is predicted. As a best practice, replace the drive before it fails.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and is selected by the storage controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Table 11 NVMe drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Off

The managed hot removal process is completed. You can remove the drive safely.

Flashing amber (4.0 Hz)

Off

The drive is in the hot insertion process.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and selected by the storage controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Drive configurations and numbering

The embedded RSTe RAID controller supports only SATA drives, and the standard storage controllers support both SAS and SATA drives.

Table 12 presents the required storage controllers and riser cards for different front drive configurations and Table 13 presents the used drive cage bays and drive numbering schemes.

Table 12 Drive, storage controller, and riser card configurations

Server model

Front drive configuration

Storage controller

Riser card

8SFF SAS/SATA server

8SFF

(8 front SFF SAS/SATA drives)

·     Embedded RSTe RAID controller

·     1 × standard storage controller with 8 internal SAS ports

Any

16SFF

(16 front SFF SAS/SATA drives)

·     2 × standard storage controllers with 8 internal SAS ports

·     1 × RAID-LSI-9460-16i(4G) or RAID-LSI-9560-LP-16i-8GB standard storage controller

Any

16SFF

(8 front SFF SAS/SATA drives + 8 front SFF UniBay drives)

2 × standard storage controllers with 8 internal SAS ports

1 × RC-8*NVME-1*FHHL-G3 riser card

16SFF

(8 front SFF SAS/SATA drives + 8 front SFF NVMe drives)

1 × standard storage controller with 8 internal SAS ports

1 × RC-8*NVME-1*FHHL-G3 riser card

24SFF

(24 SFF SAS/SATA drives)

·     3 × standard storage controllers with 8 internal SAS ports

·     2 × standard storage controllers with 8 internal SAS ports + 1 × RAID-LSI-9460-16i(4G) standard storage controller

Any

24SFF

(16 front SFF SAS/SATA drives + 8 front SFF UniBay drives)

·     3 × standard storage controllers with 8 internal SAS ports

·     1 × standard storage controller with 8 internal SAS ports + 1 × RAID-LSI-9460-16i(4G) standard storage controller

1 × RC-8*NVME-1*FHHL-G3 riser card

24SFF

(16 front SFF SAS/SATA drives + 8 front SFF NVMe drives)

·     2 × standard storage controllers with 8 internal SAS ports

·     1 × RAID-LSI-9460-16i(4G) standard storage controller

1 × RC-8*NVME-1*FHHL-G3 riser card

8SFF UniBay server

8SFF

(4 front SFF UniBay drives in drive slots 4 through 7 + 4 front SFF SAS/SATA drives in drive slots 0 through 3)

·     1 × embedded RSTe storage controller

·     1 × standard storage controller with 8 internal SAS ports

1 × RC-4*NVME-3*FHHL-G3 riser card for the UniBay drives

8SFF

(8 front SFF UniBay drives)

·     1 × embedded RSTe storage controller

·     1 × standard storage controller with 16 internal SAS ports

1 × RC-8*NVME-1*FHHL-G3 riser card for the UniBay drives

16SFF

(16 front SFF NVMe drives)

N/A

2 × RC-8*NVME-1*FHHL-G3 riser cards

16SFF

(16 front SFF UniBay drives)

·     2 × standard storage controllers with 8 internal SAS ports

·     1 × RAID-LSI-9460-16i(4G) standard storage controller

2 × RC-8*NVME-1*FHHL-G3 riser cards for the UniBay drives

24SFF

(8 front SFF SAS/SATA drives + 16 front SFF UniBay drives)

·     3 × standard storage controllers with 8 internal SAS ports

·     1 × RAID-LSI-9460-16i(4G) standard storage controller + 1 × standard storage controller with 8 internal SAS ports

2 × RC-8*NVME-1*FHHL-G3 riser cards for the UniBay drives

 

 

NOTE:

To install 2SFF drives at the server rear, connect the rear drives to the embedded RSTe RAID controller and connect the front drives to the standard controllers as described in Table 12.

 

Table 13 Drive population and drive numbering schemes

Drive configuration

Drive numbering

8SFF front drives

See Figure 14.

16SFF front drives

See Figure 15.

24SFF front drives

See Figure 16.

2SFF rear drives

See Figure 17.

 

 

NOTE:

For the location of the drive cage bays on the front panel of the server, see "Front panel view."

 

Figure 14 Drive numbering for 8SFF drive configurations

 

Figure 15 Drive numbering for 16SFF drive configurations

 

Figure 16 Drive numbering for the 24SFF drive configuration

 

Figure 17 Drive numbering for the 2SFF drives at the server rear

 

PCIe modules

Typically, the PCIe modules are available in the following standard form factors:

·     LP—Low profile.

·     FHHL—Full height and half length.

·     FHFL—Full height and full length.

·     HHHL—Half height and half length.

·     HHFL—Half height and full length.

Storage controllers

The server supports the following types of storage controllers depending on their form factors:

·     Embedded RSTe RAID controller—Embedded in the server and does not require installation.

·     Standard storage controller—Comes in a standard PCIe form factor and typically requires a riser card for installation.

For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages. This module provides a flash card and a supercapacitor. When a system power failure occurs, the supercapacitor provides power for a minimum of 20 seconds. During this interval, the storage controller can transfer data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data. If the storage controller contains a built-in flash card, you can order only a supercapacitor.

Embedded RSTe RAID controller

Item

Specifications

Type

Embedded in the PCH of the system board

Connectors

·     One onboard ×8 mini-SAS-HD connector

·     One onboard ×1 SATA connector

Number of internal ports

9 internal SATA ports

Drive interface

6 Gbps SATA 3.0

PCIe interface

PCIe2.0 ×4

RAID levels

0, 1, 5, 10

Built-in cache memory

N/A

Power fail safeguard module

Not supported

Firmware upgrade

Upgraded with the BIOS

 

Other storage controllers

For more information, visit the query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.

Riser cards

To expand the server with PCIe modules, you can install riser cards on the PCIe riser connectors.

Riser card guidelines

Each PCIe slot in a riser card can supply a maximum of 75 W of power to the PCIe module. If a PCIe module requires more than 75 W, you must connect a separate power cord to it.

If a processor is faulty or absent, the corresponding PCIe slots are unavailable.

The slot number of a PCIe slot varies by the PCIe riser connector that holds the riser card. For example, slot 1/4 represents PCIe slot 1 if the riser card is installed on connector 1 and represents PCIe slot 4 if the riser card is installed on connector 2.

For information about PCIe riser connector locations, see "Rear panel."
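
As a quick illustration of this naming convention, the following Python sketch maps the slot labels used in the riser card figures in this section (slots 1/4, 2/5, and 3/6 on riser connectors 1 and 2) to PCIe slot numbers. The mapping table and names are illustrative assumptions based on those figures.

```python
# Illustrative mapping of the riser slot naming convention described above:
# the same physical riser slot resolves to a different PCIe slot number
# depending on which riser connector holds the card.
SLOT_BY_CONNECTOR = {
    "1/4": {1: 1, 2: 4},
    "2/5": {1: 2, 2: 5},
    "3/6": {1: 3, 2: 6},
}

def resolve_pcie_slot(label: str, connector: int) -> int:
    """Return the PCIe slot number for a riser slot label on a connector."""
    return SLOT_BY_CONNECTOR[label][connector]

print(resolve_pcie_slot("1/4", 2))  # a riser card on connector 2 -> PCIe slot 4
```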

RC-2*FHHL-G3

Item

Specifications

PCIe riser connector

Connector 3

PCIe slots

·     Slot 7: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

·     Slot 8: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 18 PCIe slots on the RC-2*FHHL-G3 riser card

(1) PCIe slot 8

(2) PCIe slot 7

 

RC-2LP-2U-G3-2

Item

Specifications

PCIe riser connector

Connector 3

PCIe slots

·     Slot 7: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 4

·     Slot 8: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths.

SlimSAS connectors

·     Connectors A and B (×8 SlimSAS ports, for processor 4), providing a ×8 PCIe link for slot 7.

NOTE:

·     The number in parentheses represents link widths.

·     For the standard storage controller held by the riser card to manage NVMe drives, connect the SlimSAS connectors to the front NVMe drive backplane.

·     The riser card supports control of up to four NVMe drives.

Form factors of supported PCIe modules

LP

Maximum power supplied per PCIe slot

75 W

 

Figure 19 PCIe slots and SlimSAS connectors on the RC-2LP-2U-G3-2 riser card

(1) PCIe slot 8

(2) PCIe slot 7

(3) SlimSAS connector B

(4) SlimSAS connector A

 

RC-2LP-2U-G3-3

Item

Specifications

PCIe riser connector

Connector 4

PCIe slots

·     Slot 9: PCIe3.0 ×8 (8, 4, 2, 1) for processor 4

·     Slot 10: PCIe3.0 ×8 (8, 4, 2, 1) for processor 4

NOTE:

The numbers in parentheses represent link widths.

SlimSAS connectors

·     Connector A (×8 SlimSAS port, for processor 4), providing a ×8 PCIe link for slot 9.

·     Connector B (×8 SlimSAS port, for processor 4), providing a ×8 PCIe link for slot 10.

NOTE:

·     The number in parentheses represents link widths.

·     For the standard storage controller held by the riser card to manage NVMe drives, connect the SlimSAS connectors to the front NVMe drive backplane.

·     The riser card supports control of up to four NVMe drives.

Form factors of supported PCIe modules

LP

Maximum power supplied per PCIe slot

75 W

 

Figure 20 PCIe slots and SlimSAS connectors on the RC-2LP-2U-G3-3 riser card

(1) SlimSAS connector B

(2) SlimSAS connector A

(3) PCIe slot 10

(4) PCIe slot 9

 

RC-3*FHHL-G3

Item

Specifications

PCIe riser connector

·     Connector 1

·     Connector 2

PCIe slots

·     PCIe riser connector 1:

¡     Slot 1: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 1

¡     Slot 2: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 1

¡     Slot 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 3

·     PCIe riser connector 2:

¡     Slot 4: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2

¡     Slot 5: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2

¡     Slot 6: PCIe3.0 ×8 (8, 4, 2, 1) for processor 4

NOTE:

The numbers in parentheses represent link widths.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 21 PCIe slots on the RC-3*FHHL-G3 riser card

(1) PCIe slot 3/6

(2) PCIe slot 2/5

(3) PCIe slot 1/4

(4) GPU power connector

 

RC-3FHHL-2U-G3-1

Item

Specifications

PCIe riser connector

·     Connector 1

·     Connector 2

PCIe slots

·     PCIe riser connector 1:

¡     Slot 1: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 3

¡     Slot 2: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 1

¡     Slot 3: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 1

·     PCIe riser connector 2:

¡     Slot 4: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 3

¡     Slot 5: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2

¡     Slot 6: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths.

SlimSAS connectors

·     PCIe riser connector 1: Connectors A and B (×8 SlimSAS ports, for processor 3), providing a ×8 PCIe link for slot 1.

·     PCIe riser connector 2: Connectors A and B (×8 SlimSAS ports, for processor 3), providing a ×8 PCIe link for slot 4.

NOTE:

·     The number in parentheses represents link widths.

·     For the standard storage controller held by the riser card to manage NVMe drives, connect the SlimSAS connectors to the front NVMe drive backplane.

·     The riser card supports control of up to four NVMe drives.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 22 PCIe slots and SlimSAS connectors on the RC-3FHHL-2U-G3-1 riser card

(1) PCIe slot 3/6

(2) PCIe slot 2/5

(3) PCIe slot 1/4

(4) SlimSAS connector B

(5) SlimSAS connector A

 

 

RC-4*NVME-3*FHHL-G3

Item

Specifications

PCIe riser connector

·     Connector 1

·     Connector 2

PCIe slots

·     PCIe riser connector 1:

¡     Slot 1: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1

¡     Slot 2: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1

¡     Slot 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 3

·     PCIe riser connector 2:

¡     Slot 4: PCIe3.0 ×8 (8, 4, 2, 1) for processor 3

¡     Slot 5: PCIe3.0 ×8 (8, 4, 2, 1) for processor 3

¡     Slot 6: PCIe3.0 ×8 (8, 4, 2, 1) for processor 4

NOTE:

The numbers in parentheses represent link widths.

SlimSAS connectors

Connectors A1 and A3 (×8 SlimSAS port, for processor 1)

NOTE:

·     The number in parentheses represents link widths.

·     For the standard storage controller held by the riser card to manage NVMe drives, connect the SlimSAS connectors to the front NVMe drive backplane.

·     The riser card supports control of up to four NVMe drives.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 23 PCIe slots and SlimSAS connectors on the RC-4*NVME-3*FHHL-G3 riser card

(1) PCIe slot 3/6

(2) PCIe slot 2/5

(3) PCIe slot 1/4

(4) SlimSAS connector A3

(5) SlimSAS connector A1

 

 

RC-8*NVME-1*FHHL-G3

Item

Specifications

PCIe riser connector

·     Connector 1

·     Connector 2

PCIe slots

·     PCIe riser connector 1:

¡     Slot 1: PCIe3.0 ×8 (8, 4, 2, 1) for processor 3

·     PCIe riser connector 2:

¡     Slot 4: PCIe3.0 ×8 (8, 4, 2, 1) for processor 4

NOTE:

The numbers in parentheses represent link widths. Both slots support single-slot wide GPUs.

SlimSAS connectors

Connectors A1, A3, B1, and B3 (×8 SlimSAS port)

The connectors are managed by processor 1 and processor 2 if the card is installed over PCIe riser connector 1 and connector 2, respectively.

NOTE:

·     The number in parentheses represents link widths.

·     For the standard storage controller held by the riser card to manage NVMe drives, connect the SlimSAS connectors to the front NVMe drive backplane.

·     The riser card supports control of up to eight NVMe drives.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 24 PCIe slots and SlimSAS connectors on the RC-8*NVME-1*FHHL-G3 riser card

(1) SlimSAS connector B3

(2) SlimSAS connector B1

(3) SlimSAS connector A3

(4) SlimSAS connector A1

(5) PCIe slot 1/4

 

 

B/D/F information

Viewing B/D/F information

Table 14 lists the default Bus/Device/Function numbers (B/D/F) when the following conditions are all met:

·     All processors are installed.

·     All PCIe riser connectors are installed with riser cards.

·     All PCIe slots in riser cards are installed with PCIe modules.

·     The sLOM network adapter is installed in slot 11.

B/D/F information in Table 14 might change if any of the above conditions is not met or a PCIe module with a PCIe bridge is installed.

For more information about riser cards, see "Riser cards" and "Replacing riser cards and PCIe modules." For more information about the location of slot 11, see "System board components."

For information about how to obtain B/D/F information, see "Obtaining B/D/F information."

Table 14 PCIe modules and the corresponding Bus/Device/Function numbers

Riser card model

PCIe riser connector

PCIe slot

Processor

Port number

Root port (B/D/F)

End point (B/D/F)

RC-8*NVME-1*FHHL-G3

Connector 1

slot 1

Processor 3

Port 3C

ac:02.00

ad:00.00

Connector 2

slot 4

Processor 4

Port 2A

d8:00.00

d9:00.00

RC-3*FHHL-G3

Connector 1

slot 1

Processor 1

Port 1A

16:00.00

17:00.00

slot 2

Processor 1

Port 3A

32:00.00

33:00.00

slot 3

Processor 3

Port 3C

ac:02.00

ad:00.00

Connector 2

slot 4

Processor 2

Port 1A

44:00.00

45:00.00

slot 5

Processor 2

Port 2A

58:00.00

59:00.00

slot 6

Processor 4

Port 2A

d8:00.00

d9:00.00

RC-4*NVME-3*FHHL-G3

Connector 1

slot 1

Processor 1

Port 3A

32:00.00

33:00.00

slot 2

Processor 1

Port 3C

32:02.00

34:00.00

slot 3

Processor 3

Port 3C

ac:02.00

ad:00.00

Connector 2

slot 4

Processor 2

Port 2A

58:00.00

59:00.00

slot 5

Processor 2

Port 2C

58:02.00

5a:00.00

slot 6

Processor 4

Port 2A

d8:00.00

d9:00.00

RC-3FHHL-2U-G3-1

Connector 1

slot 1

Processor 3

Port 2A

98:00.00

99:00.00

slot 2

Processor 1

Port 3A

32:00.00

33:00.00

slot 3

Processor 1

Port 1A

16:00.00

17:00.00

Connector 2

slot 4

Processor 3

Port 1A

84:00.00

85:00.00

slot 5

Processor 2

Port 2A

58:00.00

59:00.00

slot 6

Processor 2

Port 1A

44:00.00

45:00.00

RC-2*FHHL-G3

Connector 3

slot 7

Processor 2

Port 3C

6c:02.00

6d:00.00

slot 8

Processor 2

Port 3A

6c:00.00

6e:00.00

RC-2LP-2U-G3-2

Connector 3

slot 7

Processor 4

Port 1A

c4:00.00

c5:00.00

slot 8

Processor 2

Port 3A

6c:00.00

6d:00.00

RC-2LP-2U-G3-3

Connector 4

slot 9

Processor 4

Port 3A

ec:00.00

ed:00.00

slot 10

Processor 4

Port 3C

ec:02.00

ef:00.00

N/A

N/A

slot 11 (for sLOM network adapter)

Processor 1

Port 2A

24:00.00

25:00.00

 

 

NOTE:

·     The root port (B/D/F) indicates the bus number of the PCIe root node in the processor.

·     The end point (B/D/F) indicates the bus number of a PCIe module in the operating system.

 

Obtaining B/D/F information

You can obtain B/D/F information by using one of the following methods:

·     BIOS log—Search for the dumpiio keyword in the BIOS log.

·     UEFI shell—Execute the pci command. For information about how to execute the command, execute the help pci command.

·     Operating system—The obtaining method varies by OS.

¡     For Linux, execute the lspci command.

If Linux does not provide the lspci command by default, install the pciutils package, for example, by executing the yum command.

¡     For Windows, install the pciutils package, and then execute the lspci command.

¡     For VMware, execute the lspci command.
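
For example, on a Linux system with the pciutils package installed, the following minimal Python sketch parses lspci output to list B/D/F numbers, which you can cross-check against Table 14. The keyword filter and function name are illustrative assumptions, not part of the product tooling.

```python
import re
import subprocess

# Minimal sketch (assumption: Linux with the pciutils package installed).
# Parses lspci output to list B/D/F numbers and descriptions for devices
# matching a keyword. The default keyword is an example value.
def list_bdf(keyword: str = "Non-Volatile memory") -> list[tuple[str, str]]:
    """Return (B/D/F, description) pairs for devices matching keyword."""
    output = subprocess.run(
        ["lspci"], capture_output=True, text=True, check=True
    ).stdout
    devices = []
    for line in output.splitlines():
        # lspci prints each device as "bb:dd.f description", for example
        # "d9:00.0 Non-Volatile memory controller: ...".
        match = re.match(r"^([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\s+(.+)$", line)
        if match and keyword in match.group(2):
            devices.append((match.group(1), match.group(2)))
    return devices

if __name__ == "__main__":
    for bdf, description in list_bdf():
        print(bdf, description)
```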

Fans

Fan layout

Figure 25 shows the layout of the fans in the chassis.

Figure 25 Fan layout

 

Power supplies

The power supplies have an overtemperature protection mechanism. A power supply stops working when an overtemperature occurs and automatically recovers when the overtemperature condition is removed.

800 W Platinum power supply

Item

Specifications

Model

PSR800-12A

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     10.0 A @ 100 VAC to 240 VAC

·     4.0 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes
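
As a worked example of the efficiency figure above, the following sketch computes the power drawn from the source when this 800 W power supply runs at 50% load with 94% efficiency. The calculation is illustrative only and applies equally to the other power supply models in this section.

```python
# Worked example for the efficiency figure above (illustrative only):
# at 50% load, the 800 W power supply delivers 400 W of output, so at
# 94% efficiency the power drawn from the source is output / efficiency.
rated_output_w = 800
load_fraction = 0.50
efficiency = 0.94

output_w = rated_output_w * load_fraction   # 400 W delivered to the server
input_w = output_w / efficiency             # power drawn from the source
loss_w = input_w - output_w                 # dissipated as heat
print(f"Input: {input_w:.1f} W, loss: {loss_w:.1f} W")  # Input: 425.5 W, loss: 25.5 W
```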

 

800 W 336 V high-voltage power supply

Item

Specifications

Model

PSR800-12AHD

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz

·     180 VDC to 400 VDC (240 to 336 HVDC power source)

Maximum rated input current

·     10.0 A @ 100 VAC to 240 VAC

·     3.8 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

94%

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

850 W Titanium power supply

Item

Specifications

Model

PSR850-12A

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     11.0 A @ 100 VAC to 240 VAC

·     4.0 A @ 240 VDC

Maximum rated output power

850 W

Efficiency at 50 % load

96%, 80 Plus Titanium level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 85%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

1200 W Platinum power supply

Item

Specifications

Model

PSR1200-12A

Rated input voltage range

·     100 VAC to 127 VAC @ 50/60 Hz (1000 W)

·     200 VAC to 240 VAC @ 50/60 Hz (1200 W)

·     192 VDC to 288 VDC (240 HVDC power source) (1200 W)

Maximum rated input current

·     12.0 A @ 100 VAC to 240 VAC

·     6.0 A @ 240 VDC

Maximum rated output power

1200 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

1600 W Platinum power supply PSR1600-12A

Item

Specifications

Model

PSR1600-12A

Rated input voltage range

·     200 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     9.5 A @ 200 VAC to 240 VAC

·     8.0 A @ 240 VDC

Maximum rated output power

1600 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

1600 W Platinum power supply DPS-1600AB-13 R

Item

Specifications

Model

DPS-1600AB-13 R

Rated input voltage range

·     100 VAC to 127 VAC @ 50/60 Hz

·     200 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     13.8 A @ 100 VAC to 127 VAC

·     9.6 A @ 200 VAC to 240 VAC

Maximum rated output power

1600 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 85%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

1600 W 336 V high-voltage DC power supply

Item

Specifications

Model

FSP1600-20FH

Rated input voltage range

·     200 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 400 VDC (240 or 336 HVDC power source)

Maximum rated input current

·     9.5 A @ 200 VAC to 240 VAC

·     6.0 A @ 336 VDC

Maximum rated output power

1600 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 55°C (32°F to 131°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

Expander modules

Model

Specifications

HDDCage-8UniBay

Used in the 8SFF UniBay server.

Cage-8SFF-BAY1

Used in the 8SFF UniBay and 8SFF SAS/SATA servers.

Cage-8SFF-BAY3

Used in the 8SFF SAS/SATA server.

 

Diagnostic panels

Diagnostic panels provide diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the diagnostic panels in conjunction with the event log generated in HDM.

 

 

NOTE:

A diagnostic panel displays only one component failure at a time. When multiple component failures exist, the diagnostic panel displays all these failures one by one at intervals of 4 seconds.

 

Diagnostic panel view

Figure 26 shows the error code and LEDs on a diagnostic panel.

Figure 26 Diagnostic panel view

(1) Error code

(2) LEDs

 

For more information about the LEDs and error codes, see "LEDs."

LEDs

The server is operating correctly when the error code is 00 and all LEDs are off.

POST LED

LED status

Error code

Description

Steady green

Code for the current POST phase (in the range of 00 to 99)

The server is performing POST without detecting any error.

Flashing red (1 Hz)

Code for the current POST phase (in the range of 00 to 99)

The POST process encountered an error and stopped in the displayed phase.

 

TEMP LED

LED status

Error code

Description

Flashing amber (1 Hz)

Temperature sensor ID

A major temperature warning is present on the component monitored by the sensor.

This warning might occur because the temperature of the component has exceeded the upper major threshold or dropped below the lower major threshold.

Flashing red (1 Hz)

Temperature sensor ID

A critical temperature warning is present on the component monitored by the sensor.

This warning might occur because the temperature of the component has exceeded the upper critical threshold or dropped below the lower critical threshold.

 

CAP LED

LED status

Error code

Description

Flashing amber

01

The system power consumption has exceeded the power cap value.

 

Component LEDs

An alarm is present if a component LED has one of the following behaviors:

·     Flashing amber (1 Hz)—A major alarm has occurred.

·     Flashing red (1 Hz)—A critical alarm has occurred.

Use Table 15 to identify the faulty item if a component LED has one of those behaviors. To obtain records of component status changes, use the event log in HDM. For information about using the event log, see HDM online help.

Table 15 LED, error code and faulty item matrix

LED

Error code

Faulty item

BRD

01

System board

02

Processor mezzanine board

03

Drive backplane for front drive cage bay 1

04

Drive backplane for front drive cage bay 2

05

Drive backplane for front drive cage bay 3

11

Rear drive backplane

91

sLOM network adapter

CPU (processor)

01

Processor 1

02

Processor 2

03

Processor 3

04

Processor 4

DIMM

A1 through A9, AA, Ab, or AC

DIMM slots for processor 1:

·     A1 through A9—DIMMs in slots A1 through A9

·     AA—DIMM in slot A10

·     Ab—DIMM in slot A11

·     AC—DIMM in slot A12

b1 through b9, bA, bb, or bC

DIMM slots for processor 2:

·     b1 through b9—DIMMs in slots B1 through B9

·     bA—DIMM in slot B10

·     bb—DIMM in slot B11

·     bC—DIMM in slot B12

C1 through C9, CA, Cb, or CC

DIMM slots for processor 3:

·     C1 through C9—DIMMs in slots A1 through A9

·     CA—DIMM in slot A10

·     Cb—DIMM in slot A11

·     CC—DIMM in slot A12

d1 through d9, dA, db, or dC

DIMM slots for processor 4:

·     d1 through d9—DIMMs in slots B1 through B9

·     dA—DIMM in slot B10

·     db—DIMM in slot B11

·     dC—DIMM in slot B12

HDD

00 through 07

Drives in slots 0 through 7 in bay 1

08 through 15

Drives in slots 0 through 7 in bay 2

16 through 23

Drives in slots 0 through 7 in bay 3

32 and 33

·     32—Drive in slot 8 at the rear

·     33—Drive in slot 9 at the rear

PCIE

01 through 10

PCIe modules in PCIe slots 1 to 10 of a riser card

NOTE:

If a storage controller is installed in a PCIe slot, the RAID LED displays the storage controller status.

PSU

01

Power supply 1

02

Power supply 2

FAN

01 through 06

Fan 1 through Fan 6

VRD

01

P5V_STBY voltage on the system board

02

P3V3_STBY voltage on the system board

03

P1V05_PCH_STBY voltage on the system board

04

PVNN_PCH_STBY voltage on the system board

05

P1V8_PCH_STBY voltage on the system board

06

P5V voltage on the system board

07

P3V3 voltage on the system board

09

Primary power supply of the sLOM network adapter

10

Secondary power supply of the sLOM network adapter

11

Power supply of drive backplane 4 for rear 2SFF drives

12

Power supply of drive backplane 3 for front drive cage bay 1

13

Power supply of drive backplane 2 for front drive cage bay 2

14

Power supply of drive backplane 1 for front drive cage bay 3

21

Power supply of the riser card over riser connector 1

23

Power supply of the riser card over riser connector 2

26

Power supply of the riser card over riser connector 3

28

Power supply of the riser card over riser connector 4

40

Power supply fault summary

42

System board P12V overcurrent

44

Internal VR fault on processor 1 (fivr_fault)

45

Internal VR fault on processor 2 (fivr_fault)

50

Power supply of the processor mezzanine board

52

Processor mezzanine board P12V overcurrent

54

Internal VR fault on processor 3 (fivr_fault)

55

Internal VR fault on processor 4 (fivr_fault)

60

PVCCIO_CPU1 voltage on the system board

61

PVCCIN_CPU1 voltage on the system board

62

PVCCSA_CPU1 voltage on the system board

63

VDDQ_CPU1_ABC voltage on the system board

64

VDDQ_CPU1_DEF voltage on the system board

65

VPP_CPU1_ABC voltage on the system board

66

VPP_CPU1_DEF voltage on the system board

67

VTT_CPU1_ABC voltage on the system board

68

VTT_CPU1_DEF voltage on the system board

69

P1V0_CPU1 voltage on the system board

6A

PVMCP_CPU1 voltage on the system board

70

PVCCIO_CPU2 voltage on the system board

71

PVCCIN_CPU2 voltage on the system board

72

PVCCSA_CPU2 voltage on the system board

73

VDDQ_CPU2_ABC voltage on the system board

74

VDDQ_CPU2_DEF voltage on the system board

75

VPP_CPU2_ABC voltage on the system board

76

VPP_CPU2_DEF voltage on the system board

77

VTT_CPU2_ABC voltage on the system board

78

VTT_CPU2_DEF voltage on the system board

79

P1V0_CPU2 voltage on the system board

7A

PVMCP_CPU2 voltage on the system board

80

PVCCIO_CPU3 voltage on the processor mezzanine board

81

PVCCIN_CPU3 voltage on the processor mezzanine board

82

PVCCSA_CPU3 voltage on the processor mezzanine board

83

VDDQ_CPU3_ABC voltage on the processor mezzanine board

84

VDDQ_CPU3_DEF voltage on the processor mezzanine board

85

VPP_CPU3_ABC voltage on the processor mezzanine board

86

VPP_CPU3_DEF voltage on the processor mezzanine board

87

VTT_CPU3_ABC voltage on the processor mezzanine board

88

VTT_CPU3_DEF voltage on the processor mezzanine board

89

P1V0_CPU3 voltage on the processor mezzanine board

90

PVCCIO_CPU4 voltage on the processor mezzanine board

91

PVCCIN_CPU4 voltage on the processor mezzanine board

92

PVCCSA_CPU4 voltage on the processor mezzanine board

93

VDDQ_CPU4_ABC voltage on the processor mezzanine board

94

VDDQ_CPU4_DEF voltage on the processor mezzanine board

95

VPP_CPU4_ABC voltage on the processor mezzanine board

96

VPP_CPU4_DEF voltage on the processor mezzanine board

97

VTT_CPU4_ABC voltage on the processor mezzanine board

98

VTT_CPU4_DEF voltage on the processor mezzanine board

99

P1V0_CPU4 voltage on the processor mezzanine board
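
As an illustration of how the DIMM error codes under the DIMM LED in Table 15 encode a processor and slot, the following Python sketch decodes a code such as Cb into its faulty DIMM slot. The function and table names are illustrative assumptions, not an H3C tool.

```python
# Illustrative decoder for the DIMM error codes in Table 15. The first
# character selects the processor (A/b/C/d); the second selects the slot,
# where codes A, b, and C stand for slots 10, 11, and 12.
PROCESSOR_BY_PREFIX = {"A": 1, "b": 2, "C": 3, "d": 4}
SLOT_BY_CODE = {**{str(i): i for i in range(1, 10)}, "A": 10, "b": 11, "C": 12}
SLOT_SERIES = {1: "A", 2: "B", 3: "A", 4: "B"}  # slot label series per processor

def decode_dimm_error(code: str) -> str:
    """Translate a two-character DIMM error code into a processor and slot."""
    processor = PROCESSOR_BY_PREFIX[code[0]]
    slot = SLOT_BY_CODE[code[1]]
    return f"Processor {processor}, DIMM slot {SLOT_SERIES[processor]}{slot}"

print(decode_dimm_error("Cb"))  # Processor 3, DIMM slot A11
print(decode_dimm_error("dA"))  # Processor 4, DIMM slot B10
```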

 

Storage options other than HDDs and SSDs

Model

Specifications

DVD-RW-Mobile-USB-A

Removable USB DVDRW drive module

IMPORTANT:

For this module to work correctly, you must connect it to a USB 3.0 connector.

 

NVMe VROC modules

Model

Description

RAID levels

Compatible NVMe drives

NVMe-VROC-Key-S

NVMe VROC module standard edition

0, 1, 10

All NVMe drives

NVMe-VROC-Key-P

NVMe VROC module premium edition

0, 1, 5, 10

All NVMe drives

NVMe-VROC-Key-i

NVMe VROC module Intel edition

0, 1, 5, 10

Intel NVMe drives

 


Appendix C  Hot removal and managed hot removal of NVMe drives

The server supports hot removal and managed hot removal of NVMe drives.

Managed hot removal of NVMe drives enables you to remove NVMe drives safely while the server is operating.

For information about operating systems that support hot removal and managed hot removal of NVMe drives, visit the query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.

Use Table 16 to determine the managed hot removal method depending on the VMD status and the operating system type. For more information about VMD, see the BIOS user guide for the server.

Table 16 Managed hot removal methods

VMD status

Operating system

Managed hot removal method

Auto/Enabled

Windows

Performing a managed hot removal in Windows.

Linux

Performing a managed hot removal in Linux.

Disabled

N/A

Contact H3C Support.

 

Performing a managed hot removal in Windows

Prerequisites

Install Intel® Rapid Storage Technology enterprise (Intel® RSTe).

To obtain Intel® RSTe, use one of the following methods:

·     Go to https://platformsw.intel.com/KitSearch.aspx to download the software.

·     Contact Intel Support.

Procedure

1.     Stop reading data from or writing data to the NVMe drive to be removed.

2.     Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.     Run Intel® RSTe.

4.     Unmount the NVMe drive from the operating system, as shown in Figure 27:

¡     Select the NVMe drive to be removed from the Devices list.

¡     Click Activate LED to turn on the Fault/UID LED on the drive.

¡     Click Remove Disk.

Figure 27 Removing an NVMe drive

 

5.     Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady blue and the drive is removed from the Devices list, remove the drive from the server.

For more information about the removal procedure, see "Replacing an NVMe drive."

Performing a managed hot removal in Linux

Prerequisites

Verify that your operating system is a non-SLES Linux operating system. SLES operating systems do not support hot removal of NVMe drives.

Performing a managed hot removal from the CLI

1.     Stop reading data from or writing data to the NVMe drive to be removed.

2.     Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.     Access the CLI of the server.

4.     Execute the lsblk | grep nvme command to identify the drive letter of the NVMe drive, as shown in Figure 28.

Figure 28 Identifying the drive letter of the NVMe drive to be removed

 

5.     Execute the ledctl locate=/dev/drive_letter command to turn on the Fault/UID LED on the drive. The drive_letter argument represents the drive letter, nvme0n1 for example.

6.     Execute the echo 1 > /sys/block/drive_letter/device/device/remove command to unmount the drive from the operating system. The drive_letter argument represents the drive letter, nvme0n1 for example.

7.     Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady amber, remove the drive from the server.

For more information about the removal procedure, see "Replacing an NVMe drive."
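
For convenience, the CLI steps above can be strung together in a script. The following Python sketch is illustrative only; it assumes a non-SLES Linux system, root privileges, and the ledmon package (which provides the ledctl command), and the drive name is an example.

```python
import subprocess
from pathlib import Path

# Illustrative sketch that strings steps 4 through 6 together. Assumptions:
# a non-SLES Linux system, root privileges, and the ledmon package installed
# (it provides the ledctl command). The drive name below is an example only.
def managed_hot_removal(drive: str) -> None:
    """Prepare an NVMe drive (for example, nvme0n1) for managed hot removal."""
    # Step 4: confirm the drive letter is present in the lsblk output.
    lsblk = subprocess.run(["lsblk"], capture_output=True, text=True, check=True)
    if drive not in lsblk.stdout:
        raise SystemExit(f"Drive {drive} not found in lsblk output")
    # Step 5: turn on the Fault/UID LED so the drive can be located.
    subprocess.run(["ledctl", f"locate=/dev/{drive}"], check=True)
    # Step 6: unmount (remove) the drive from the operating system.
    Path(f"/sys/block/{drive}/device/device/remove").write_text("1")
    print(f"{drive} removed from the OS. Wait for a steady amber Fault/UID "
          "LED, then pull the drive.")

managed_hot_removal("nvme0n1")
```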


Appendix D  Environment requirements

About environment requirements

The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.

Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance. In a real data center, the server cooling performance might decrease because of adverse external factors, including poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.

General environment requirements

Item

Specifications

Operating temperature

Minimum: 5°C (41°F)

Maximum: Varies depending on the power consumed by the processors and presence of expansion modules. For more information, see "Operating temperature requirements."

Storage temperature

–40°C to +70°C (–40°F to +158°F)

Operating humidity

8% to 90%, noncondensing

Storage humidity

5% to 95%, noncondensing

Operating altitude

–60 m to +3000 m (–196.85 ft to +9842.52 ft)

The allowed maximum temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude above 900 m (2952.76 ft). For a worked example, see the sketch after this table.

Storage altitude

–60 m to +5000 m (–196.85 ft to +16404.20 ft)
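
The altitude derating rule in the table above can be expressed as a simple formula. The following Python sketch is a worked example; the base temperature value used in the example is an assumption for illustration.

```python
# Worked example of the altitude derating rule in the table above: over
# 900 m (2952.76 ft), the allowed maximum operating temperature drops by
# 0.33 degrees C for every additional 100 m (328.08 ft) of altitude.
def derated_max_temperature(base_max_c: float, altitude_m: float) -> float:
    """Return the allowed maximum operating temperature at an altitude."""
    if altitude_m <= 900:
        return base_max_c
    return base_max_c - 0.33 * (altitude_m - 900) / 100

# Example with an assumed 35 degC (95 degF) limit at a 3000 m site:
print(f"{derated_max_temperature(35.0, 3000.0):.1f} degC")  # 28.1 degC
```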

 

Operating temperature requirements

General guidelines

The server supports N+1 fan redundancy. If a fan fails, server performance might degrade.

The DPS-1600AB-13 R power supply operates correctly only at operating temperatures of 30°C (86°F) or lower.

8SFF front drive configuration

Table 17 Operating temperature requirements

Maximum server operating temperature

Processor configuration

Hardware option configuration

30°C (86°F)

N/A

To use a GPU-T4 module, install the module to PCIe slot 1, 4, 6, or 7.

35°C (95°F)

N/A

To use a GPU-T4 module, install the module to PCIe slot 6 or 7.

40°C (104°F)

Processors with a TDP of 150 W or lower are supported (excluding processors 6240Y and 6252N).

The following hardware options are not supported:

·     DCPMMs.

·     GPUs.

·     NVMe drives.

45°C (113°F)

Processors with a TDP of 130 W or lower are supported.

The following hardware options are not supported:

·     DCPMMs.

·     NVMe SSD PCIe accelerator modules.

·     NVMe drives.

·     SATA M.2 SSDs.

·     GPU modules.

·     Rear SFF drives.

 

16SFF front drive configuration

Table 18 Operating temperature requirements

Maximum server operating temperature

Processor configuration

Hardware option configuration

30°C (86°F)

N/A

GPU-T4 module is not supported.

35°C (95°F)

When a GPU-P4 module is used, only processors with a TDP of 160 W or lower are supported.

GPU-T4 module is not supported.

40°C (104°F)

N/A

The following hardware options are not supported:

·     DCPMMs.

·     GPU modules.

·     NVMe drives.

45°C (113°F)

N/A

The following hardware options are not supported:

·     NVMe SSD PCIe accelerator modules.

·     NVMe drives.

·     SATA M.2 SSDs.

·     GPU modules.

·     Rear SFF drives.

 

24SFF front drive configuration

Table 19 Operating temperature requirements

Maximum server operating temperature

Processor configuration

Hardware option configuration

30°C (86°F)

When a GPU-P4 module is used, only processors with a TDP of 165 W or lower are supported.

GPU-T4 module is not supported.

35°C (95°F)

N/A

GPU modules and rear NVMe drives are not supported.

40°C (104°F)

Processors with a TDP of 130 W or lower are supported.

The following hardware options are not supported:

·     DCPMMs.

·     NVMe SSD PCIe accelerator modules.

·     GPU modules.

·     NVMe drives.

·     SATA M.2 SSDs.

·     Rear SFF drives.

45°C (113°F)

Processors with a TDP of 105 W or lower are supported.

The following hardware options are not supported:

·     DCPMMs.

·     NVMe SSD PCIe accelerator modules.

·     GPU modules.

·     NVMe drives.

·     SATA M.2 SSDs.

·     Rear SFF drives.

 


Appendix E  Product recycling

New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualifications are contracted by New H3C to process the recycled hardware in an environmentally responsible way.

For product recycling services, contact New H3C at

·     Tel: 400-810-0504

·     E-mail: service@h3c.com

·     Website: http://www.h3c.com


Appendix F  Glossary

Item

Description

B

BIOS

Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's system board. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality.

C

CPLD

Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits.

F

FIST

Fast Intelligent Scalable Toolkit provided by H3C for easy and extensible server management. It guides users through quick server configuration and provides an API that allows users to develop their own management tools.

G

 

GPU

Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance.

H

HDM

Hardware Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server.

Hot swapping

A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting the system operation.

K

KVM

A device that allows remote users to use their local video display, keyboard, and mouse to monitor and control remote servers.

N

Network adapter

A network adapter, also called a network interface card (NIC), connects the server to the network.

NVMe VROC module

A module that works with VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

R

RAID

Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage and security performance.

Redundancy

A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails.

T

Temperature sensors

A temperature sensor detects changes in temperature at the location where it is installed and reports the temperature data to the server system.

U

U

A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks.

V

VMD

VMD provides hot removal, management, and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability.

 


Appendix G  Acronyms

Acronym

Full name

B

BIOS

Basic Input/Output System

C

CMA

Cable Management Arm

CPLD

Complex Programmable Logic Device

D

DCPMM

Data Center Persistent Memory Module

DDR

Double Data Rate

DIMM

Dual Inline Memory Module

DRAM

Dynamic Random Access Memory

F

FIST

Fast Intelligent Scalable Toolkit

G

GPU

Graphics Processing Unit

H

HBA

Host Bus Adapter

HDD

Hard Disk Drive

HDM

Hardware Device Management

I

IDC

Internet Data Center

K

KVM

Keyboard, Video, Mouse

L

LRDIMM

Load Reduced Dual Inline Memory Module

N

NCSI

Network Controller Sideband Interface

NVMe

Non-Volatile Memory Express

P

PCIe

Peripheral Component Interconnect Express

POST

Power-On Self-Test

R

RAID

Redundant Array of Independent Disks

RDIMM

Registered Dual Inline Memory Module

S

SAS

Serial Attached Small Computer System Interface

SATA

Serial ATA

SD

Secure Digital

SDS

Secure Diagnosis System

SFF

Small Form Factor

sLOM

Small form factor Local Area Network on Motherboard

SSD

Solid State Drive

T

TCM

Trusted Cryptography Module

TDP

Thermal Design Power

TPM

Trusted Platform Module

U

UID

Unit Identification

UPI

Ultra Path Interconnect

UPS

Uninterruptible Power Supply

USB

Universal Serial Bus

V

VROC

Virtual RAID on CPU

VMD

Volume Management Device

 

 

 
