H3C UniServer R4300 G3 Server User Guide-6W101

02-Appendix

Contents

Appendix A  Server specifications
Chassis view
Technical specifications
Components
Front panel
Front panel view
LEDs and buttons
Ports
Rear panel
Rear panel view
LEDs
Ports
System board
System board components
System maintenance switches
DIMM slots
Appendix B  Component specifications
About component model names
DIMMs
DRAM DIMM rank classification label
HDDs and SSDs
Drive LEDs
Drive configurations and numbering
PCIe modules
Storage controllers
Riser cards
Riser card guidelines
FHHL-2X16+X8-G3
FHHL-2X8-G3
FHHL-X16-G3
RC-3FHFL-2U-4UG3
RC-3FHFL-2U-4UG3-1
RC-2LP-1U-4UG3
B/D/F information
Viewing B/D/F information
Obtaining B/D/F information
Fan modules
Fan module layout
Power supplies
550 W Platinum power supply
800 W Platinum power supply
800 W 336 V high-voltage power supply
1200 W Platinum power supply
550 W high-efficiency Platinum power supply
800 W –48 VDC power supply
850 W high-efficiency Platinum power supply
1600 W power supply
Storage options other than HDDs and SSDs
NVMe VROC modules
Appendix C  Hot removal and managed hot removal of NVMe drives
Performing a managed hot removal in Windows
Prerequisites
Procedure
Performing a managed hot removal in Linux
Performing a managed hot removal from the CLI
Appendix D  Environment requirements
About environment requirements
General environment requirements
Operating temperature requirements
Appendix E  Product recycling
Appendix F  Glossary
Appendix G  Acronyms

 


Appendix A  Server specifications

The information in this document might differ from your product if it contains custom configuration options or features.

The figures used in this document are for illustration only and might differ from your product.

Chassis view

H3C UniServer R4300 G3 servers are 4U rack servers with two Intel Purley or Jintide-C series processors. The servers feature large storage capacity for service expansion. They are suitable for cloud computing, IDC, and enterprise networks built based on new-generation infrastructure.

The server is available only in the 24LFF model.

Figure 1 Chassis view

 

Technical specifications

Item

Description

Dimensions (H × W × D)

·     Without a security bezel: 174.8 × 447 × 782 mm (6.88 × 17.60 × 30.79 in)

·     With a security bezel: 174.8 × 447 × 804 mm (6.88 × 17.60 × 31.65 in)

Max. weight

56 kg (123.46 lb)

Processors

2 × Intel Purley or Jintide-C series processors

(Up to 3.8 GHz base frequency, maximum 165 W power consumption, and 38.5 MB cache per processor)

Memory

A maximum of 24 DIMMs

Supports a mixture of DCPMMs and DRAM DIMMs

Chipset

Intel C621 Lewisburg chipset

Network connection

·     1 × onboard 1 Gbps HDM dedicated network port

·     2 × onboard 1 Gbps Ethernet ports

·     1 × FLOM network adapter connector

I/O connectors

·     3 × USB 3.0 connectors (one at the server front and two at the server rear)

·     1 × onboard mini-SAS-HD connector (×8 SATA connectors)

·     1 × RJ-45 HDM dedicated port at the server rear

·     1 × VGA connector at the server rear

·     1 × BIOS serial port at the server rear

Expansion slots

10 × PCIe 3.0 modules (eight standard PCIe modules, one mezzanine storage controller, and one FLOM network adapter)

Power supplies

2 × hot-swappable power supplies in redundancy

Server management software

HDM

Standards

CCC, CECP, and SEPA certified

 

Components

Figure 2 R4300 G3 server components

 

Table 1 R4300 G3 server components

Item

Description

(1) Chassis

N/A

(2) Access panel

N/A

(3) Security bezel

N/A

(4) System board

One of the most important parts of a server, on which multiple components are installed, such as processors, memory, and fans. It integrates basic server components, including the BIOS chip, HDM chip, and PCIe connectors.

(5) Processor retaining bracket

Attaches a processor to the heatsink.

(6) Processor

Integrates a memory processing unit and a PCIe controller to provide data processing capabilities for the server.

(7) Processor heatsink

Cools the processor.

(8) Drive

Drive for data storage, which is hot swappable.

(9) Memory

Stores computing data and data exchanged with external storage.

(10) Drive cage

Encloses drives.

(11) M.2 expander module

Expands the server with a maximum of two SATA M.2 SSDs.

(12) Drive backplane

Provides power and data channels for drives.

(13) Chassis air baffle

Provides ventilation aisles for airflows in the chassis.

(14) Standard storage controller

Installed in a PCIe slot to provide RAID capability for the server.

(15) PCIe network adapter

Installed in a PCIe slot for network expansion.

(16) FLOM network adapter

Installed on the FLOM network adapter connector of the system board for network expansion.

(17) Mezzanine storage controller

Installed on the mezzanine storage controller connector of the system board to provide RAID capability for the server.

(18) Supercapacitor

Supplies power to the flash card of the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when a power outage occurs.

(19) Supercapacitor holder

Secures a supercapacitor in the chassis.

(20) Chassis-open alarm module

Generates a chassis open alarm every time the access panel is removed. The alarms can be displayed from the HDM Web interface.

(21) Riser card blank

Installed on an empty riser card connector to ensure good ventilation.

(22) Riser card

Installed in the server to provide additional slots for PCIe modules.

(23) Power supply

Supplies power to the server. It supports hot swapping and 1+1 redundancy.

(24) Fan cage

Used for holding fans.

(25) Fan module

Supports hot swapping and N+1 redundancy.

(26) Chassis ear

Attaches the server to the rack, integrated with LEDs and buttons.

(27) System battery

Supplies power to the system clock.

(28) GPU

Provides computing services, such as image processing and artificial intelligence for the server.

(29) NVMe SSD expander module

Provides PCIe signal relay for NVMe drives by analyzing, reassembling, and retransmitting PCIe signals when the link between the NVMe drives and the system board is too long.

 

Front panel

Front panel view

Figure 3 shows the front panel view of the server.

Figure 3 Front panel view

(1) NVMe drives (optional)

(2) 24LFF SAS/SATA drives

(3) USB 3.0 connector

 

LEDs and buttons

Figure 4 shows the front panel LEDs and buttons. Table 2 describes the status of the front panel LEDs.

Figure 4 Front panel LEDs and buttons

(1) Power on/standby button and system power LED

(2) UID button LED

(3) Health LED

(4) Ethernet port LED

 

Table 2 LEDs and buttons on the front panel

Button/LED

Status

Power on/standby button and system power LED

·     Steady green—The system has started.

·     Flashing green (1 Hz)—The system is starting.

·     Steady amber—The system is in Standby state.

·     Off—No power is present. Possible reasons:

¡     No power source is connected.

¡     No power supplies are present.

¡     The installed power supplies are faulty.

¡     The system power cords are not connected correctly.

UID button LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Activate the UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being upgraded or the system is being managed from HDM.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for a minimum of eight seconds.

·     Off—UID LED is not activated.

Health LED

·     Steady green—The system is operating correctly or a minor alarm has occurred.

·     Flashing green (4 Hz)—HDM is initializing.

·     Flashing amber (1 Hz)—A major alarm has occurred.

·     Flashing red (1 Hz)—A critical alarm has occurred.

If a system alarm is present, log in to HDM to obtain more information about the system running status.

Ethernet port LED

·     Steady green—A link is present on the port.

·     Flashing green (1 Hz)—The port is receiving or sending data.

·     Off—No link is present on the port.

 

Ports

Table 3 Ports on the front panel

Port

Type

Description

USB connector

USB 3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

 

Rear panel

Rear panel view

Figure 5 shows the rear panel view.

Figure 5 Rear panel components

(1) Support for either of the following configurations:

·     PCIe slots 1 through 3 from the top down (slots 1 and 2 for processor 1 and slot 3 for processor 2)

·     2LFF SAS/SATA drives in a 4LFF drive cage

(2) Support for either of the following configurations:

·     PCIe slots 4 through 6 from the top down (processor 2)

·     2LFF SAS/SATA drives in a 2LFF or 4LFF drive cage

(3) PCIe slots 7 and 8 from the top down (processor 2) (optional)

(4) SAS/SATA drives or NVMe drives (optional)

(5) Power supply 2

(6) Power supply 1

(7) BIOS serial port

(8) VGA connector

(9) Ethernet port 2 (1 Gbps)

(10) Ethernet port 1 (1 Gbps)

(11) USB 3.0 connectors

(12) HDM dedicated network port (1 Gbps, RJ-45, default IP address 192.168.1.2/24)

(13) FLOM network adapter in slot 9

(14) Serial label pull tab module

(15) 12LFF SAS/SATA drives

 

LEDs

Figure 6 shows the rear panel LEDs. Table 4 describes the status of the rear panel LEDs.

Figure 6 Rear panel LEDs

(1) UID LED

(2) Link LED of the Ethernet port

(3) Activity LED of the Ethernet port

(4) Power supply 1 LED

(5) Power supply 2 LED

 

Table 4 LEDs on the rear panel

LED

Status

UID LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Enable UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being updated or the system is being managed by HDM.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for a minimum of eight seconds.

·     Off—UID LED is not activated.

Link LED of the Ethernet port

·     Steady green—A link is present on the port.

·     Off—No link is present on the port.

Activity LED of the Ethernet port

·     Flashing green (1 Hz)—The port is receiving or sending data.

·     Off—The port is not receiving or sending data.

Power supply LED

·     Steady green—The power supply is operating correctly.

·     Flashing green (1 Hz)—Power is being input correctly but the system is not powered on.

·     Flashing green (0.33 Hz)—The power supply is in standby state and does not output power.

·     Steady amber—Either of the following conditions exists:

¡     The power supply is faulty.

¡     The power supply does not have power input, but the other power supply has correct power input.

·     Flashing amber (1 Hz)—An alarm has occurred on the power supply.

·     Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown.

 

Ports

Table 5 Ports on the rear panel

Port

Type

Description

HDM dedicated network port

RJ-45

Establishes a network connection to manage the server from the HDM Web interface.

Ethernet ports 1 and 2

RJ-45

Establishes a network connection for interaction with external devices.

USB connector

USB 3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

BIOS serial port

RJ-45

The BIOS serial port is used for the following purposes:

·     Logging in to the server when the remote network connection to the server has failed.

·     Establishing a GSM modem or encryption lock connection.

Power receptacle

Standard single-phase

Connects the power supply to the power source.

 

System board

System board components

Figure 7 System board components

(1) TPM/TCM connector

(2) System battery

(3) PCIe riser connector 1 (processors 1 and 2)

(4) FLOM network adapter connector (slot 9)

(5) Mezzanine storage controller connector (slot 10)

(6) Mini-SAS-HD port (×8 SATA ports)

(7) Front I/O connector

(8) NVMe VROC module connector

(9) Rear drive backplane power connector 1

(10) Rear drive backplane AUX connector 1

(11) Front drive backplane AUX connector 3

(12) Chassis-open alarm module connector

(13) Front drive backplane power connector 3

(14) Rear drive backplane power connector 2

(15) Rear drive backplane power connector 4

(16) Rear drive backplane AUX connector 2

(17) SlimSAS connector 1 (×8 PCIe3.0, for processor 2)

(18) SlimSAS connector 2 (×8 PCIe3.0, for processor 2)

(19) SlimSAS connector 3 (×8 PCIe3.0, for processor 2)

(20) SlimSAS connector 4 (×8 PCIe3.0, for processor 2)

(21) M.2 expander module connector

(22) PCIe riser connector 2 (processor 2)

(X1) System maintenance switch 1

(X2) System maintenance switch 2

(X3) System maintenance switch 3

 

System maintenance switches

Use the system maintenance switches if you forget the HDM username, HDM password, or BIOS password, or need to restore the default BIOS settings, as described in Table 6. To identify the location of the switches on the system board, see Figure 7.

Table 6 System maintenance switches

Item

Description

Remarks

System maintenance switch 1

·     Pins 1-2 jumped (default)—HDM login requires the username and password of a valid HDM user account.

·     Pins 2-3 jumped—HDM login requires the default username and password.

As a best practice for security purposes, jump pins 1 and 2 after you complete tasks with the default username and password.

System maintenance switch 2

·     Pins 1-2 jumped (default)—Normal server startup.

·     Pins 2-3 jumped—Restores the default BIOS settings.

To restore the default BIOS settings, jump pins 2 and 3 for over 30 seconds and then jump pins 1 and 2 for normal server startup.

System maintenance switch 3

·     Pins 1-2 jumped (default)—Normal server startup.

·     Pins 2-3 jumped—Clears all passwords from the BIOS at server startup.

To clear all passwords from the BIOS, jump pins 2 and 3 and then start the server. After the passwords are cleared, jump pins 1 and 2 before the next server startup to resume normal startup.

 

DIMM slots

The server provides 6 DIMM channels per processor, 12 channels in total. Each channel contains one white-coded slot and one black-coded slot, as shown in Table 7.

Table 7 DIMM slot numbering and color-coding scheme

Processor

DIMM slots

Processor 1

A1 through A6 (white coded)

A7 through A12 (black coded)

Processor 2

B1 through B6 (white coded)

B7 through B12 (black coded)

 

Figure 8 shows the physical layout of the DIMM slots on the system board. For more information about the DIMM slot population rules, see the guidelines in "Installing DIMMs."

Figure 8 DIMM physical layout

 


Appendix B  Component specifications

For components compatible with the server and detailed component information, visit the query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.

About component model names

The model name of a hardware option in this document might differ slightly from its model name label.

A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR4-2666-8G-1Rx8-R memory model represents memory module labels including UN-DDR4-2666-8G-1Rx8-R, UN-DDR4-2666-8G-1Rx8-R-F, and UN-DDR4-2666-8G-1Rx8-R-S, which have different suffixes.

DIMMs

DRAM DIMM rank classification label

A DIMM rank is a set of memory chips that the system accesses while writing data to or reading data from the memory. On a multi-rank DIMM, only one rank is accessible at a time.

To determine the rank classification of a DRAM DIMM, use the label attached to the DIMM, as shown in Figure 9.

Figure 9 DRAM DIMM rank classification label

 

Table 8 DIMM rank classification label description

Callout

Description

Remarks

1

Capacity

·     8GB.

·     16GB.

·     32GB.

2

Number of ranks

·     1R—One rank.

·     2R—Two ranks.

·     4R—Four ranks.

·     8R—Eight ranks.

3

Data width

·     ×4—4 bits.

·     ×8—8 bits.

4

DIMM generation

Only DDR4 is supported.

5

Data rate

·     2133P—2133 MHz.

·     2400T—2400 MHz.

·     2666V—2666 MHz.

·     2933Y—2933 MHz.

6

DIMM type

·     L—LRDIMM.

·     R—RDIMM.

 

HDDs and SSDs

Drive LEDs

The server supports SAS, SATA, and NVMe drives. SAS and SATA drives support hot swapping. NVMe drives support hot insertion and managed hot removal when the VMD status is set to Auto or Enabled. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.

Figure 10 shows the location of the LEDs on a drive.

Figure 10 Drive LEDs

(1) Fault/UID LED

(2) Present/Active LED

 

To identify the status of a SAS or SATA drive, use Table 9. To identify the status of an NVMe drive, use Table 10.

Table 9 SAS/SATA drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Steady green/Flashing green (4.0 Hz)

A drive failure is predicted. Replace the drive immediately.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and is selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed or a drive failure has occurred.

 

Table 10 NVMe drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Off

The managed hot removal process is completed. You can remove the drive safely.

Flashing amber (4.0 Hz)

Off

The drive is in hot insertion process.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Drive configurations and numbering

The server supports 24 LFF drives at the server front, and you can install LFF and SFF drives at the server rear for drive expansion. Table 11 presents the drive configurations available for the server and their compatible types of storage controllers.

Table 11 Drive and storage controller configurations

Drive configuration

Storage controller

24 front LFF drives + 2 or 4 rear LFF drives + 2 or 4 rear SFF drives

Mezzanine storage controller

Standard storage controller in PCIe slot 1

2 × standard storage controllers (Installed in PCIe slots of riser card 1 or riser card 2 as a best practice)

24 front LFF drives + 12 rear LFF drives + 2 or 4 rear SFF drives + 2 or 4 rear LFF drives

Mezzanine storage controller

Standard storage controller in PCIe slot 1

Mezzanine storage controller + 1 × standard storage controller (Installed in PCIe slot 1 as a best practice)

2 × standard storage controllers (Installed in PCIe slots of riser card 1 or riser card 2 as a best practice)

 

 

NOTE:

Mezzanine and standard storage controllers support both SAS/SATA drives and NVMe drives.

 

Figure 11 shows the drive population and drive numbering scheme on the server.

Figure 11 Drive numbering for server drive configuration (front view and rear view)

 

 

NOTE:

·     To install drives in drive slots 1, 2, 4, and 5 at the server rear, a 4LFF drive cage is required.

·     To install drives in drive slots 4 and 5 at the server rear, a 2LFF drive cage is required.

·     To install drives in drive slots 7, 8, 9, and 10 at the server rear, a 4SFF drive cage is required.

·     To install drives in drive slots 9 and 10 at the server rear, a 2SFF drive cage is required.

 

PCIe modules

Typically, the PCIe modules are available in the following standard form factors:

·     LP—Low profile.

·     FHHL—Full height and half length.

·     FHFL—Full height and full length.

·     HHHL—Half height and half length.

·     HHFL—Half height and full length.

Some PCIe modules, such as mezzanine storage controllers, are in non-standard form factors.

Storage controllers

The server supports the following types of storage controllers depending on their form factors:

·     Embedded RAID controller—Embedded in the server and does not require installation.

·     Mezzanine storage controller—Installed on the mezzanine storage controller connector of the system board and does not require a riser card for installation. For the location of the mezzanine storage controller connector, see "System board components."

·     Standard storage controller—Comes in a standard PCIe form factor and typically requires a riser card for installation.

Embedded RSTe RAID controller

Item

Specifications

Type

Embedded in PCH of the system board

Connectors

One onboard ×8 mini-SAS-HD connector

Number of internal ports

8 internal SATA ports

Drive interface

6 Gbps SATA 3.0

PCIe interface

PCIe3.0 ×4

RAID levels

0, 1, 5, 10

Built-in cache memory

N/A

Power fail safeguard module

Not supported

Firmware upgrade

Upgraded with the BIOS

 

Mezzanine and standard storage controllers

For more information, visit the query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.

Riser cards

To expand the server with PCIe modules, you can install riser cards on the PCIe riser connectors.

Riser card guidelines

Each PCIe slot in a riser card can supply a maximum of 75 W of power to the PCIe module. If a PCIe module requires more than 75 W of power, you must connect a separate power cord to the module.

If a processor is faulty or absent, the corresponding PCIe slots are unavailable.

FHHL-2X16+X8-G3

Item

Specifications

PCIe riser connector

Connector 1

PCIe slots

·     Slot 1: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 1

·     Slot 2: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 1

·     Slot 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths. Both slots 1 and 2 support GPUs.

SlimSAS connectors

1 × SlimSAS connector (×8 PCIe3.0)

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 12 PCIe slots and SlimSAS connectors on the FHHL-2X16+X8-G3 riser card

(1) PCIe slot 3

(2) PCIe slot 2

(3) SlimSAS connector (×8 PCIe3.0 port)

(4) PCIe slot 1

 

FHHL-2X8-G3

Item

Specifications

PCIe riser connector

Connector 2

PCIe slots

·     Slot 4: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

·     Slot 5: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 13 PCIe slots on the FHHL-2X8-G3 riser card

(1) PCIe slot 5

(2) PCIe slot 4

 

FHHL-X16-G3

Item

Specifications

PCIe riser connector

Connector 2

PCIe slots

Slot 4: PCIe3.0 ×16 (16, 8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths. The slot supports a GPU.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 14 PCIe slots on the FHHL-X16-G3 riser card

(1) PCIe slot 4

 

RC-3FHFL-2U-4UG3

Item

Specifications

PCIe riser connector

Connector 1

PCIe slots

·     Slot 1: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1

·     Slot 2: PCIe3.0 ×8 (8, 4, 2, 1) for processor 1

·     Slot 3: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths. All slots support GPUs.

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

Figure 15 PCIe slots on the RC-3FHFL-2U-4UG3 riser card

(1) PCIe slot 3

(2) PCIe slot 2

(3) PCIe slot 1

 

RC-3FHFL-2U-4UG3-1

Item

Specifications

PCIe riser connector

Connector 2

PCIe slots

·     Slot 4: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

·     Slot 5: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

·     Slot 6: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths. All slots support GPUs.

SlimSAS connectors

1 × SlimSAS connector (×8 PCIe3.0 port)

Form factors of supported PCIe modules

FHHL

Maximum power supplied per PCIe slot

75 W

 

 

Figure 16 PCIe slots and SlimSAS connectors on the RC-3FHFL-2U-4UG3-1 riser card

(1) PCIe slot 6

(2) PCIe slot 5

(3) SlimSAS connector (×8 PCIe3.0 port)

(4) PCIe slot 4

 

RC-2LP-1U-4UG3

Item

Specifications

PCIe riser connector

Connector 3

PCIe slots

·     Slot 7: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

·     Slot 8: PCIe3.0 ×8 (8, 4, 2, 1) for processor 2

NOTE:

The numbers in parentheses represent link widths. Both slots support GPUs.

SlimSAS connectors

·     SlimSAS connector 1 (×8 PCIe3.0 port)

·     SlimSAS connector 2 (×8 PCIe3.0 port)

Form factors of supported PCIe modules

LP

Maximum power supplied per PCIe slot

75 W

 

Figure 17 PCIe slots and SlimSAS connectors on the RC-2LP-1U-4UG3 riser card

(1) PCIe slot 7

(2) PCIe slot 8

(3) SlimSAS connector 1 (×8 PCIe3.0 port)

(4) SlimSAS connector 2 (×8 PCIe3.0 port)

 

B/D/F information

Viewing B/D/F information

Table 12 lists the default Bus/Device/Function numbers (B/D/F) when the following conditions are all met:

·     All processors are installed.

·     All PCIe riser connectors are installed with riser cards.

·     All PCIe slots in riser cards are installed with PCIe modules.

·     The FLOM network adapter is installed in slot 9.

·     The mezzanine storage controller is installed in slot 10.

The B/D/F information in Table 12 might change if any of the above conditions is not met or if a PCIe module with a PCIe bridge (for example, an NVMe SSD expander module) is installed.

For more information about riser cards, see "Riser cards" and "Replacing riser cards and PCIe modules." For more information about the locations of slots 9 and 10, see "System board components."

For information about how to obtain B/D/F information, see "Obtaining B/D/F information."

Table 12 PCIe modules and the corresponding Bus/Device/Function numbers

PCIe riser connector

Riser card model

PCIe slot

Processor

Port number

Root port (B/D/F)

Endpoint (B/D/F)

Connector 1

FHHL-2X16+X8-G3

Slot 1

Processor 1

Port 2A

3a:00.00

3b:00.00

Slot 2

Processor 1

Port 1A

17:00.00

18:00.00

Slot 3

Processor 2

Port 1A

85:00.00

86:00.00

RC-3FHFL-2U-4UG3

Slot 1

Processor 2

Port 2A

3a:00.00

3b:00.00

Slot 2

Processor 2

Port 1A

17:00.00

18:00.00

Slot 3

Processor 2

Port 1C

17:02.00

19:00.00

Connector 2

RC-3FHFL-2U-4UG3-1

Slot 4

Processor 2

Port 2A

ae:00.00

af:00.00

Slot 5

Processor 2

Port 2C

ae:02.00

b0:00.00

Slot 6

Processor 2

Port 1C

85:02.00

87:00.00

FHHL-2X8-G3

Slot 4

Processor 2

Port 2A

ae:00.00

af:00.00

Slot 5

Processor 2

Port 2C

ae:02.00

b0:00.00

FHHL-X16-G3

Slot 4

Processor 2

Port 2A

ae:00.00

af:00.00

Connector 3

RC-2LP-1U-4UG3

Slot 7

Processor 2

Port 1A

85:00.00

86:00.00

Slot 8

Processor 2

Port 3A

d7:00.00

d8:00.00

N/A

N/A

Slot 9 (for the FLOM network adapter)

Processor 1

Port 3C

5d:02.00

3d:00.00

N/A

N/A

Slot 10 (for mezzanine storage controller)

Processor 1

Port 3A

5d:00.00

5e:00.00

 

 

NOTE:

·     The root port B/D/F identifies the PCIe root port in the processor to which the PCIe module connects.

·     The endpoint B/D/F identifies the PCIe module itself as enumerated by the operating system.

 

Obtaining B/D/F information

You can obtain B/D/F information by using one of the following methods:

·     BIOS log—Search for the dumpiio keyword in the BIOS log.

·     UEFI shell—Execute the pci command. For information about how to execute the command, execute the help pci command.

·     Operating system—The obtaining method varies by OS.

¡     For Linux, execute the lspci command, as shown in the example after this list.

If the lspci command is not available by default, install the pciutils package (for example, by using yum).

¡     For Windows, install the pciutils package, and then execute the lspci command.

¡     For VMware, execute the lspci command.
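
For example, the following minimal Linux sketch maps an endpoint B/D/F from Table 12 to its PCIe module. The bus number 18 is illustrative only; replace it with the endpoint bus number of the module you want to query.

# List PCIe devices on the endpoint bus to confirm the B/D/F of the module
lspci | grep ^18:
# Display detailed information for that endpoint, including the link width
lspci -vvv -s 18:00.0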

Fan modules

Fan module layout

The server supports a maximum of four hot swappable fan modules. Figure 18 shows the layout of the fan modules in the chassis.

Figure 18 Fan module layout

 

Power supplies

The power supplies have an overtemperature protection mechanism. A power supply stops working when an overtemperature occurs and automatically recovers when the overtemperature condition is removed.

550 W Platinum power supply

Item

Specifications

Model

PSR550-12A

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     8.0 A @ 100 VAC to 240 VAC

·     2.75 A @ 240 VDC

Maximum rated output power

550 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

800 W Platinum power supply

Item

Specifications

Model

PSR800-12A

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     10.0 A @ 100 VAC to 240 VAC

·     4.0 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

800 W 336 V high-voltage power supply

Item

Specifications

Model

PSR800-12AHD

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz

·     180 VDC to 400 VDC (240 to 336 HVDC power source)

Maximum rated input current

·     10.0 A @ 100 VAC to 240 VAC

·     3.8 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

94%

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

1200 W Platinum power supply

Item

Specifications

Model

PSR1200-12A

Rated input voltage range

·     100 VAC to 127 VAC @ 50/60 Hz (1000 W)

·     200 VAC to 240 VAC @ 50/60 Hz (1200 W)

·     192 VDC to 288 VDC (240 HVDC power source) (1200 W)

Maximum rated input current

·     12.0 A @ 100 VAC to 240 VAC

·     6.0 A @ 240 VDC

Maximum rated output power

1200 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

550 W high-efficiency Platinum power supply

Item

Specifications

Model

DPS-550W-12A

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     7.1 A @ 100 VAC to 240 VAC

·     2.8 A @ 240 VDC

Maximum rated output power

550 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 55°C (32°F to 131°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

800 W –48 VDC power supply

Item

Specifications

Model

DPS-800W-12A-48V

Rated input voltage range

–48 VDC to –60 VDC

Maximum rated input current

20.0 A @ –48 VDC to –60 VDC

Maximum rated output power

800 W

Efficiency at 50 % load

92%

Temperature requirements

·     Operating temperature: 0°C to 55°C (32°F to 131°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

850 W high-efficiency Platinum power supply

Item

Specifications

Model

DPS-850W-12A

Rated input voltage range

·     100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     10.0 A @ 100 VAC to 240 VAC

·     4.4 A @ 240 VDC

Maximum rated output power

850 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 55°C (32°F to 131°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 85%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

1600 W power supply

Item

Specifications

Model

PSR1600-12A

Rated input voltage range

·     200 VAC to 240 VAC @ 50/60 Hz

·     192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·     9.5 A @ 200 VAC to 240 VAC

·     8.0 A @ 240 VDC

Maximum rated output power

1600 W

Efficiency at 50 % load

94%, 80 Plus Platinum level

Temperature requirements

·     Operating temperature: 0°C to 50°C (32°F to 122°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

Storage options other than HDDs and SSDs

Model

Specifications

DVD-RW-Mobile-USB-A

Removable USB DVDRW drive module

IMPORTANT:

For this module to work correctly, you must connect it to a USB 3.0 connector.

 

NVMe VROC modules

Model

Description

RAID levels

Compatible NVMe drives

NVMe-VROC-Key-S

NVMe VROC module standard edition

0, 1, 10

All NVMe drives

NVMe-VROC-Key-P

NVMe VROC module premium edition

0, 1, 5, 10

All NVMe drives

NVMe-VROC-Key-i

NVMe VROC module Intel edition

0, 1, 5, 10

Intel NVMe drives

 


Appendix C  Hot removal and managed hot removal of NVMe drives

The server supports hot removal and managed hot removal of NVMe drives.

Managed hot removal of NVMe drives enables you to remove NVMe drives safely while the server is operating.

Use Table 13 to determine the managed hot removal method depending on the VMD status and the operating system. By default, the VMD status is Auto. For more information about VMD, see the BIOS user guide for the server.

Table 13 Managed hot removal methods

VMD status

Operating system

Managed hot removal method

Auto/Enabled

Windows

Performing a managed hot removal in Windows.

Linux

Performing a managed hot removal in Linux.

Disabled

N/A

Contact technical support.

 

Performing a managed hot removal in Windows

Prerequisites

Install Intel® Rapid Storage Technology enterprise (Intel® RSTe).

To obtain Intel® RSTe, use one of the following methods:

·     Go to https://platformsw.intel.com/KitSearch.aspx to download the software.

·     Contact Intel Support.

Procedure

1.     Stop reading data from or writing data to the NVMe drive to be removed.

2.     Identify the location of the NVMe drive. For more information, see "Drive LEDs."

3.     Run Intel® RSTe.

4.     Unmount the NVMe drive from the operating system, as shown in Figure 19:

¡     Select the NVMe drive to be removed from the Devices list.

¡     Click Activate LED to turn on the Fault/UID LED on the drive.

¡     Click Remove Disk.

Figure 19 Removing an NVMe drive

 

5.     Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady blue and the drive is removed from the Devices list, remove the drive from the server.

Performing a managed hot removal in Linux

In Linux, you can perform a managed hot removal of NVMe drives from the CLI.

Performing a managed hot removal from the CLI

1.     Stop reading data from or writing data to the NVMe drive to be removed.

2.     Identify the location of the NVMe drive. For more information, see "Drive LEDs."

3.     Access the CLI of the server.

4.     Execute the lsblk | grep nvme command to identify the drive letter of the NVMe drive, as shown in Figure 20.

Figure 20 Identifying the drive letter of the NVMe drive to be removed

 

5.     Execute the ledctl locate=/dev/drive_letter command to turn on the Fault/UID LED on the drive. The drive_letter argument represents the drive letter, nvme0n1 for example.

6.     Execute the echo 1 > /sys/block/drive_letter/device/device/remove command to unmount the drive from the operating system. The drive_letter argument represents the drive letter, nvme0n1 for example. For a consolidated sketch of steps 4 through 6, see the example after this procedure.

7.     Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady amber, remove the drive from the server.
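
The following minimal sketch consolidates steps 4 through 6, assuming the drive letter identified in step 4 is nvme0n1:

lsblk | grep nvme                                   # identify the drive letter of the NVMe drive
ledctl locate=/dev/nvme0n1                          # turn on the Fault/UID LED on the drive
echo 1 > /sys/block/nvme0n1/device/device/remove    # unmount the drive from the operating system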


Appendix D  Environment requirements

About environment requirements

The operating temperature requirements for the server vary depending on the hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.

The actual maximum operating temperature of the server might be lower than the stated value because of poor site cooling performance. In a real data center, server cooling performance might decrease because of adverse external factors, such as poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.

General environment requirements

Item

Specifications

Operating temperature

Minimum: 5°C (41°F)

Maximum: Varies depending on the power consumed by the processors and presence of expansion modules. For more information, see "Operating temperature requirements."

Storage temperature

–30°C to +60°C (–22°F to +140°F)

Operating humidity

·     Without drives installed at the server rear: 10% to 90%, noncondensing

·     With drives installed at the server rear: 8% to 90%, noncondensing

Storage humidity

5% to 95%, noncondensing

 

Operating temperature requirements

Use Table 14 to determine the maximum operating temperature of the server with different configurations.

If a single fan fails, the maximum server operating temperature drops by 5°C (9°F) and the GPU performance might degrade.

 

 

NOTE:

All maximum server operating temperature values are provided on the basis that all the fans are installed and operating correctly. For more information about fan configurations, see the guidelines in "Replacing a fan."

 

Table 14 Operating temperature requirements

Maximum server operating temperature

Hardware option configuration

30°C (86°F)

A GPU installed in slot 3 or 6.

35°C (95°F)

·     Front NVMe drives.

·     Rear drives.

·     GPUs installed in slots other than slot 3 and slot 6.

40°C (104°F)

None of the above hardware option configurations exists.

 

Appendix E  Product recycling

New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Qualified product recycling vendors are contracted by New H3C to process the recycled hardware in an environmentally responsible way.

For product recycling services, contact New H3C at

·     Tel: 400-810-0504

·     E-mail: service@h3c.com

·     Website: http://www.h3c.com


Appendix F  Glossary

Item

Description

B

BIOS

Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's system board. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality.

C

CPLD

Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits.

G

 

GPU

Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance.

H

HDM

Hardware Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server.

Hot swapping

A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting the system operation.

K

KVM

A device that allows remote users to use their local video display, keyboard, and mouse to monitor and control remote servers.

N

Network adapter

A network adapter, also called a network interface card (NIC), connects the server to the network.

NVMe VROC module

A module that works with VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

R

RAID

Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage performance and data security.

Redundancy

A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails.

S

Security bezel

A locking bezel mounted to the front of a server to prevent unauthorized access to modules such as hard drives.

U

U

A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks.

V

VMD

VMD provides hot removal, management, and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability.

 


Appendix G  Acronyms

Acronym

Full name

B

BIOS

Basic Input/Output System

C

CMA

Cable Management Arm

CPLD

Complex Programmable Logic Device

D

DCPMM

Data Center Persistent Memory Module

DDR

Double Data Rate

DIMM

Dual Inline Memory Module

DRAM

Dynamic Random Access Memory

F

FLOM

Flexible Local Area Network on Motherboard

G

GPU

Graphics Processing Unit

H

HBA

Host Bus Adapter

HDD

Hard Disk Drive

HDM

Hardware Device Management

I

IDC

Internet Data Center

K

KVM

Keyboard, Video, Mouse

L

LFF

Large Form Factor

LRDIMM

Load Reduced Dual Inline Memory Module

N

NCSI

Network Controller Sideband Interface

NVMe

Non-Volatile Memory Express

P

PCIe

Peripheral Component Interconnect Express

PDU

Power Distribution Unit

POST

Power-On Self-Test

R

RAID

Redundant Array of Independent Disks

RDIMM

Registered Dual Inline Memory Module

S

SAS

Serial Attached Small Computer System Interface

SATA

Serial ATA

SD

Secure Digital

SDS

Secure Diagnosis System

SFF

Small Form Factor

SSD

Solid State Drive

T

TCM

Trusted Cryptography Module

TPM

Trusted Platform Module

U

UID

Unit Identification

UPI

Ultra Path Interconnect

UPS

Uninterruptible Power Supply

USB

Universal Serial Bus

V

VROC

Virtual RAID on CPU

VMD

Volume Management Device

 
