H3C UniServer R3950 G6 Server User Guide-5W100


Contents

Appendix A  Server specifications
Server models and chassis view
Technical specifications
Components
Front panel
Front panel view of the server
LEDs and buttons
Ports
Rear panel
Rear panel view
LEDs
Ports
System board
System board components
System maintenance switch
DIMM slots
Appendix B  Component specifications
About component model names
DIMMs
DRAM DIMM rank classification label
HDDs and SSDs
Drive numbering
Drive LEDs
Drive backplanes
Front 8SFF SAS/SATA drive backplane
Front 8SFF UniBay drive backplane
Front 8LFF SAS/SATA drive backplane
Front 12LFF SAS/SATA drive backplane
Front 8SAS/SATA+4UniBay drive backplane
Front 4SAS/SATA+8UniBay drive backplane
Front 12LFF UniBay drive backplane
Front 17SAS/SATA+8UniBay drive backplane
Rear 2LFF SAS/SATA drive backplane
Rear 4LFF SAS/SATA drive backplane
Rear 2SFF SAS/SATA drive backplane
Rear 2SFF UniBay drive backplane
Rear 4SFF SAS/SATA drive backplane
Rear 4SFF UniBay drive backplane
Riser cards
RC-3FHFL-2U-G6
RC-3FHHL-2U-G6
RC-1FHHL-2U-G6
RC-2FHHL-2U-G6
Riser 4 assembly module (accommodating two FHFL PCIe modules)
Riser 3 assembly module (accommodating two HHHL PCIe modules)
Fan modules
SATA M.2 expander module
M.2 SSD storage controller
Server management module
Serial & DSD module
B/D/F information
Appendix C  Managed removal of OCP network adapters
Before you begin
Performing a hot removal
Appendix D  Environment requirements
About environment requirements
General environment requirements
Appendix E  Product recycling
Appendix F  Glossary
Appendix G  Acronyms

 


Appendix A  Server specifications

The information in this document might differ from your product if it contains custom configuration options or features.

Figures in this document are for illustration only.

Server models and chassis view

The H3C UniServer R3950 G6 is a 2U rack server developed by New H3C based on AMD EPYC 9004 series processors. The server supports one processor and features low power consumption, high reliability, strong scalability, and easy management and deployment. It is suitable for virtualization, distributed storage, and data analysis.

Figure 1 Chassis view

 

The server comes in the models listed in Table 1. These models support different drive configurations.

Table 1 R3950 G6 server models

Model    Maximum drive configuration
LFF      12LFF drives at the front + 2LFF+4SFF or 4LFF+2SFF drives at the rear
SFF      25SFF drives at the front + 2LFF+4SFF or 4LFF+2SFF drives at the rear
 

Technical specifications

Physical parameters

Dimensions (H × W × D):

·     Without a security bezel: 87.5 × 445.4 × 780 mm (3.44 × 17.54 × 30.71 in)

·     With a security bezel: 87.5 × 445.4 × 808 mm (3.44 × 17.54 × 31.81 in)

Max. weight: 34 kg (74.96 lb)

Power consumption: Varies by configuration. For more information, visit H3C Server Power Consumption Evaluation.

Environmental specifications

Temperature:

·     Operating temperature: 5°C to 45°C (41°F to 113°F)

·     Storage temperature: –40°C to +70°C (–40°F to +158°F)

NOTE:

The maximum operating temperature requirement for the server might be lower than that stated, depending on the hardware configuration. For more information, see the operating temperature specifications in appendix A.

Humidity:

·     Operating humidity: 8% to 90% (non-condensing)

·     Storage humidity: 5% to 95% (non-condensing)

Altitude:

·     Operating altitude: –60 m to +3000 m (–196.85 ft to +9842.52 ft). Above 900 m (2952.76 ft), the allowed maximum temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude.

·     Storage altitude: –60 m to +5000 m (–196.85 ft to +16404.20 ft)
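
For example, at the maximum operating altitude of 3000 m (9842.52 ft), the allowed maximum operating temperature decreases by (3000 – 900) / 100 × 0.33°C ≈ 6.9°C (12.5°F), from 45°C (113°F) to approximately 38.1°C (100.5°F).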

 

Components

Figure 2 R3950 G6 server components

 

Item

Description

(1) Chassis access panel

N/A

(2) Processor heatsink

Cools the processor.

(3) OCP network adapter

Network adapter installed onto the OCP network adapter connector on the system board.

(4) Processor

Integrates memory and PCIe controllers to provide data processing capabilities for the server.

(5) Storage controller

Provides RAID capability to SAS/SATA drives, including RAID configuration and RAID scale-up. It supports online upgrade of the controller firmware and remote configuration.

(6) Standard PCIe network adapter

Installed in a standard PCIe slot to provide network ports.

(7) Riser card

Provides PCIe slots.

(8) Memory

Stores computing data and data exchanged with external storage temporarily.

The server supports DDR5 DIMMs.

(9) Processor socket cover

Installed over an empty processor socket to protect pins in the socket.

(10) Server management module

Provides I/O connectors and HDM out-of-band management features.

(11) System board

One of the most important parts of the server, on which multiple components such as the processor, memory, and fans are installed. It integrates basic server components, including the BIOS chip and PCIe connectors.

(12) Rear drive backplane

Provides power and data channels for drives at the server rear.

(13) Rear drive cage

Installed at the server rear to accommodate drives.

(14) Riser card blank

Installed on an empty PCIe riser connector to ensure good ventilation.

(15) Power supply

Supplies power to the server. The power supplies support hot swapping and 1+1 redundancy.

(16) Chassis

N/A

(17) Chassis ears

Attach the server to the rack. The right ear is integrated with the front I/O component. The left ear is integrated with a VGA connector, an HDM dedicated management connector (Type-C), and a USB 3.0 connector.

(18) Front drive backplane

Provides power and data channels for drives at the server front.

(19) Drive

Provides data storage space. Drives support hot swapping. Both SSDs and HDDs are supported and the supported drive interface types include SAS, SATA, M.2, and PCIe.

(20) Supercapacitor holder

Secures a supercapacitor in the chassis.

(21) Supercapacitor

Supplies power to the flash card on the power fail safeguard module, which enables the storage controller to back up data to the flash card when a power outage occurs.

(22) SATA M.2 SSD expander module

Provides M.2 SSD slots.

(23) SATA M.2 SSD

Provides data storage space for the server.

(24) Serial & DSD module

Provides one serial port and two SD card slots.

(26) Encryption module

Provides encryption services for the server to enhance data security.

(27) Fan cage

Accommodates fan modules.

(28) Fan

Supports server ventilation. Fan modules are hot swappable and support N+1 redundancy.

(29) System battery

Supplies power to the system clock to ensure system time correctness.

(30) Chassis open-alarm module

Detects whether the access panel is removed. The detection result can be viewed from the HDM Web interface.

(31) Air baffle

Provides ventilation aisles for processor heatsinks and memory modules and provides support for the supercapacitor.

 

Front panel

Front panel view of the server

Figure 3 8LFF front panel

 

Table 2 8LFF front panel description

Item    Description
1       USB 3.0 connector
2       Drive or LCD smart management module (optional)
3       Serial label pull tab
4       HDM dedicated management connector
5       USB 2.0 connector
6       VGA connector

 

Figure 4 12LFF front panel

 

Table 3 12LFF front panel description

Item    Description
1       12LFF drives (optional)
2       USB 3.0 connector
3       Drive or LCD smart management module (optional)
4       Serial label pull tab
5       HDM dedicated management connector
6       USB 2.0 connector
7       VGA connector

 

Figure 5 8SFF front panel

 

Table 4 8SFF front panel description

Item    Description
1       Bay 1: 8SFF drives (optional)*
2       Bay 2: 8SFF drives (optional)*
3       Bay 3: 8SFF drives (optional)*
4       USB 3.0 connector
5       LCD smart management module (optional)
6       Serial label pull tab
7       HDM dedicated management connector
8       USB 2.0 connector
9       VGA connector

*: Drive types supported by the server vary by drive backplane configuration. For more information, see "Drive backplanes."

 

Figure 6 25SFF front panel

 

Table 5 25SFF front panel description

Item    Description
1       25SFF drives (optional)
2       USB 3.0 connector
3       Drive or LCD smart management module (optional)
4       Serial label pull tab
5       HDM dedicated management connector
6       USB 2.0 connector
7       VGA connector

 

LEDs and buttons

Front panel LEDs and buttons

Figure 7 Front panel LEDs and buttons

 

Table 6 LEDs and buttons on the front panel

Button/LED

Status

Power on/standby button and system power LED

·     Steady green—The system has started.

·     Flashing green (1 Hz)—The system is starting.

·     Steady amber—The system is in standby state.

·     Off—No power is present. Possible reasons:

¡     No power source is connected.

¡     No power supplies are present.

¡     The installed power supplies are faulty.

¡     The system power cords are not connected correctly.

OCP 3.0 network adapter Ethernet port LED

·     Steady green—A link is present on a port of an OCP 3.0 network adapter.

·     Flashing green—A port on an OCP 3.0 network adapter is receiving or sending data.

·     Off—No ports on the OCP 3.0 network adapter are in use.

NOTE:

The server supports two OCP 3.0 network adapters and supports expansion with one more OCP 3.0 network adapter.

Health LED

·     Steady green—The system is operating correctly or a minor alarm is present.

·     Flashing green (4 Hz)—HDM is initializing.

·     Flashing amber (1 Hz)—A major alarm is present.

·     Flashing red (1 Hz)—A critical alarm is present.

If a system alarm is present, log in to HDM to obtain more information about the system running status.

UID button LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Activate the UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being upgraded or HDM is performing out-of-band firmware update. Do not power off the server.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds.

·     Off—UID LED is not activated.

 

Intelligent security bezel LEDs and buttons

The LEDs of the intelligent security bezel can be linked with the server health status to reflect the running status and health information of the server, which accelerates on-site inspection and fault location. The LED effects of the intelligent security bezel also support custom settings. The default LED effects are as shown in Table 7.

Figure 8 Intelligent security bezel LEDs and buttons

 

Table 7 LEDs and buttons on the intelligent security bezel

Standby

·     Standby—Steady white.

Startup

·     POST phase—White LEDs gradually light up from the middle to both sides, reflecting the percentage progress of the POST process.

·     POST finished—White LEDs flow from the middle to both sides three times with a flowing effect.

Running

·     Normal—Breathing white (0.2 Hz brightness transition), where the number of lit LEDs indicates the load level. As the overall load power consumption increases, the LEDs gradually light up from the middle and spread to both sides. The proportion of lit LEDs corresponds to the load: no load (below 10%), light load (10% to 50%), medium load (50% to 80%), and heavy load (above 80%).

·     Predictive alarming (only drive predictive alarming is supported)—Breathing white (1 Hz brightness transition).

·     Major alarm present—Flashing amber (1 Hz).

·     Critical alarm present (only power error is supported)—Flashing red (1 Hz).

Remote control

·     System is being remotely managed or HDM is updating firmware out-of-band (do not power off the server)—All LEDs flashing white (1 Hz).

·     HDM restarting—Some LEDs flashing white (1 Hz).

 

Ports

Table 8 Ports on the front panel

Port

Type

Description

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

USB connector

USB 3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

HDM dedicated management connector

Type-C

Connects a Type-C to USB adapter cable, which connects to a USB Wi-Fi adapter or USB drive.

 

Rear panel

Rear panel view

Figure 9 Rear panel components

 

Table 9 Rear panel description

Item    Description
1       PCIe riser bay 1: PCIe slots 1 through 3
2       PCIe riser bay 2: PCIe slots 4 through 6
3       PCIe riser bay 3: PCIe slots 7 and 8
5       Power supply 2
6       Power supply 1
7       OCP 3.0 network adapter/Serial & DSD module (optional)
8       VGA connector
9       Two USB 3.0 connectors
10      HDM dedicated network port (1Gbps, RJ-45, default IP address 192.168.1.2/24)
11      OCP 3.0 network adapter (optional)

For more information about serial & DSD modules, see "Serial & DSD module."
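
The HDM dedicated network port uses the default IP address 192.168.1.2/24. A quick connectivity check from a management PC on the same subnet might look as follows (a sketch for a Linux host; the interface name eth0 is an assumption — substitute your own):

    # Give the management PC an address in the HDM default subnet (assumed interface: eth0).
    ip addr add 192.168.1.10/24 dev eth0
    # Verify that HDM answers at its default address.
    ping -c 3 192.168.1.2
    # Then browse to https://192.168.1.2 to reach the HDM Web interface.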

 

LEDs

Figure 10 Rear panel LEDs

 

(1) UID LED

(2) Link LED of the Ethernet port

(3) Activity LED of the Ethernet port

(4) Power supply LED for power supply 1

(5) Power supply LED for power supply 2

 

Table 10 LEDs on the rear panel

LED

Status

UID LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Enable UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The firmware is being upgraded or HDM is performing out-of-band firmware update. Do not power off the server.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for 8 seconds.

·     Off—UID LED is not activated.

Link LED of the Ethernet port

·     Steady green—A link is present on the port.

·     Off—No link is present on the port.

Activity LED of the Ethernet port

·     Flashing green (1 Hz)—The port is receiving or sending data.

·     Off—The port is not receiving or sending data.

Power supply LED

·     Steady green—The power supply is operating correctly or the server is in standby state.

·     Flashing green (0.33 Hz)—The power supply is in standby state and does not output power.

·     Flashing green (2 Hz)—The power supply is updating its firmware.

·     Steady amber—Either of the following conditions exists:

¡     The power supply is faulty.

¡     The power supply does not have power input, but another power supply has correct power input.

·     Flashing amber (1 Hz)—An alarm has occurred on the power supply.

·     Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown.

 

Ports

Table 11 Ports on the rear panel

Port

Type

Description

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

BIOS port

DB-9

The BIOS serial port is used for the following purposes (for a connection sketch, see the example following this table):

·     Log in to the server when the remote network connection to the server has failed.

·     Establish a GSM modem or encryption lock connection.

NOTE:

The port is on the serial & DSD module. For more information, see "Serial & DSD module."

USB connector

USB 3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

HDM dedicated network port

RJ-45

Establishes a network connection to manage HDM from its Web interface.

Power receptacle

Standard single-phase

Connects the power supply to the power source.
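
A minimal sketch of attaching a serial console to the BIOS serial port from a Linux PC through a USB-to-DB-9 adapter (the /dev/ttyUSB0 device name and the 115200 8N1 settings are assumptions — verify the configured baud rate in the BIOS setup):

    # Open an interactive serial console on the assumed adapter device.
    minicom -D /dev/ttyUSB0 -b 115200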

 

System board

System board components

Figure 11 System board components

 

Table 12 System board components

No.    Description (silkscreen mark in parentheses, where present)
1      OCP 3.0 network adapter connector 2/DSD module connector (OCP2&DSD&UART CARD)
2      Server management module connector (BMC CON)
3      TPM/TCM connector (TPM)
4      System battery
5      PCIe riser connector 1 (RISER1 PCIe x16)
6      Fan connector for OCP 3.0 network adapter 1 (OCP1 FAN)
7      AMD HDT debugging connector (AMD HDT)
8      MCIO connector C1-G1A (C1-G1A)
9      AUX connector for OCP 3.0 network adapter 1 (OCP1 AUX)
10     MCIO connector C1-G1C (C1-G1C)
11     Drive backplane AUX connector 7 (AUX 7)
12     Front M.2 AUX connector (M.2 AUX(FRONT))
13     Drive backplane AUX connector 9 (AUX 9)
14     Power connector for OCP 3.0 network adapter 1 (OCP1 PWR)
15     Front I/O connector (RIGHT EAR)
16     Drive backplane AUX connector 3 (AUX3)
17     Drive backplane AUX connector 2 (AUX2)
18     LP SlimSAS connector C1-P4A (C1-P4A)
19     LCD smart management module connector (DIAG LCD)
20     MCIO connector C1-P1C (C1-P1C)
21     MCIO connector C1-P1A (C1-P1A)
22     MCIO connector C1-P0A (C1-P0A)
23     MCIO connector C1-P0C (C1-P0C)
24     Temperature sensor connector (TEMP SENSE)
25     MCIO connector C1-P2C (C1-P2C)
26     MCIO connector C1-P2A (C1-P2A)
27     MCIO connector C1-P3A (C1-P3A)
28     MCIO connector C1-P3C (C1-P3C)
29     Drive backplane AUX connector 1 (AUX1)
30     Front VGA and USB 2.0 connector (LEFT EAR)
31     Fan board AUX connector 1 (FAN AUX1)
32     Power connector for the fan board
33     Chassis-open alarm module connector (INTRUDER)
34     Power board AUX connector
35     Drive backplane AUX connector 8 (AUX8)
36     Drive backplane power connector 1 (PWR1)
37     Drive backplane power connector 2 (PWR2)
38     Drive backplane power connector 3 (PWR3)
39     Drive backplane power connector 6 (PWR6)
40     Power board STBY power connector (STBY PWR)
41     Drive backplane power connector 8 (PWR8)
42     Drive backplane power connector 7 (PWR7)
43     Drive backplane power connector 5 (PWR5)
44     Drive backplane power connector 4 (PWR4)
45     MCIO connector C1-G3A (C1-G3A)
46     OCP 3.0 network adapter connector 2 (PCIe expansion connector) (OCP2 x8)
47     PCIe riser connector 2 (RISER2 PCIe x16)
48     MCIO connector C1-G3C (C1-G3C)
49     Embedded USB 3.0 connector (INTER USB3.0)
50     Liquid leakage detection module connector (LEAKDET)
51     Drive backplane AUX connector 5 (AUX5)
52     Drive backplane AUX connector 6 (AUX6)
53     Drive backplane AUX connector 4 (AUX4)
X      System maintenance switch

 

System maintenance switch

Figure 12 shows the system maintenance switch. Table 13 describes how to use the maintenance switch.

Figure 12 System maintenance switch

 

Table 13 System maintenance switch description

Item 1:

·     Off (default)—HDM login requires the username and password of a valid HDM user account.

·     On—HDM login requires the default username and password.

For security purposes, turn off the switch after you complete tasks with the default username and password, as a best practice.

Item 5:

·     Off (default)—Normal server startup.

·     On—Restores the default BIOS settings.

To restore the default BIOS settings:

1.     Power off the server, and then turn on the switch.

2.     Power on the server and wait for a minimum of 10 seconds.

3.     Power off the server, and then turn off the switch.

4.     Start the server and verify that the POST screen prompts The CMOS defaults were loaded.

CAUTION:

The server cannot start up while the switch is turned on. To avoid service data loss, stop running services and power off the server before turning on the switch.

Item 6:

·     Off (default)—Normal server startup.

·     On—Clears all passwords from the BIOS at server startup.

If this switch is on, the server clears all passwords at each startup. If you do not need to clear the passwords, turn off the switch before the next server startup.

Items 2, 3, 4, 7, and 8: Reserved for future use.

 

DIMM slots

A0, B0…H0, A1, B1…H1 represent the DIMM slot numbers, as shown in Figure 13.

Figure 13 System board DIMM slot layout

 


Appendix B  Component specifications

For components compatible with the server and detailed component information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

About component model names

The model name of a hardware option in this document might differ slightly from its model name label.

A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR5-4800-32G-1Rx4 memory model represents memory module labels including UN-DDR5-4800-32G-1Rx4-R, UN-DDR5-4800-32G-1Rx4-F, and UN-DDR5-4800-32G-1Rx4-S, which have different prefixes and suffixes.

DIMMs

The server supports one processor. The processor supports 12 channels, and each channel supports one DIMM. That is, one processor supports 12 DIMMs. For the physical layout of DIMM slots, see "DIMM slots."

DRAM DIMM rank classification label

A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.

To determine the rank classification of a DIMM, use the label attached to the DIMM, as shown in Figure 14.

Figure 14 DDR DIMM rank classification label

 

Table 14 DIMM rank classification label description

Callout

Description

Remarks

1

Capacity

Options include:

·     32 GB.

·     64 GB.

2

Number of ranks

Options include:

·     1R—One rank (Single-Rank).

·     2R—Two ranks (Dual-Rank). A 2R DIMM is equivalent to two 1R DIMMs.

·     4R—Four ranks (Quad-Rank). A 4R DIMM is equivalent to two 2R DIMMs.

·     8R—Eight ranks (8-Rank). An 8R DIMM is equivalent to two 4R DIMMs.

3

Data width

Options include:

·     ×4—4 bits.

·     ×8—8 bits.

4

DIMM generation

DDR5

5

Data rate

4800B, indicating a data rate of 4800 MT/s.

6

DIMM type

·     R, indicating RDIMM.

 

HDDs and SSDs

Drive numbering

The server provides different drive numbering schemes for different drive configurations at the server front and rear, as shown in Figure 15 through Figure 20.

Figure 15 Drive numbering for front 25SFF drive configuration

 

Figure 16 Drive numbering for front 12LFF drive configuration

 

Figure 17 Drive numbering for front 8LFF drive configuration

 

Figure 18 Drive numbering for rear 2LFF+4SFF drive configuration

 

Figure 19 Drive numbering for rear 4LFF+2SFF drive configuration

 

Figure 20 Drive numbering for rear 2SFF+2SFF+4SFF drive configuration

 

Drive LEDs

The server supports SAS/SATA drives and NVMe drives.

Figure 21 shows the location of the LEDs on a drive to indicate the drive status.

Figure 21 Drive LEDs

 

(1) Fault/UID LED

(2) Present/Active LED

 

To identify the status of a SAS or SATA drive, use Table 15. To identify the status of an NVMe drive, use Table 16.

Table 15 SAS/SATA drive LED description

Fault/UID LED status       Present/Active LED status               Description
Flashing amber (0.5 Hz)    Steady green/Flashing green (4.0 Hz)    A drive failure is predicted. As a best practice, replace the drive before it fails.
Steady amber               Steady green/Flashing green (4.0 Hz)    The drive is faulty. Replace the drive immediately.
Steady blue                Steady green/Flashing green (4.0 Hz)    The drive is operating correctly and is selected by the RAID controller.
Off                        Flashing green (4.0 Hz)                 The drive is performing a RAID migration or rebuild, or the system is reading or writing data to the drive.
Off                        Steady green                            The drive is present but no data is being read or written to the drive.
Off                        Off                                     The drive is not securely installed.

 

Table 16 NVMe drive LED description

Fault/UID LED status       Present/Active LED status               Description
Flashing amber (4 Hz)      Off                                     The drive is in the hot insertion process.
Steady amber               Steady green/Flashing green (4.0 Hz)    The drive is faulty. Replace the drive immediately.
Steady blue                Steady green/Flashing green (4.0 Hz)    The drive is operating correctly and is selected by the RAID controller.
Off                        Flashing green (4.0 Hz)                 The drive is performing a RAID migration or rebuild, or the system is reading or writing data to the drive.
Off                        Steady green                            The drive is present but no data is being read or written to the drive.
Off                        Off                                     The drive is not securely installed.

 

Drive backplanes

The server supports the following types of drive backplanes:

·     SAS/SATA drive backplanes—Support only SAS/SATA drives.

·     UniBay drive backplanes—Support both SAS/SATA and NVMe drives.

·     X SAS/SATA+Y UniBay drive backplanes—Support SAS/SATA drives in all slots and NVMe drives in certain slots. X is the number of slots supporting only SAS/SATA drives, and Y is the number of slots supporting both SAS/SATA and NVMe drives.

For UniBay drive backplanes and X SAS/SATA+Y UniBay drive backplanes:

·     Both drive types are supported only when both SAS/SATA and NVMe data cables are connected.

·     The number of supported SAS/SATA drives and the number of supported NVMe drives vary by cable connection.

Front 8SFF SAS/SATA drive backplane

The PCA-BP-8SFF-2U-G6 8SFF SAS/SATA drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA drives.

Figure 22 8SFF SAS/SATA drive backplane

 

(1) x8 SlimSAS connector (SAS PORT1)

(2) AUX connector (AUX)

(3) Power connector (PWR)

 

 

Front 8SFF UniBay drive backplane

The PCA-BP-8UniBay-2U-G6 8SFF UniBay drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA/NVMe drives.

Figure 23 8SFF UniBay drive backplane

 

(1) x8 SlimSAS connector (SAS PORT)

(2) AUX connector (AUX)

(3) MCIO connector B3/B4 (PCIe5.0 x8)(NVMe B3/B4)

(4) Power connector (POWER)

(5) MCIO connector B1/B2 (PCIe5.0 x8)(NVMe B1/B2)

(6) MCIO connector A3/A4 (PCIe5.0 x8)(NVMe A3/A4)

(7) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe A1/A2)

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 8LFF SAS/SATA drive backplane

The PCA-BP-8LFF-2U-G6 8LFF SAS/SATA drive backplane can be installed at the server front to support eight 3.5-inch SAS/SATA drives.

Figure 24 8LFF SAS/SATA drive backplane

 

(1) x8 Mini-SAS-HD connector (SAS PORT)

(2) Power connector (PWR)

(3) AUX connector (AUX)

 

Front 12LFF SAS/SATA drive backplane

The PCA-BP-12LFF-2U-G6 12LFF SAS/SATA drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA drives.

Figure 25 12LFF SAS/SATA drive backplane

 

(1) x4 SlimSAS connector (SAS PORT 2), managing the last four SAS/SATA drives on the backplane

(2) Power connector 2 (PWR 2)

(3) AUX connector (AUX)

(4) Power connector 1 (PWR 1)

(5) x8 SlimSAS connector (SAS PORT 1), managing the first eight SAS/SATA drives on the backplane

 

Front 8SAS/SATA+4UniBay drive backplane

The PCA-BP-12LFF-4NVMe-2U-G6 12LFF drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA/NVMe drives, including eight SAS/SATA drives and four SAS/SATA/NVMe drives.

Figure 26 8SAS/SATA+4UniBay drive backplane

 

(1) MCIO connector A3 (PCIe5.0 x4)(NVMe-A3), supporting NVMe drive 9

(2) x4 SlimSAS connector (SAS PORT 2), managing the last four SAS/SATA drives on the backplane

(3) Power connector 2 (PWR 2)

(4) AUX connector 1 (AUX 1)

(5) Power connector 1 (PWR 1)

(6) x8 SlimSAS connector (SAS PORT 1), managing the first eight SAS/SATA drives on the backplane

(7) MCIO connector A4 (PCIe5.0 x4)(NVMe-A4), supporting NVMe drive 8

(8) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe-A1/A2), supporting NVMe drives 10 and 11

For more information about drive numbering, see "Drive numbering."

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 4SAS/SATA+8UniBay drive backplane

The PCA-BP-12LFF-EXP-2U-G6 12LFF drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA/NVMe drives, including four SAS/SATA drives and eight SAS/SATA/NVMe drives. The drive backplane integrates an Expander chip to manage 12 SAS/SATA drives through an x8 SlimSAS connector. The drive backplane also provides three downlink interfaces to connect to other drive backplanes and support more drives.

Figure 27 4SAS/SATA+8UniBay drive backplane

 

(1) x8 SlimSAS uplink interface (SAS PORT), managing all drives on the backplane

(2) x4 SlimSAS downlink interface 3 (SAS EXP3)

(3) Power connector 2 (PWR2)

(4) MCIO connector B1/B2 (PCIe5.0 x8)(NVMe-B1/B2), supporting NVMe drives 6 and 7

(5) Power connector 1 (PWR1)

(6) x8 SlimSAS downlink interface 2 (SAS EXP2)

(7) x4 SlimSAS downlink interface 1 (SAS EXP1)

(8) AUX connector (AUX)

(9) MCIO connector B3/B4 (PCIe5.0 x8)(NVMe B3/B4), supporting NVMe drives 4 and 5

(10) MCIO connector A3/A4 (PCIe5.0 x8)(NVMe A3/A4), supporting NVMe drives 8 and 9

(11) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe A1/A2), supporting NVMe drives 10 and 11

For more information about drive numbering, see "Drive numbering."

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 12LFF UniBay drive backplane

The PCA-BP-12LFF-UniBay-2U-G6 12LFF UniBay drive backplane can be installed at the server front to support twelve 3.5-inch SAS/SATA/NVMe drives.

Figure 28 12LFF UniBay drive backplane

 

(1) MCIO connector A3 (PCIe5.0 x4)(NVMe-A3)

(2) x4 SlimSAS connector (SAS PORT 2), managing the last four SAS/SATA drives on the backplane

(3) MCIO connector B1/B2 (PCIe5.0 x8)(NVMe-B1/B2)

(4) Power connector 2 (PWR 2)

(5) AUX connector 1 (AUX 1)

(6) MCIO connector C1 (PCIe5.0 x4)(NVMe-C1)

(7) Power connector 1 (PWR 1)

(8) x8 SlimSAS connector (SAS PORT 1), managing the first eight SAS/SATA drives on the backplane

(9) MCIO connector C3/C4 (PCIe5.0 x8)(NVMe-C3/C4)

(10) MCIO connector C2 (PCIe5.0 x4)(NVMe-C2)

(11) MCIO connector B3/B4 (PCIe5.0 x8)(NVMe-B3/B4)

(12) MCIO connector A4 (PCIe5.0 x4)(NVMe-A4)

(13) MCIO connector A1/A2 (PCIe5.0 x8)(NVMe-A1/A2)

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Front 17SAS/SATA+8UniBay drive backplane

The PCA-BP-25SFF-2U-G6 25SFF drive backplane can be installed at the server front to support twenty-five 2.5-inch SAS/SATA/NVMe drives, including 17 SAS/SATA drives and 8 SAS/SATA/NVMe drives. The drive backplane can use an x8 SlimSAS connector to manage 25 SAS/SATA drives. The drive backplane also integrates an Expander chip and three downlink interfaces to connect to other drive backplanes and support more drives.

Figure 29 17SAS/SATA+8UniBay drive backplane

 

(1) x4 SlimSAS downlink interface 3 (SAS EXP 3)

(2) x8 SlimSAS uplink interface (SAS PORT), managing all drives on the backplane

(3) x8 SlimSAS downlink interface 2 (SAS EXP 2)

(4) x4 SlimSAS downlink interface 1 (SAS EXP 1)

(5) Power connector 1 (PWR 1)

(6) Power connector 2 (PWR 2)

(7) MCIO connector 4 (PCIe5.0 x8)(NVMe 4), supporting NVMe drives 17 and 18

(8) AUX connector (AUX)

(9) MCIO connector 3 (PCIe5.0 x8)(NVMe 3), supporting NVMe drives 19 and 20

(10) MCIO connector 2 (PCIe5.0 x8)(NVMe 2), supporting NVMe drives 21 and 22

(11) Power connector 3 (PWR 3)

(12) MCIO connector 1 (PCIe5.0 x8)(NVMe 1), supporting NVMe drives 23 and 24

For more information about drive numbering, see "Drive numbering."

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Rear 2LFF SAS/SATA drive backplane

The PCA-BP-2LFF-2U-G6 2LFF SAS/SATA drive backplane is installed at the server rear to support two 3.5-inch SAS/SATA drives.

Figure 30 2LFF SAS/SATA drive backplane

 

(1) x4 Mini-SAS-HD connector (SAS PORT1)

(2) AUX connector (AUX1)

(3) Power connector (PWR1)

 

Rear 4LFF SAS/SATA drive backplane

The PCA-BP-4LFF-2U-G6 4LFF SAS/SATA drive backplane is installed at the server rear to support four 3.5-inch SAS/SATA drives.

Figure 31 4LFF SAS/SATA drive backplane

 

(1) AUX connector (AUX1)

(2) Power connector (PWR1)

(3) x4 Mini-SAS-HD connector (SAS PORT1)

 

Rear 2SFF SAS/SATA drive backplane

The PCA-BP-2SFF-2U-G6 2SFF SAS/SATA drive backplane is installed at the server rear to support two 2.5-inch SAS/SATA drives.

Figure 32 2SFF SAS/SATA drive backplane

 

(1) Power connector (PWR)

(2) x4 Mini-SAS-HD connector (SAS PORT)

(3) AUX connector (AUX)

 

Rear 2SFF UniBay drive backplane

The PCA-BP-2SFF-2UniBay-2U-G6 2SFF UniBay drive backplane is installed at the server rear to support two 2.5-inch SAS/SATA/NVMe drives.

Figure 33 2SFF UniBay drive backplane

 

(1) Power connector (PWR)

(2) x4 Mini-SAS-HD connector (SAS PORT)

(3) SlimSAS connector (PCIe4.0 x8)(NVME)

(4) AUX connector (AUX)

PCIe4.0 x8 description:

·     PCIe4.0: Fourth-generation signal speed.

·     x8: Bus bandwidth.

 

Rear 4SFF SAS/SATA drive backplane

The PCA-BP-4SFF-2U-G6 4SFF SAS/SATA drive backplane is installed at the server rear to support four 2.5-inch SAS/SATA drives.

Figure 34 4SFF SAS/SATA drive backplane

 

(1) x4 Mini-SAS-HD connector (SAS PORT)

(2) AUX connector (AUX)

(3) Power connector (PWR)

 

Rear 4SFF UniBay drive backplane

The PCA-BP-4SFF-4UniBay-2U-G6 4SFF UniBay drive backplane is installed at the server rear to support four 2.5-inch SAS/SATA/NVMe drives.

Figure 35 4SFF UniBay drive backplane

 

(1) AUX connector (AUX)

(2) Power connector (PWR)

(3) MCIO connector B1/B2 (PCIe5.0 x8) (NVME-B1/B2)

(4) MCIO connector B3/B4 (PCIe5.0 x8) (NVME-B3/B4)

(5) x4 Mini-SAS-HD connector (SAS PORT)

 

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Bus bandwidth.

 

Riser cards

The server supports the following riser cards:

·     RC-3FHFL-2U-G6

·     RC-3FHHL-2U-G6

·     RC-1FHHL-2U-G6

·     RC-2FHHL-2U-G6

·     Riser 4 assembly module (accommodating two FHFL PCIe modules)

·     Riser 3 assembly module (accommodating two HHHL PCIe modules)

RC-3FHFL-2U-G6

Figure 36 RC-3FHFL-2U-G6 (1)

 

Figure 37 RC-3FHFL-2U-G6 (2)

 

(1) PCIe5.0 x16 (16,8,4,2,1) slot 2/5

(2) PCIe5.0 x16 (16,8,4,2,1) slot 3/6

(3) GPU module power connector

(4) PCIe5.0 x16 (16,8,4,2,1) slot 1/4*

(5) MCIO connector 2-C

(6) MCIO connector 2-A

(7) MCIO connector 1-A

(8) MCIO connector 1-C

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

 

NOTE:

slot 1/4: When the riser card is installed in PCIe riser bay 1, this slot corresponds to PCIe slot 1. When the riser card is installed in PCIe riser bay 2, this slot corresponds to PCIe slot 4. This rule applies to all the other PCIe slots. For information about PCIe slots, see "Rear panel view."

 

RC-3FHHL-2U-G6

Figure 38 RC-3FHHL-2U-G6 (1)

 

Figure 39 RC-3FHHL-2U-G6 (2)

 

(1) PCIe5.0 x16 (8,4,2,1) slot 2/5

(2) PCIe5.0 x16 (8,4,2,1) slot 3/6

(3) PCIe5.0 x16 (16,8,4,2,1) slot 1/4*

(4) MCIO connector 1-A

(5) MCIO connector 1-C

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

 

NOTE:

slot 1/4: When the riser card is installed in PCIe riser bay 1, this slot corresponds to PCIe slot 1. When the riser card is installed in PCIe riser bay 2, this slot corresponds to PCIe slot 4. This rule applies to all the other PCIe slots. For information about PCIe slots, see "Rear panel view."

 

RC-1FHHL-2U-G6

Figure 40 RC-1FHHL-2U-G6

 

(1) PCIe5.0 x16 slot 3/6*

 

 

NOTE:

slot 3/6: When the riser card is installed in PCIe riser bay 1, this slot corresponds to PCIe slot 3. When the riser card is installed in PCIe riser bay 2, this slot corresponds to PCIe slot 6. This rule applies to all the other PCIe slots. For information about PCIe slots, see "Rear panel view."

 

RC-2FHHL-2U-G6

Figure 41 RC-2FHHL-2U-G6

 

(1) PCIe5.0 x16 slot 5/6*

(2) PCIe5.0 x16 slot 2/3

 

 

 

NOTE:

slot 5/6: When the riser card is installed in PCIe riser bay 1, this slot corresponds to PCIe slot 5. When the riser card is installed in PCIe riser bay 2, this slot corresponds to PCIe slot 6. This rule applies to all the other PCIe slots. For information about PCIe slots, see "Rear panel view."

 

Riser 4 assembly module (accommodating two FHFL PCIe modules)

The riser 4 assembly module is shown in Figure 42.

Figure 42 Riser 4 assembly module (accommodating two FHFL PCIe modules)

 

(1) PCIe interface cable S2 from slot 9 (connected to connector C1-P3C on the system board)

(2) PCIe interface cable S1 from slot 10 (connected to connector C1-G3A on the system board)

(3) PCIe5.0 x16 (16,8,4,2,1) in slot 10

(4) PCIe5.0 x16 (16,8,4,2,1) in slot 9

(5) PCIe interface cable S2 from slot 10 (connected to connector C1-G3C on the system board)

(6) Power connector S3 from slot 10 (connected to connector PWR6 on the system board)

(7) Power connector S3 from slot 9 (connected to connector PWR7 on the system board)

(8) PCIe interface cable S1 from slot 9 (connected to connector C1-P3A on the system board)

PCIe5.0 x16 (16,8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x16: Connector bandwidth.

·     (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1.

 

Riser 3 assembly module (accommodating two HHHL PCIe modules)

The riser 3 assembly module is shown in Figure 43.

Figure 43 Riser 3 assembly module (accommodating two HHHL PCIe modules)

 

(1) PCIe interface cable S1 from slot 8 (connected to connector C2-P3C on the system board)

(2) Power connector S2 from slot 8 (connected to connector PWR7 on the system board)

(3) PCIe5.0 x8 (8,4,2,1) in slot 8

(4) PCIe5.0 x8 (8,4,2,1) in slot 7

(5) Power connector S2 from slot 7 (connected to connector PWR6 on the system board)

(6) PCIe interface cable S1 from slot 7 (connected to connector C2-P3A on the system board)

PCIe5.0 x8 (8,4,2,1) description:

·     PCIe5.0: Fifth-generation signal speed.

·     x8: Connector bandwidth.

·     (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1.

 

Fan modules

The server supports four hot-swappable fan modules with N+1 redundancy. Figure 44 shows the layout of the fan modules in the chassis.

The server can automatically adjust the fan speed based on the actual system temperature. The speed policy is designed to optimize system cooling while minimizing noise levels, achieving an optimal balance between the two.

Figure 44 Fan module layout

 

SATA M.2 expander module

Figure 45 SATA M.2 expander module front view


(1) SATA data cable connector

(2) SATA M.2 SSD card slot 1

 

Figure 46 SATA M.2 expander module rear view


 

(1) SATA M.2 SSD card slot 2

 

M.2 SSD storage controller

An M.2 SSD storage controller supports installation of up to two SATA M.2 SSDs.

If you install two SATA M.2 SSDs on an M.2 SSD storage controller, use two SSDs of the same model to ensure high availability.

The M.2 SSD storage controller supports setting up SATA M.2 SSDs in RAID 0 or RAID 1. For the RAID configuration method, see the storage controller user guide.

The M.2 SSD storage controller can be installed in any PCIe slot with x8 bus bandwidth or higher.

As a best practice, install the operating system on the SATA M.2 SSDs.

Figure 47 M.2 SSD storage controller

 

(1) SATA M.2 SSD card slot 1

(2) SATA M.2 SSD card slot 2

 

Server management module

The server management module is installed on the system board to provide I/O connectors and HDM out-of-band features for the server.

Figure 48 Server management module

 

(1) VGA connector

(2) Two USB 3.0 connectors

(3) HDM dedicated network interface

(4) UID LED

(5) HDM serial port

(6) iFIST module

(7) NCSI connector

 

 

Serial & DSD module

The serial & DSD module is installed in the slot on the server rear panel. The module provides two SD card slots, and the two SD cards form RAID 1 by default.

Figure 49 Serial & DSD module


 

Table 17 Component description

Item    Description
1       SD card slot 1
2       SD card slot 2
3       Serial port

 

B/D/F information

You can obtain B/D/F information by using one of the following methods (see the example following this list):

·     BIOS log—Search for the dumpiio keyword in the BIOS log.

·     UEFI shell—Execute the pci command. For information about how to use the command, execute the help pci command.

·     Operating system—The method varies by OS:

¡     For Linux, execute the lspci command. If Linux does not support the lspci command by default, obtain and install the pci-utils package from the yum repository.

¡     For Windows, install the pciutils package, and then execute the lspci command.

¡     For VMware, execute the lspci command.
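
For example, on a Linux host, the following commands list devices and their B/D/F values (a sketch; the 31:00.0 address is taken from the OCP network adapter example in this guide):

    # List all PCIe devices. Each output line starts with the B/D/F value,
    # for example "31:00.0" (bus 0x31, device 0x00, function 0).
    lspci
    # Show full details for a single device identified by its B/D/F value.
    lspci -vvv -s 31:00.0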


Appendix C  Managed removal of OCP network adapters

Before you begin

Before you perform a managed removal of an OCP network adapter, perform the following tasks:

·     Use the OS compatibility query tool at http://www.h3c.com/en/home/qr/default.htm?id=66 to obtain operating systems that support managed removal of OCP network adapters.

·     Make sure the BIOS version is 6.30.17 or higher, the HDM2 version is 1.21 or higher, and the CPLD version is V007 or higher.

Performing a hot removal

This section uses an OCP network adapter in slot 16 as an example.

To perform a hot removal:

1.     Access the operating system.

2.     Execute the dmidecode -t 9 command to search for the bus address of the OCP network adapter. As shown in Figure 50, the bus address of the OCP network adapter in slot 16 is 0000:31:00.0.

Figure 50 Searching for the bus address of an OCP network adapter by slot number

 

3.     Power off the OCP network adapter by executing the echo 0 > /sys/bus/pci/slots/slot number/power command, where slot number represents the number of the slot where the OCP network adapter resides.

Figure 51 Executing the echo 0 > /sys/bus/pci/slots/slot number/power command

 

4.     Identify whether the OCP network adapter has been disconnected:

¡     Observe the OCP network adapter LED. If the LED is off, the OCP network adapter has been disconnected.

¡     Execute the lspci -vvv -s 0000:31:00.0 command. If no output is displayed, the OCP network adapter has been disconnected.

Figure 52 Identifying OCP network adapter status

 

5.     Re-insert the OCP network adapter.

6.     Power on the OCP network adapter by executing the echo 1 > /sys/bus/pci/slots/slot number/power command, where slot number represents the number of the slot where the OCP network adapter resides.

7.     Identify whether the OCP network adapter has been connected:

¡     Observe the OCP network adapter LED. If the LED is on, the OCP network adapter has been connected.

¡     Execute the lspci -vvv -s 0000:31:00.0 command. If output is displayed, the OCP network adapter has been connected.

Figure 53 Identifying OCP network adapter status

 

8.     Verify that no exception exists. If any exception occurs, contact H3C Support.
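
The whole procedure can be driven from a shell. A minimal sketch, using the slot number (16) and bus address (0000:31:00.0) from the example above — substitute the values for your own adapter:

    # Find the bus address of the OCP network adapter by slot number.
    dmidecode -t 9
    # Power off the adapter in slot 16 before physically removing it.
    echo 0 > /sys/bus/pci/slots/16/power
    # Verify the disconnection; no output means the adapter is offline.
    lspci -vvv -s 0000:31:00.0
    # After re-inserting the adapter, power it back on.
    echo 1 > /sys/bus/pci/slots/16/power
    # Output from this command indicates the adapter is connected again.
    lspci -vvv -s 0000:31:00.0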


Appendix D  Environment requirements

About environment requirements

The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.

Be aware that the actual maximum operating temperature of the server might be lower than stated because of poor site cooling performance. In a real data center, the server cooling performance might decrease because of adverse external factors, such as poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.

General environment requirements

Operating temperature

Minimum: 5°C (41°F)

Maximum: 45°C (113°F)

CAUTION:

The maximum temperature varies by hardware option presence. For more information, see "Operating temperature requirements."

Storage temperature

–40°C to +70°C (–40°F to +158°F)

Operating humidity

8% to 90%, noncondensing

Storage humidity

5% to 95%, noncondensing

Operating altitude

–60 m to +3000 m (–196.85 ft to +9842.52 ft)

Above 900 m (2952.76 ft), the allowed maximum temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude.

Storage altitude

–60 m to +5000 m (–196.85 ft to +16404.20 ft)

 


Appendix E  Product recycling

New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualifications are contracted by New H3C to process the recycled hardware in an environmentally responsible way.

For product recycling services, contact New H3C at

·     Tel: 400-810-0504

·     E-mail: [email protected]

·     Website: http://www.h3c.com

 


Appendix F  Glossary

Item

Description

B

BIOS

Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's management module. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality.

C

CPLD

Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits.

E

Ethernet adapter

An Ethernet adapter, also called a network interface card (NIC), connects the server to the network.

F

UniSystem

UniSystem is server management software provided by H3C for easy and extensible server management. It guides users through quick server configuration and provides an API for users to develop their own management tools.

G

GPU module

Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance.

H

HDM

Hardware Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server.

Hot swapping

A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting the system operation.

K

KVM

KVM is a management method that allows remote users to use their local video display, keyboard, and mouse to monitor and control the server.

R

RAID

Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical drives into a single logical unit to improve storage performance and data security.

Redundancy

A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails.

U

U

A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks.

 


Appendix G  Acronyms

Acronym

Full name

B

BIOS

Basic Input/Output System

C

CMA

Cable Management Arm

CPLD

Complex Programmable Logic Device

D

DCPMM

Data Center Persistent Memory Module

DDR

Double Data Rate

DIMM

Dual In-Line Memory Module

DRAM

Dynamic Random Access Memory

G

GPU

Graphics Processing Unit

H

HBA

Host Bus Adapter

HDD

Hard Disk Drive

HDM

Hardware Device Management

I

IDC

Internet Data Center

iFIST

integrated Fast Intelligent Scalable Toolkit

K

KVM

Keyboard, Video, Mouse

L

LRDIMM

Load Reduced Dual Inline Memory Module

N

NCSI

Network Controller Sideband Interface

NVMe

Non-Volatile Memory Express

P

PCIe

Peripheral Component Interconnect Express

POST

Power-On Self-Test

R

RAID

Redundant Arrays of Independent Disks

RDIMM

Registered Dual Inline Memory Module

S

SAS

Serial Attached Small Computer System Interface

SATA

Serial ATA

SD

Secure Digital

SDS

Secure Diagnosis System

SFF

Small Form Factor

SSD

Solid State Drive

T

TCM

Trusted Cryptography Module

TDP

Thermal Design Power

TPM

Trusted Platform Module

U

UID

Unit Identification

UPS

Uninterruptible Power Supply

USB

Universal Serial Bus

 
