Contents
Appendix A Server specifications
Server models and chassis view
Front panel view of the server
Processor mezzanine board components
Appendix B Component specifications
DRAM DIMM rank classification label
Front 8SFF SAS/SATA drive backplane
Front 8SFF UniBay drive backplane
Front 25SFF drive backplane (17SAS/SATA + 8UniBay)
Appendix C Managed removal of OCP network adapters
Appendix D Environment requirements
About environment requirements
General environment requirements
Operating temperature requirements
Any drive configuration except for 8SFF drive configuration
Appendix A Server specifications
The information in this document might differ from your product if it contains custom configuration options or features.
Figures in this document are for illustration only.
Server models and chassis view
The H3C UniServer R6900 G6 server is a new-generation enterprise-class 4U four-processor rack server developed independently by H3C for critical business applications. Its design has been fully optimized over the previous generation, reaching new heights in computing efficiency, expansion capability, and energy efficiency, and it succeeds the G5 as H3C's benchmark four-processor server. It is an ideal choice for data-intensive applications such as large-scale virtualization, databases, in-memory computing, data analysis, data warehousing, business intelligence, and ERP.
Figure 1 Chassis view
Technical specifications
Item |
Specifications |
Dimensions (H × W × D) |
· Without a security bezel: 174.8 × 447 × 829.4 mm (6.88 × 17.60 × 32.65 in) · With a security bezel: 174.8 × 447 × 858 mm (6.88 × 17.60 × 33.78 in) |
Max. weight |
56.60 kg (124.78 lb) |
Processors |
4 × Eagle Stream processors (Maximum 350 W power consumption) |
Memory |
A maximum of 64 DIMMs. Supports DDR5 DIMMs. |
Storage controllers |
· Embedded VROC storage controller · Storage controller · NVMe VROC module · Serial & DSD module (supports RAID 1) |
Chipset |
Intel C741 Emmitsburg chipset |
Network connectors |
· 1 × embedded 1 Gbps HDM dedicated port · 2 × OCP 3.0 network adapter connectors (with NCSI support) |
Integrated graphics |
The graphics chip (model AST2600) is integrated in the BMC management chip and provides a maximum resolution of 1920 × 1200 @ 60 Hz (32 bpp), where: · 1920 × 1200: 1920 horizontal pixels and 1200 vertical pixels. · 60 Hz: Screen refresh rate, 60 times per second. · 32 bpp: Color depth. The higher the value, the more colors can be displayed. The integrated graphics can provide the maximum resolution of 1920 × 1200 pixels only after a graphics driver compatible with the operating system is installed. Otherwise, only the default resolution of the operating system is supported. If you attach monitors to both the front and rear VGA connectors, only the monitor connected to the front VGA connector is available. |
I/O connectors |
· 6 × USB connectors (two on the system board, two at the server rear, and two at the server front) · 10 × embedded SATA connectors: ¡ 1 × x2 M.2 connector ¡ 2 × x4 SlimSAS connectors · 32 × embedded MCIO connectors (PCIe5.0 x8) · 1 × RJ-45 HDM dedicated port (at the server rear) · 2 × VGA connectors (one at the server rear and one at the server front) · 1 × serial port (available only when a Serial&DSD module is installed) · 1 × dedicated management interface (at the server front) |
Expansion slots |
· 22 × PCIe 5.0 standard slots · 2 × OCP 3.0 network adapter slots |
External USB optical drives |
|
Management |
· HDM agentless management tool (with independent management port) · H3C iFIST/UniSystem management software · LCD smart management module · U-Center data center management platform (optional) |
Security |
· Security enclosure · TCM/TPM · Dual-factor authentication · Silicon Root of Trust firmware protection module (optional) · Secure retirement |
Power supplies |
4 × hot-swappable power supplies, N + N redundancy |
Standards |
CCC and SEPA |
Components
Figure 2 R6900 G6 server components
Table 1 R6900 G6 server components
Item |
Description |
(1) Chassis access panel |
N/A |
(2) Storage controller |
Provides RAID capability to SAS/SATA drives, including RAID configuration and RAID scale-up. It supports online upgrade of the controller firmware and remote configuration. |
(3) Standard PCIe network adapter |
Installed in a standard PCIe slot to provide network ports. |
(4) SATA M.2 SSD |
Provides data storage space for the server. |
(5) Chassis open-alarm module |
Detects if the access panel is removed. The detection result can be displayed from the HDM Web interface. |
(6) SATA M.2 SSD expander module |
Provides M.2 SSD slots. |
(7) OCP 3.0 network adapter |
Installed on the OCP network adapter connector on the system board. |
(8) Riser card blank |
Installed on an empty PCIe riser connector to ensure good ventilation. |
(9) GPU module |
Provides computing capability for image processing and AI services. |
(10) Encryption module |
Provides encryption services for the server to enhance data security. |
(11) Serial&DSD module |
Provides a serial port and dual-SD card slots. |
(12) Server management module |
Provides various I/O interfaces and HDM out-of-band management. |
(13) System battery |
Supplies power to the system clock to ensure system time correctness. |
(14) Riser card |
Provides PCIe slots. |
(15) Power expander module |
Provides power connectors for power supplies 3 and 4. |
(16) Power supply |
Supplies power to the server. The power supplies support hot swapping and N+N redundancy. |
(17) NVMe VROC module |
Works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives. |
(18) Chassis |
N/A |
(19) Chassis ears |
Attach the server to the rack. The right ear is integrated with the front I/O component, and the left ear is integrated with VGA and USB 2.0 connectors. |
(20) LCD smart management module |
Displays basic server information, operating status, and fault information. Together with HDM event logs, users can fast locate faulty components and troubleshoot the server, ensuring server operation. |
(21) Drive backplane |
Provides power and data channels for drives. |
(22) Drive |
Provides data storage space. Drives support hot swapping. |
(23) Fan module |
Helps server ventilation. Fan modules support hot swapping and N+1 redundancy. |
(24) Fan module cage |
Accommodates fan modules. |
(25) Air baffle |
Provides ventilation aisles for processor heatsinks and memory modules and provides support for the supercapacitor. |
(26) System board |
One of the most important parts of a server, on which multiple components are installed, such as processor, memory, and fan module. It is integrated with basic server components, including the BIOS chip and PCIe connectors. |
(27) Memory |
Stores computing data and data exchanged with external storage temporarily. |
(28) Supercapacitor holder |
Secures a supercapacitor in the chassis. |
(29) Supercapacitor |
Supplies power to the flash card on the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when power outage occurs. |
(30) Processor retaining bracket |
Attaches a processor to the heatsink. |
(31) Processor |
Integrates memory and PCIe controllers to provide data processing capabilities for the server. |
(32) Processor socket cover |
Installed over an empty processor socket to protect pins in the socket. |
(33) Processor heatsink |
Cools the processor. |
(34) Processor mezzanine board |
Provides extension slots for processors and memory modules. |
Front panel
Front panel view of the server
Figure 3 shows the front panel view.
Table 2 Front panel description
Item |
Description |
1 |
Bay 1 for 8SFF/8SFF UniBay drives (optional) |
2 |
Bay 2 for 8SFF/8SFF UniBay drives (optional) |
3 |
Bay 3 for 8SFF/8SFF UniBay drives (optional) |
4 |
Drive or LCD smart management module (optional) |
5 |
USB 3.0 connector |
6 |
8SFF UniBay drives (optional) |
7 |
25SFF drives (optional) |
8 |
Bay 6 for 8SFF UniBay drives (optional) |
9 |
Bay 5 for 8SFF/8SFF UniBay drives (optional) |
10 |
Bay 4 for 8SFF/8SFF UniBay drives (optional) |
11 |
Serial label pull tab |
12 |
HDM dedicated management connector |
13 |
USB 2.0 connector |
14 |
VGA connector |
|
NOTE: Different drives require different drive backplanes. For more information about drive backplanes, see "Drive backplanes." |
LEDs and buttons
The LEDs and buttons are the same on all server models. Figure 4 shows the front panel LEDs and buttons. Table 3 describes the status of the front panel LEDs.
Figure 4 Front panel LEDs and buttons
(1) Power on/standby button and system power LED |
(2) OCP network adapter Ethernet port LED |
(3) Health LED |
(4) UID button LED |
Table 3 LEDs and buttons on the front panel
Button/LED |
Status |
Power on/standby button and system power LED |
· Steady green—The system has started. · Flashing green (1 Hz)—The system is starting. · Steady amber—The system is in standby state. · Off—No power is present. Possible reasons: ¡ No power source is connected. ¡ No power supplies are present. ¡ The installed power supplies are faulty. ¡ The system power cords are not connected correctly. |
OCP network adapter Ethernet port LED |
· Steady green—A link is present on the port. · Flashing green—The port is receiving or sending data. · Off—No link is present on the port. NOTE: The server supports up to two OCP 3.0 network adapters. |
Health LED |
· Steady green—The system is operating correctly or a minor alarm is present. · Flashing green (4 Hz)—HDM is initializing. · Flashing amber (1 Hz)—A major alarm is present. · Flashing red (1 Hz)—A critical alarm is present. If a system alarm is present, log in to HDM to obtain more information about the system running status. |
UID button LED |
· Steady blue—UID LED is activated. The UID LED can be activated by using the following methods: ¡ Press the UID button LED. ¡ Activate the UID LED from HDM. · Flashing blue: ¡ 1 Hz—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server. ¡ 4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds. · Off—UID LED is not activated. |
Ports
Table 4 Ports on the front panel
Port |
Type |
Description |
VGA connector |
DB-15 |
Connects a display terminal, such as a monitor or KVM device. |
USB connector |
USB 2.0/3.0 |
Connects the following devices: · USB flash drive. · USB keyboard or mouse. · USB optical drive for operating system installation. |
HDM dedicated management connector |
Type-C |
Connects a Type-C to USB adapter cable, which connects to a USB Wi-Fi adapter. |
Rear panel
Rear panel view
Figure 5 shows the rear panel view.
Figure 5 Rear panel components
Table 5 Rear panel description
Item |
Description |
1 |
PCIe riser bay 1: PCIe slots 1 through 7 |
|
2 |
PCIe riser bay 2: PCIe slots 8 through 14 |
|
3 |
PCIe riser bay 3: PCIe slots 15 through 18 |
|
4 |
PCIe riser bay 4: PCIe slots 19 through 22 |
|
5 |
Power supply 4 |
|
6 |
Power supply 2 |
|
7 |
Power supply 3 |
|
8 |
Power supply 1 |
|
9 |
OCP 3.0 network adapter or Serial&DSD module (slot 24)(optional) |
|
10 |
VGA connector |
|
11 |
Two USB 3.0 connectors |
|
12 |
HDM dedicated network port (1Gbps, RJ-45, default IP address 192.168.1.2/24) |
|
13 |
OCP 3.0 network adapter (slot 23)(optional) |
|
14 |
Serial label pull tab |
|
LEDs
Figure 6 shows the rear panel LEDs. Table 6 describes the status of the rear panel LEDs.
(1) UID LED |
(2) Link LED of the Ethernet port |
(3) Activity LED of the Ethernet port |
(4) Power supply LED for power supply 1 |
(5) Power supply LED for power supply 3 |
(6) Power supply LED for power supply 2 |
(7) Power supply LED for power supply 4 |
Table 6 LEDs on the rear panel
LED |
Status |
UID LED |
· Steady blue—UID LED is activated. The UID LED can be activated by using the following methods: ¡ Press the UID button LED. ¡ Enable UID LED from HDM. · Flashing blue: ¡ 1 Hz—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server. ¡ 4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds. · Off—UID LED is not activated. |
Link LED of the Ethernet port |
· Steady green—A link is present on the port. · Off—No link is present on the port. |
Activity LED of the Ethernet port |
· Flashing green (1 Hz)—The port is receiving or sending data. · Off—The port is not receiving or sending data. |
Power supply LED |
· Steady green—The power supply is operating correctly. · Flashing green (0.33 Hz)—The power supply is in standby state and does not output power. · Flashing green (2 Hz)—The power supply is updating its firmware. · Steady amber—Either of the following conditions exists: ¡ The power supply is faulty. ¡ The power supply does not have power input, but another power supply has correct power input. · Flashing amber (1 Hz)—An alarm has occurred on the power supply. · Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown. |
Ports
Table 7 Ports on the rear panel
Port |
Type |
Description |
VGA connector |
DB-15 |
Connects a display terminal, such as a monitor or KVM device. |
BIOS serial port |
DB-9 |
The BIOS serial port is used for the following purposes: · Log in to the server when the remote network connection to the server has failed. · Establish a GSM modem or encryption lock connection. NOTE: The serial port is on the Serial&DSD module. For more information, see "Serial & DSD module." |
USB connector |
USB 3.0 |
Connects the following devices: · USB flash drive. · USB keyboard or mouse. · USB optical drive for operating system installation. |
HDM dedicated network port |
RJ-45 |
Establishes a network connection to manage HDM from its Web interface. |
Power receptacle |
Standard single-phase |
Connects the power supply to the power source. |
System board
System board components
Figure 7 shows the system board layout.
Figure 7 System board components
Table 8 System board components
Item |
Description |
Mark |
1 |
OCP network adapter connector 2/Serial&DSD module connector |
OCP2 |
2 |
Fan connector for OCP 3.0 network adapter 2 |
OCP2 FAN |
3 |
TPM/TCM connector |
TPM |
4 |
PFR module connector |
PFRCPLD |
5 |
Server management module connector |
BMC |
6 |
PCIe riser connector 1 (x16 PCIe5.0, for processor 1) |
RISER1 PCIe X16 |
7 |
Fan connector for OCP 3.0 network adapter 1 |
OCP1 FAN |
8 |
OCP 3.0 network adapter connector 1 |
OCP1 |
9 |
SlimSAS connector 2 (x4 SATA) |
SATA PORT2 |
10 |
SlimSAS connector 1 (x4 SATA) |
SATA PORT1 |
11 |
M.2 connector (x2 SATA) |
M.2 PORT |
12 |
NVMe VROC module connector |
NVMe RAID KEY |
13 |
Front I/O connector |
RIGHT EAR |
14 |
Built-in USB 2.0 connector |
INTERNAL USB2.0 |
15 |
System battery |
- |
16 |
MCIO connector C1-P1C (x8 PCIe5.0, for processor 1) |
C1-P1C |
17 |
MCIO connector C1-P1A (x8 PCIe5.0, for processor 1) |
C1-P1A |
18 |
MCIO connector C1-P3A (x8 PCIe5.0, for processor 1) |
C1-P3A |
19 |
MCIO connector C1-P3C (x8 PCIe5.0, for processor 1) |
C1-P3C |
20 |
MCIO connector C1-P4C (x8 PCIe5.0, for processor 1) |
C1-P4C |
21 |
MCIO connector C1-P4A (x8 PCIe5.0, for processor 1) |
C1-P4A |
22 |
AUX connector 3 for the front drive backplane |
AUX3 |
23 |
AUX connector 2 for the front drive backplane |
AUX2 |
24 |
Power connector 5 for the front drive backplane |
PWR5 |
25 |
LCD smart management module connector |
DIAG LCD |
26 |
Temperature sensing module connector |
SENSOR1 |
27 |
Power connector 3 for the front drive backplane |
PWR3 |
28 |
Power connector 6 for the front drive backplane |
PWR6 |
29 |
AUX connector 5 for the front drive backplane |
AUX 5 |
30 |
AUX connector 6 for the front drive backplane |
AUX 6 |
31 |
AUX connector 4 for the front drive backplane |
AUX 4 |
32 |
MCIO connector C2-P4A (x8 PCIe5.0, for processor 2) |
C2-P4A |
33 |
MCIO connector C2-P4C (x8 PCIe5.0, for processor 2) |
C2-P4C |
34 |
Power connector 4 for the front drive backplane |
PWR4 |
35 |
MCIO connector C2-P3C (x8 PCIe5.0, for processor 2) |
C2-P3C |
36 |
Power connector 1 for the front drive backplane |
PWR1 |
37 |
MCIO connector C2-P3A (x8 PCIe5.0, for processor 2) |
C2-P3A |
38 |
Power connector 2 for the front drive backplane |
PWR2 |
39 |
AUX connector 1 for the front drive backplane |
AUX1 |
40 |
Chassis-open alarm module connector |
INTRUDER |
41 |
Front VGA and USB2.0 connector |
LEFT EAR |
42 |
Power connector 8 for the rear drive backplane |
PWR8 |
43 |
Signal connector for power supplies 3 and 4 |
PSU34 |
44 |
Power connector 7 for the rear drive backplane |
PWR7 |
45 |
Power connector for RISER3 and 4/GPU/OCP 3.0 network adapter 3 |
RISER & GPU POWER |
46 |
MCIO connector C2-P2A (x8 PCIe5.0, for processor 2) |
C2-P2A |
47 |
MCIO connector C2-P2C (x8 PCIe5.0, for processor 2) |
C2-P2C |
48 |
NCSI connector for OCP 3.0 network adapter 3 |
OCP3 |
49 |
AUX connector for PCIe riser card 4 |
RISER4 AUX |
50 |
AUX connector for PCIe riser card 3 |
RISER3 AUX |
51 |
PCIe riser connector 2 (x16 PCIe5.0, for processor 2) |
RISER2 PCIe X16 |
52 |
AUX connector 7 for the rear drive backplane |
AUX7 |
53 |
Built-in USB 2.0 connector |
INTERNAL USB2.0 |
54 |
AUX connector 8 for the rear drive backplane |
AUX8 |
55 |
Mid module connector |
- |
X |
System maintenance switch |
MAINTENANCE SW |
System maintenance switch
Figure 8 shows the system maintenance switch. Table 9 describes how to use the maintenance switch.
Figure 8 System maintenance switch
Table 9 System maintenance switch description
Item |
Description |
Remarks |
1 |
· Off (default)—HDM login requires the username and password of a valid HDM user account. · On—HDM login requires the default username and password. |
For security purposes, turn off the switch after you complete tasks with the default username and password as a best practice. |
5 |
· Off (default)—Normal server startup. · On—Restores the default BIOS settings at server startup. |
To restore the default BIOS settings, turn on and then turn off the switch. The server starts up with the default BIOS settings at the next startup. The server cannot start up when the switch is turned on. To avoid service data loss, stop running services and power off the server before turning on the switch. |
6 |
· Off (default)—Normal server startup. · On—Clears all passwords from the BIOS at server startup. |
If this switch is on, the server will clear all the passwords at each startup. Make sure you turn off the switch before the next server startup if you do not need to clear all the passwords. |
2, 3, 4, 7, and 8 |
Reserved for future use. |
N/A |
Processor mezzanine board components
Figure 9 shows the processor mezzanine board layout.
Figure 9 Processor mezzanine board components
Table 10 Processor mezzanine board components
Item |
Description |
Mark |
1 |
MCIO connector C4-P2C (x8 PCIe5.0, for processor 4) |
C4-P2C |
2 |
MCIO connector C4-P2A (x8 PCIe5.0, for processor 4) |
C4-P2A |
3 |
MCIO connector C4-P1A (x8 PCIe5.0, for processor 4) |
C4-P1A |
4 |
MCIO connector C4-P1C (x8 PCIe5.0, for processor 4) |
C4-P1C |
5 |
MCIO connector C4-P0C (x8 PCIe5.0, for processor 4) |
C4-P0C |
6 |
MCIO connector C4-P0A (x8 PCIe5.0, for processor 4) |
C4-P0A |
7 |
MCIO connector C3-P2C (x8 PCIe5.0, for processor 3) |
C3-P2C |
8 |
MCIO connector C3-P2A (x8 PCIe5.0, for processor 3) |
C3-P2A |
9 |
MCIO connector C3-P1A (x8 PCIe5.0, for processor 3) |
C3-P1A |
10 |
MCIO connector C3-P1C (x8 PCIe5.0, for processor 3) |
C3-P1C |
11 |
MCIO connector C3-P0C (x8 PCIe5.0, for processor 3) |
C3-P0C |
12 |
MCIO connector C3-P0A (x8 PCIe5.0, for processor 3) |
C3-P0A |
13 |
MCIO connector C3-P4A (x8 PCIe5.0, for processor 3) |
C3-P4A |
14 |
MCIO connector C3-P4C (x8 PCIe5.0, for processor 3) |
C3-P4C |
15 |
MCIO connector C3-P3C (x8 PCIe5.0, for processor 3) |
C3-P3C |
16 |
MCIO connector C3-P3A (x8 PCIe5.0, for processor 3) |
C3-P3A |
17 |
MCIO connector C4-P4A (x8 PCIe5.0, for processor 4) |
C4-P4A |
18 |
MCIO connector C4-P4C (x8 PCIe5.0, for processor 4) |
C4-P4C |
19 |
MCIO connector C4-P3C (x8 PCIe5.0, for processor 4) |
C4-P3C |
20 |
MCIO connector C4-P3A (x8 PCIe5.0, for processor 4) |
C4-P3A |
PCIe5.0 x8 description: · PCIe5.0—Fifth-generation signal speed. · x8—Bus bandwidth. |
DIMM slots
The system board and the processor mezzanine board each provide eight DIMM channels per processor, or 16 channels per board, as shown in Figure 10 and Figure 11, respectively. Each channel contains two DIMM slots.
Figure 10 System board DIMM slot layout
Figure 11 Processor mezzanine board DIMM slot layout
Appendix B Component specifications
For components compatible with the server and detailed component information, contact Technical Support.
About component model names
The model name of a hardware option in this document might differ slightly from its model name label.
A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR5-4800-32G-1Rx4 memory model represents memory module labels including UN-DDR5-4800-32G-1Rx4-R, UN-DDR5-4800-32G-1Rx4-F, and UN-DDR5-4800-32G-1Rx4-S, which have different prefixes and suffixes.
DIMMs
The server provides eight DIMM channels per processor, 32 channels in total. Each DIMM channel has two DIMM slots. For the physical layout of DIMM slots, see "DIMM slots."
DRAM DIMM rank classification label
A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.
To determine the rank classification of a DRAM DIMM, use the label attached to the DIMM, as shown in Figure 12.
Figure 12 DRAM DIMM rank classification label
Table 11 DIMM rank classification label description
Callout |
Description |
Remarks |
1 |
Capacity |
Options include: · 8GB. · 16GB. · 32GB. |
2 |
Number of ranks |
Options include: · 1R— One rank. · 2R—Two ranks. · 4R—Four ranks. · 8R—Eight ranks. |
3 |
Data width |
Options include: · ×4—4 bits. · ×8—8 bits. |
4 |
DIMM generation |
Only DDR4 is supported. |
5 |
Data rate |
Options include: · 2133P—2133 MHz. · 2400T—2400 MHz. · 2666V—2666 MHz. · 2933Y—2933 MHz. |
6 |
DIMM type |
Options include: · L—LRDIMM. · R—RDIMM. |
HDDs and SSDs
Drive numbering
The server provides drive slots at both the server front and the server rear, as shown in Figure 13 and Figure 14.
Figure 13 Drive numbering at the server front
Figure 14 Drive numbering at the server rear
Drive LEDs
The server supports SAS, SATA, and NVMe drives (including E1.S drives), of which SAS and SATA drives support hot swapping and NVMe drives support hot insertion and managed hot removal. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.
For more information about OSs that support hot insertion and managed hot removal of NVMe drives, contact Technical Support.
Figure 15 and Figure 16 show the locations of LEDs on a drive.
(1) Fault/UID LED |
(2) Present/Active LED |
(1) Fault/UID LED |
(2) Present/Active LED |
To identify the status of a SAS or SATA drive, use Table 12. To identify the status of an NVMe drive, use Table 13. To identify the status of an E1.S drive, use Table 14.
Table 12 SAS/SATA drive LED description
Fault/UID LED status |
Present/Active LED status |
Description |
Flashing amber (0.5 Hz) |
Steady green/Flashing green (4.0 Hz) |
A drive failure is predicted. As a best practice, replace the drive before it fails. |
Steady amber |
Steady green/Flashing green (4.0 Hz) |
The drive is faulty. Replace the drive immediately. |
Steady blue |
Steady green/Flashing green (4.0 Hz) |
The drive is operating correctly and is selected by the RAID controller. |
Off |
Flashing green (4.0 Hz) |
The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive. |
Off |
Steady green |
The drive is present but no data is being read or written to the drive. |
Off |
Off |
The drive is not securely installed. |
Table 13 NVMe drive LED description
Fault/UID LED status |
Present/Active LED status |
Description |
Flashing amber (0.5 Hz) |
Off |
The managed hot removal process is completed and the drive is ready for removal. |
Flashing amber (4 Hz) |
Off |
The drive is in hot insertion process. |
Flashing amber (0.5 Hz) |
Steady green/Flashing green (4.0 Hz) |
A drive predictive alarm is present. Replace the drive in time. |
Steady amber |
Steady green/Flashing green (4.0 Hz) |
The drive is faulty. Replace the drive immediately. |
Steady blue |
Steady green/Flashing green (4.0 Hz) |
The drive is operating correctly and selected by the RAID controller. |
Off |
Flashing green (4.0 Hz) |
The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive. |
Off |
Steady green |
The drive is present but no data is being read or written to the drive. |
Off |
Off |
The drive is not securely installed. |
Table 14 E1.S drive LED description
Fault/UID LED status |
Present/Active LED status |
Description |
Flashing amber (0.5 Hz) |
Off |
The drive has completed the managed hot removal process and can be removed directly. |
Flashing amber (4 Hz) |
Steady green/Flashing green (4 Hz) |
The drive is in hot insertion process or is selected by the RAID controller. |
Flashing amber (0.5 Hz) |
Steady green/Flashing green (4 Hz) |
A drive predictive alarm is present. Replace the drive in time. |
Steady amber |
Steady green/Flashing green (4 Hz) |
A drive error is present. Replace the drive immediately. |
Off |
Flashing green (4 Hz) |
The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive. |
Off |
Steady green |
The drive is present but no data is being read or written to the drive. |
Off |
Off |
The drive is not securely installed. |
Drive configurations
The server supports multiple drive configurations. For more information about drive configurations and their required storage controller and riser cards, see H3C UniServer R6900 G6 Server Drive Configurations and Cabling Guide.
Drive backplanes
The server supports the following types of drive backplanes:
· SAS/SATA drive backplanes—Support only SAS/SATA drives.
· UniBay drive backplanes—Support both SAS/SATA and NVMe drives. You must connect both SAS/SATA and NVMe data cables. The number of supported drives varies by drive cabling.
· X SAS/SATA+Y UniBay drive backplanes—Support SAS/SATA drives in all slots and support NVMe drives in certain slots.
¡ X: Number of slots supporting only SAS/SATA drives.
¡ Y: Number of slots supporting both SAS/SATA and NVMe drives.
For UniBay drive backplanes and X SAS/SATA+Y UniBay drive backplanes:
· The two drive types are supported only when both SAS/SATA and NVMe data cables are connected.
· The number of supported SAS/SATA drives and the number of supported NVMe drives vary by cable connection.
Front 8SFF SAS/SATA drive backplane
The PCA-BP-8SFF-2U-G6 8SFF SAS/SATA drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA drives.
Figure 17 8SFF SAS/SATA drive backplane
Figure 18 Component description
Item |
Description |
Mark |
1 |
x8 SlimSAS connector |
SAS PORT 1 |
2 |
AUX connector |
AUX |
3 |
Power connector |
PWR |
Front 8SFF UniBay drive backplane
The PCA-BP-8UniBay-2U-G6 8SFF UniBay drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA/NVMe drives.
Figure 19 8SFF UniBay drive backplane
Figure 20 Component description
Item |
Description |
Mark |
1 |
x8 SlimSAS connector |
SAS PORT |
2 |
AUX connector |
AUX |
3 |
MCIO connector B3/B4 (PCIe5.0 x8) |
NVMe B3/B4 |
4 |
Power connector |
POWER |
5 |
MCIO connector B1/B2 (PCIe5.0 x8) |
NVMe B1/B2 |
6 |
MCIO connector A3/A4 (PCIe5.0 x8) |
NVMe A3/A4 |
7 |
MCIO connector A1/A2 (PCIe5.0 x8) |
NVMe A1/A2 |
PCIe5.0 x8 description: · PCIe5.0: Fifth-generation signal speed. · x8: Bus bandwidth. |
Front 25SFF drive backplane (17SAS/SATA + 8UniBay)
The PCA-BP-25SFF-2U-G6 25SFF drive backplane can be installed at the server front to support 25 2.5-inch drives, including 17 SAS/SATA drives and 8 SAS/SATA/NVMe drives. The drive backplane integrates an Expander chip to manage 25 SAS/SATA drives through an x8 SlimSAS connector. The backplane also provides three downlink connectors to connect to other backplanes and manage more drives.
Figure 21 25SFF UniBay drive backplane
Figure 22 Component description
Item |
Description |
Mark |
1 |
x4 SlimSAS downlink connector 3 |
SAS EXP 3 |
2 |
x8 SlimSAS uplink connector for managing all drives connected to the drive backplane |
SAS PORT |
3 |
x8 SlimSAS downlink connector 2 |
SAS EXP 2 |
4 |
x4 SlimSAS downlink connector 1 |
SAS EXP 1 |
5 |
Power connector 1 |
PWR 1 |
6 |
Power connector 2 |
PWR 2 |
7 |
MCIO connector 4 (PCIe5.0 x8) |
NVMe 4 |
8 |
AUX connector |
AUX |
9 |
MCIO connector 3 (PCIe5.0 x8) |
NVMe 3 |
10 |
MCIO connector 2 (PCIe5.0 x8) |
NVMe 2 |
11 |
Power connector 3 |
PWR 3 |
12 |
MCIO connector 1 (PCIe5.0 x8) |
NVMe 1 |
PCIe5.0 x8 description: · PCIe5.0: Fifth-generation signal speed. · x8: Bus bandwidth. |
Rear 8E1.S drive backplane
The PCA-BP-8E1S-2U-G6 8E1.S drive backplane can be installed at the server rear to support eight E1.S drives each with a thickness of 15 mm (0.59 in).
Figure 23 8E1.S drive backplane
Figure 24 Component description
Item |
Description |
Mark |
1 |
AUX connector |
AUX |
2 |
MCIO connector A1/A2 (PCIe5.0 x8) |
EDSFF-A1/A2 |
3 |
Power connector 1 |
PWR 1 |
4 |
MCIO connector A3/A4 (PCIe5.0 x8) |
EDSFF-A3/A4 |
5 |
MCIO connector B1/B2 (PCIe5.0 x8) |
EDSFF-B1/B2 |
6 |
MCIO connector B3/B4 (PCIe5.0 x8) |
EDSFF-B3/B4 |
PCIe5.0 x8 description: · PCIe5.0: Fifth-generation signal speed. · x8: Bus bandwidth. |
Riser cards
The server supports the following riser cards:
· RC-7FHHL-4U-G6
· RC-3FHHL/1FHFL-4U-G6
· RC-2FHHL/2FHFL-4U-G6
· RC-4HHHL-R3-4U-G6
· RC-1FHHL/1FHFL-R3-4U-G6
· RC-2FHFL-R3-4U-G6
· RC-4HHHL-R4-4U-G6
RC-7FHHL-4U-G6
Figure 25 RC-7FHHL-4U-G6 riser card (1)
Figure 26 RC-7FHHL-4U-G6 riser card (2)
Table 15 Component description
Item |
Description |
1 |
PCIe5.0 x16 (8,4) slot 7/14 |
2 |
PCIe5.0 x16 (8,4) slot 6/13 |
3 |
PCIe5.0 x16 (8,4) slot 5/12 |
4 |
PCIe5.0 x16 (8,4) slot 4/11 |
5 |
PCIe5.0 x16 (8,4) slot 3/10 |
6 |
PCIe5.0 x16 (8,4) slot 2/9 |
7 |
PCIe5.0 x16 (8,4) slot 1/8 |
8 |
MCIO connector C2 |
9 |
MCIO connector C1 |
10 |
MCIO connector B1 |
11 |
MCIO connector A1 |
12 |
MCIO connector B2 |
13 |
MCIO connector A2 |
PCIe5.0 x16 (8,4) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4): Compatible bus bandwidth, including x8 and x4. |
|
NOTE: slot 1/8: When the riser card is installed in PCIe riser bay 1, this slot corresponds to PCIe slot 1. When the riser card is installed in PCIe riser bay 2, this slot corresponds to PCIe slot 8. This rule applies to all the other PCIe riser bays. For information about PCIe slots, see "Rear panel view." |
RC-3FHHL/1FHFL-4U-G6
Figure 27 RC-3FHHL/1FHFL-4U-G6 riser card (1)
Figure 28 RC-3FHHL/1FHFL-4U-G6 riser card (2)
Table 16 Component description
Item |
Description |
1 |
PCIe5.0 x16 (8,4) slot 7/14 |
2 |
GPU power connector 2 |
3 |
PCIe5.0 x16 (8,4) slot 6/13 |
4 |
GPU power connector 1 |
5 |
PCIe5.0 x16 (8,4) slot 4/11 |
6 |
PCIe5.0 x16 (16,8,4) slot 2/9 |
7 |
MCIO connector C2 |
8 |
MCIO connector C1 |
9 |
MCIO connector B1 |
10 |
MCIO connector A1 |
11 |
MCIO connector B2 |
12 |
MCIO connector A2 |
PCIe5.0 x16 (8,4) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4): Compatible bus bandwidth, including x8 and x4. |
|
NOTE: slot 2/9: When the riser card is installed in PCIe riser bay 1, this slot corresponds to PCIe slot 2. When the riser card is installed in PCIe riser bay 2, this slot corresponds to PCIe slot 9. This rule applies to all the other PCIe riser bays. For information about PCIe slots, see "Rear panel view." |
RC-2FHHL/2FHFL-4U-G6
Figure 29 RC-2FHHL/2FHFL-4U-G6 riser card (1)
Figure 30 RC-2FHHL/2FHFL-4U-G6 riser card (2)
Table 17 Component description
Item |
Description |
1 |
PCIe5.0 x16 (8,4) slot 7/14 |
2 |
GPU power connector 2 |
3 |
GPU power connector 1 |
4 |
PCIe5.0 x16 (16,8,4) slot 4/11 |
5 |
PCIe5.0 x16 (16,8,4) slot 2/9 |
6 |
MCIO connector B1 |
7 |
MCIO connector A1 |
8 |
MCIO connector B2 |
9 |
MCIO connector A2 |
PCIe5.0 x16 (8,4) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4): Compatible bus bandwidth, including x8 and x4. |
|
NOTE: slot 2/9: When the riser card is installed in PCIe riser bay 1, this slot corresponds to PCIe slot 2. When the riser card is installed in PCIe riser bay 2, this slot corresponds to PCIe slot 9. This rule applies to all the other PCIe riser bays. For information about PCIe slots, see "Rear panel view." |
RC-4HHHL-R3-4U-G6
Figure 31 RC-4HHHL-R3-4U-G6 riser card
Table 18 Component description
Item |
Description |
1 |
PCIe5.0 x16 (8,4) slot 18 |
2 |
PCIe5.0 x16 (8,4) slot 17 |
3 |
Riser&GPU power connector |
4 |
AUX connector |
5 |
PCIe5.0 x16 (8,4) slot 16 |
6 |
PCIe5.0 x16 (8,4) slot 15 |
PCIe5.0 x16 (8,4) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4): Compatible bus bandwidth, including x8 and x4. |
RC-1FHHL/1FHFL-R3-4U-G6
Figure 32 RC-1FHHL/1FHFL-R3-4U-G6 riser card
Table 19 Component description
Item |
Description |
1 |
PCIe5.0 x16 (16,8,4) slot 18 |
2 |
PCIe5.0 x16 (8,4) slot 16 |
3 |
Riser&GPU power connector |
4 |
AUX connector |
PCIe5.0 x16 (8,4) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4): Compatible bus bandwidth, including x8 and x4. |
RC-2FHFL-R3-4U-G6
Figure 33 RC-2FHFL-R3-4U-G6 riser card
Figure 34 Component description
Item |
Description |
1 |
PCIe5.0 x16 (16,8,4) slot 18 |
2 |
Riser&GPU power connector |
3 |
AUX connector |
4 |
PCIe5.0 x16 (16,8,4) slot 16 |
PCIe5.0 x16 (8,4) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4): Compatible bus bandwidth, including x8 and x4. |
RC-4HHHL-R4-4U-G6
Figure 35 RC-4HHHL-R4-4U-G6 riser card
Figure 36 Component description
Item |
Description |
1 |
PCIe5.0 x16 (8,4) slot 22 |
2 |
PCIe5.0 x16 (8,4) slot 21 |
3 |
Riser&GPU power connector |
4 |
AUX connector |
5 |
PCIe5.0 x16 (8,4) slot 20 |
6 |
PCIe5.0 x16 (8,4) slot 19 |
PCIe5.0 x16 (8,4) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4): Compatible bus bandwidth, including x8 and x4. |
Power expander module
Figure 37 Power expander module
Figure 38 Component description
Item |
Description |
1 |
Signal connector |
2 |
Power expander module connector |
UPI Mezz module
If the processor mezzanine board is not installed in the server, install the UPI Mezz module to interconnect the two processors on the system board. When you install the UPI Mezz module, align the connectors marked by the blue frames with the mid module connector. For more information, see "System board components."
LCD smart management module
An LCD smart management module displays basic server information, operating status, and fault information, and provides diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the LCD module in conjunction with the event logs generated in HDM.
Figure 41 LCD smart management module
Table 20 Component description
Item |
Name |
Description |
1 |
Mini-USB connector |
Used for upgrading the firmware of the LCD module. |
2 |
LCD module cable |
Connects the LCD module to the system board of the server. For information about the LCD smart management module connector on the system board, see "System board components." |
3 |
LCD module shell |
Protects and secures the LCD screen. |
4 |
LCD screen |
Displays basic server information, operating status, and fault information. |
Fan modules
The server supports four hot-swappable fan modules. Each fan module includes two fans, and each fan includes two rotors. The fan rotors support N+1 redundancy. That is, the server can operate correctly when a single fan rotor fails. Figure 42 shows the layout of the fan modules in the chassis.
The server adopts intelligent fan energy-saving and noise-reduction technology, which combines various AI algorithms. This can monitor real-time temperature, power, and other status information of the server, analyze the optimal fan adjustment policy, and dynamically adjust fan duty cycle settings to meet the energy-saving and noise-reduction requirements.
During system POST and operation, the server will be gracefully powered off through HDM if the temperature detected by any sensor in the server reaches the critical threshold. The server will be powered off directly if the temperature of any key components such as processors exceeds the upper threshold. For more information about the thresholds and detected temperatures, access the HDM Web interface and see HDM2 online help.
PCIe slots
Riser cards and rear 8E1.S drive modules can be installed at the server rear. PCIe slot numbers vary by server configuration.
Figure 43 PCIe slot numbering for rear riser cards
Obtaining B/D/F information
You can obtain B/D/F information by using one of the following methods:
· BIOS log—Search for the dumpiio keyword in the BIOS log.
· UEFI shell—Execute the pci command. For information about how to execute the command, execute the help pci command.
· Operating system—The obtaining method varies by OS.
¡ For Linux, execute the lspci command.
If Linux does not support the lspci command by default, install the pciutils package first, for example, by executing the yum install pciutils command. For a command example, see the sketch following this list.
¡ For Windows, install the pciutils package, and then execute the lspci command.
¡ For VMware, execute the lspci command.
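For example, the following minimal command sketch shows how you might confirm the B/D/F of a device on a Linux host. The bus address 0000:31:00.0 is only an illustration; substitute the address reported for your own device.
# List all PCIe devices with their B/D/F addresses.
lspci
# Display verbose information for one device, selected by its B/D/F address.
lspci -vvv -s 0000:31:00.0
# If lspci is not available, install the pciutils package first (on yum-based systems).
yum install pciutils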
Appendix C Managed removal of OCP network adapters
Before you begin
Before you perform a managed removal of an OCP network adapter, perform the following tasks:
· Use the OS compatibility query tool at http://www.h3c.com/en/home/qr/default.htm?id=66 to obtain operating systems that support managed removal of OCP network adapters.
· Make sure the HDM2 version is 1.55 or higher and the CPLD1 and CPLD2 versions are V003.
Performing a hot removal
This section uses an OCP network adapter in slot 23 as an example.
To perform a hot removal (see also the command summary following this procedure):
1. Access the operating system.
2. Execute the dmidecode -t 9 command to search for the bus address of the OCP network adapter. As shown in Figure 44, the bus address of the OCP network adapter in slot 23 is 0000:31:00.0.
Figure 44 Searching for the bus address of an OCP network adapter by slot number
3. Execute the echo 0 > /sys/bus/pci/slots/slot number/power command, where slot number represents the number of the slot where the OCP network adapter resides.
Figure 45 Executing the echo 0 > /sys/bus/pci/slots/slot number/power command
4. Identify whether the OCP network adapter has been disconnected:
¡ Observe the OCP network adapter LED. If the LED is off, the OCP network adapter has been disconnected.
¡ Execute the lspci -vvv -s 0000:31:00.0 command. If no output is displayed, the OCP network adapter has been disconnected.
Figure 46 Identifying OCP network adapter status
5. Replace the OCP network adapter.
6. Identify whether the OCP network adapter has been connected:
¡ Observe the OCP network adapter LED. If the LED is on, the OCP network adapter has been connected.
¡ Execute the lspci -vvv -s 0000:31:00.0 command. If an output is displayed, the OCP network adapter has been connected.
Figure 47 Identifying OCP network adapter status
7. Identify whether any exception exists. If an exception exists, contact H3C Support.
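The following is a minimal command summary of the managed removal procedure above, assuming a Linux host, an OCP network adapter in slot 23, and a bus address of 0000:31:00.0. The slot number and bus address are examples only; obtain the actual values from the dmidecode output for your server.
# 1. Map the OCP network adapter slot to its bus address.
dmidecode -t 9
# 2. Power off the slot to disconnect the adapter (slot 23 in this example).
echo 0 > /sys/bus/pci/slots/23/power
# 3. Verify that the adapter is disconnected (no output means it is disconnected).
lspci -vvv -s 0000:31:00.0
# 4. After replacing the adapter, verify that it is connected again (output is displayed).
lspci -vvv -s 0000:31:00.0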
Appendix D Environment requirements
About environment requirements
The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.
Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance. In a real data center, the server cooling performance might decrease because of adverse external factors, including poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.
General environment requirements
Item |
Specifications |
Operating temperature |
Minimum: 5°C (41°F) Maximum: · For drive configurations other than the 8SFF drive configuration: 40°C (104°F) · For the 8SFF drive configuration: 45°C (113°F) The maximum temperature varies by hardware option presence. For more information, see "Operating temperature requirements." |
Storage temperature |
–40°C to +70°C (–40°F to +158°F) |
Operating humidity |
8% to 90%, noncondensing |
Storage humidity |
5% to 95%, noncondensing |
Operating altitude |
–60 m to +3000 m (–196.85 ft to +9842.52 ft) The allowed maximum temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) of altitude above 900 m (2952.76 ft). For a calculation example, see the note following this table. |
Storage altitude |
–60 m to +5000 m (–196.85 ft to +16404.20 ft) |
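As an example of the altitude-based derating, for a configuration with a 40°C (104°F) maximum operating temperature, the allowed maximum at an altitude of 1900 m (6233.60 ft) is approximately 40°C − 0.33°C × (1900 − 900)/100 = 36.7°C (98.1°F).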
Operating temperature requirements
General guidelines
When a fan rotor fails, the maximum server operating temperature decreases by 5°C (9°F).
If you install processors with a TDP of 205 W or higher in the server, the server performance might decrease.
The DPS-1600AB-13 R power supply operates correctly only at an operating temperature of 30°C (86°F) or lower.
The GPU-V100S-32G module can be installed only in the server that uses 8SFF drive configuration.
With GPU modules installed in the server, do not use HDD-2.4T-SAS3-10K-SFF, HDD-2.4T-SAS-12G-10K-SFF, HDD-1.8T-SAS3-10K-SFF, or HDD-1.8T-SAS-12G-10K-SFF drives in the server. Otherwise, the drive performance might decrease.
Any drive configuration except for 8SFF drive configuration
Table 21 Operating temperature requirements
Maximum server operating temperature |
Hardware options |
30°C (86°F) |
All hardware options are supported. |
35°C (95°F) |
DPS-1600AB-13 R power supply is not supported. |
40°C (104°F) |
The following hardware options are not supported: · Processors with a TDP of more than 205 W. · DCPMMs. · NVMe SSD PCIe accelerator modules. · NVMe drives. · SATA M.2 SSDs. · GPU modules. · DPS-1600AB-13 R power supply. |
8SFF drive configuration
In this configuration, the 8SFF drives are installed in slots 0 to 7. For more information about drive slots, see "Front panel view of the server."
Table 22 Operating temperature requirements
Maximum server operating temperature |
Hardware options |
30°C (86°F) |
All hardware options are supported. |
35°C (95°F) |
With a GPU-V100S-32G module installed in the server, only processors with a TDP of 165 W or lower are supported. |
40°C (104°F) |
The following hardware options are not supported: · DCPMMs. · NVMe SSD PCIe accelerator modules. · NVMe drives. · SATA M.2 SSDs. · GPU modules. · DPS-1600AB-13 R power supply. |
45°C (113°F) |
The following hardware options are not supported: · DCPMMs. · NVMe SSD PCIe accelerator modules. · NVMe drives. · SATA M.2 SSDs. · GPU modules. · DPS-1600AB-13 R power supply. · Processors with a TDP of more than 165 W. |
Appendix E Product recycling
New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualification are contracted to New H3C to process the recycled hardware in an environmentally responsible way.
For product recycling services, contact New H3C at
· Tel: 400-810-0504
· E-mail: [email protected]
· Website: http://www.h3c.com
Appendix F Glossary
Term |
Description |
B |
|
BIOS |
Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's management module. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality. |
C |
|
CPLD |
Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits. |
E |
|
Ethernet adapter |
An Ethernet adapter, also called a network interface card (NIC), connects the server to the network. |
F |
|
UNISYSTEM |
Fast Intelligent Scalable Toolkit provided by H3C for easy and extensible server management. It guides users through quick server configuration and provides an API that allows users to develop their own management tools. |
G |
|
GPU module |
Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance. |
H |
|
HDM |
Hardware Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server. |
Hot swapping |
A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting system operation. |
K |
|
KVM |
KVM is a management method that allows remote users to use their local video display, keyboard, and mouse to monitor and control the server. |
N |
|
NVMe VROC module |
A module that works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives. |
R |
|
RAID |
Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage and security performance. |
Redundancy |
A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails. |
S |
|
Security bezel |
A locking bezel mounted to the front of a server to prevent unauthorized access to modules such as hard drives. |
U |
|
U |
A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks. |
UniBay drive backplane |
A UniBay drive backplane supports both SAS/SATA and NVMe drives. |
V |
|
VMD |
VMD provides hot removal, management and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability. |
Appendix G Acronyms
Acronym |
Full name |
B |
|
BIOS |
Basic Input/Output System |
C |
|
CMA |
Cable Management Arm |
CPLD |
Complex Programmable Logic Device |
D |
|
DCPMM |
Data Center Persistent Memory Module |
DDR |
Double Data Rate |
DIMM |
Dual In-Line Memory Module |
DRAM |
Dynamic Random Access Memory |
G |
|
GPU |
Graphics Processing Unit |
H |
|
HBA |
Host Bus Adapter |
HDD |
Hard Disk Drive |
HDM |
Hardware Device Management |
I |
|
IDC |
Internet Data Center |
iFIST |
integrated Fast Intelligent Scalable Toolkit |
K |
|
KVM |
Keyboard, Video, Mouse |
L |
|
LRDIMM |
Load Reduced Dual Inline Memory Module |
N |
|
NCSI |
Network Controller Sideband Interface |
NVMe |
Non-Volatile Memory Express |
P |
|
PCIe |
Peripheral Component Interconnect Express |
POST |
Power-On Self-Test |
R |
|
RDIMM |
Registered Dual Inline Memory Module |
S |
|
SAS |
Serial Attached Small Computer System Interface |
SATA |
Serial ATA |
SD |
Secure Digital |
SDS |
Secure Diagnosis System |
SFF |
Small Form Factor |
sLOM |
Small form factor Local Area Network on Motherboard |
SSD |
Solid State Drive |
T |
|
TCM |
Trusted Cryptography Module |
TDP |
Thermal Design Power |
TPM |
Trusted Platform Module |
U |
|
UID |
Unit Identification |
UPI |
Ultra Path Interconnect |
UPS |
Uninterruptible Power Supply |
USB |
Universal Serial Bus |
V |
|
VROC |
Virtual RAID on CPU |
VMD |
Volume Management Device |