Contents
Appendix A Server specifications
Server models and chassis view
Front panel view of the server
Appendix B Component specifications
DRAM DIMM rank classification label
Front 24LFF drive backplane (18SAS/SATA+6UniBay, UN-BP-24LFF-P36-G6)
Front 24LFF drive backplane (18SAS/SATA+6UniBay, UN-BP-24LFF-P48-G6)
Front 24LFF drive backplane (18SAS/SATA+6UniBay, UN-BP-24LFF-P68-G6)
Rear 12LFF SAS/SATA drive backplane
Rear 2LFF SAS/SATA drive backplane
Rear 4LFF SAS/SATA drive backplane
Mid 4LFF SAS/SATA drive backplane
Rear 4SFF UniBay drive backplane
Rear 2SFF UniBay drive backplane
Rear 2SFF SAS/SATA drive backplane
Rear 8E1.S drive backplane
Appendix C Managed removal of OCP network adapters
Appendix D Environment requirements
About environment requirements
General environment requirements
Operating temperature requirements
Appendix A Server specifications
The information in this document might differ from your product if it contains custom configuration options or features.
Figures in this document are for illustration only.
Server models and chassis view
H3C UniServer R4300 G6 servers are 4U dual-processor servers developed by H3C based on Intel's new-generation Eagle Stream platform. The server is suitable for a wide range of new-generation infrastructure scenarios, including cloud computing, Internet, IDC, and enterprise markets. The strong storage performance of the R4300 G6 meets comprehensive demands for high storage density, efficient data computing, and linear expansion, making it especially suitable for distributed storage, big data, and backup and archival applications in industries such as government, security, carriers, Internet, and enterprise. In addition, with efficient energy-saving technology, the server can help customers build green, low-carbon data centers.
Figure 1 Chassis view
Technical specifications
Item |
Specifications |
Dimensions (H × W × D) |
· Without a security bezel: 174.8 × 447 × 781 mm (6.88 × 17.60 × 30.75 in) · With a security bezel: 174.8 × 447 × 809 mm (6.88 × 17.60 × 31.85 in) |
Max. weight |
68 kg (149.91 lb) |
Power consumption |
Power consumption parameters vary by configuration scheme. For more information, use the query tool for power consumption of the server at http://www.h3c.com/en/home/qr/default.htm?id=291. |
Processors |
· 2 × fourth-generation Xeon processors: ¡ Maximum 350 W power consumption per processor ¡ Processor-integrated memory controller, supporting eight DIMM channels ¡ Processor-integrated PCIe controller, supporting PCIe5.0, with each processor providing 80 PCIe lanes ¡ Four-channel UPI bus interconnection · Supports Montage Jintide processors For more information about processors, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66. |
Memory |
· Up to 32 DDR5 DIMMs · Data rates up to 4800 MT/s · Supports RDIMMs · Supports a maximum of 8 TB of memory with two processors |
Storage controllers |
· HBA and smart storage controllers in standard PCIe slots · NVMe RAID module |
Network adapters |
Network adapters in standard PCIe slots, including 10G/25G/100G/200G network adapters and 100G/200G/400G IB cards |
Chipset |
Intel C741 |
Network connectors |
· 1 × embedded 1 Gbps HDM dedicated management port · Up to two OCP 3.0 network adapters (OCP 3.0 network adapters support NCSI and hot swapping) |
I/O connectors |
· Connectors at the server front: ¡ 1 × USB 3.0 connector (on the right chassis ear) ¡ 1 × USB 2.0 connector (on the left chassis ear) ¡ 1 × VGA connector (on the left chassis ear) ¡ 1 × HDM dedicated management port (on the left chassis ear) · Connectors at the server rear: ¡ 2 × USB 3.0 connectors ¡ 1 × USB 2.0 connector (on the system board) ¡ 1 × RJ-45 HDM dedicated network port ¡ 1 × VGA connector ¡ 1 × serial port (available only when the Serial & DSD module is used) |
Expansion slots |
10 × standard PCIe 5.0 slots and 2 × dedicated OCP 3.0 slots |
Optical drives |
External USB optical drives |
Management |
· HDM agentless management tool (with independent management port) · H3C iFIST/UniSystem management software · (Optional.) U-Center data center management platform |
Security |
· Security chassis · TCM/TPM · Dual-factor authentication |
Power supplies |
· (Optional.) 800W/1300W/1600W/2000W/2400W/2700W Platinum power supplies in 1+1 redundancy · (Optional.) –48V/336V Titanium DC power supplies in 1+1 redundancy, V-level power supplies certified by CQC31-46128 · Support hot swapping |
Standards |
CCC, environmental labeling |
Components
Figure 2 R4300 G6 server components
Item |
Description |
(1) Chassis access panel |
N/A |
(2) OCP network adapter |
Network adapter installed onto the OCP network adapter connector on the system board. |
(3) Processor heatsink |
Cools the processor. |
(4) Processor |
Integrates memory and PCIe controllers to provide data processing capabilities for the server. |
(5) Storage controller |
Provides RAID capability to SAS/SATA drives, including RAID configuration and RAID scale-up. It supports online upgrade of the controller firmware and remote configuration. |
(6) Standard PCIe network adapter |
Installed in a standard PCIe slot to provide network ports. |
(7) Riser card |
Provides PCIe slots. |
(8) Memory |
Stores computing data and data exchanged with external storage temporarily. Both DDR5 and PMem200 are supported. |
(9) Processor socket cover |
Installed over an empty processor socket to protect pins in the socket. |
(10) System board |
One of the most important parts of a server, on which multiple components are installed, such as processor, memory, and fan. It is integrated with basic server components, including the BIOS chip and PCIe connectors. |
(11) Rear drive backplane |
Provides power and data channels for drives at the server rear. |
(12) Riser card blank |
Installed on an empty PCIe riser connector to ensure good ventilation. |
(13) Rear SFF drive cage |
Installed at the server rear to accommodate SFF drives. |
(14) Rear LFF drive cage |
Installed at the server rear to accommodate LFF drives. |
(15) Power supply |
Supplies power to the server. The power supplies support hot swapping and 1+1 redundancy. |
(16) Chassis |
Accommodates all components. |
(17) Chassis ears |
Attach the server to the rack. The right ear is integrated with the front I/O component and the USB 3.0 connector, and the left ear is integrated with the VGA connector, dedicated management connector, and USB 2.0 connector. |
(18) Front drive backplane |
Provides power and data channels for drives at the server front. |
(19) E1.S drive cage |
Accommodates E1.S drives. |
(20) Drive |
Provides data storage space. Drives support hot swapping. The server supports SSD and HDD drives, and multiple drive connectors such as SAS, SATA, M.2, and PCIe. |
(21) Supercapacitor holder |
Secures a supercapacitor in the chassis. |
(22) Supercapacitor |
Supplies power to the flash card on the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when a power outage occurs. |
(23) SATA M.2 SSD |
Provides data storage space for the server. |
(24) NVMe VROC module |
Works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives. |
(25) M.2 expander module |
Provides SATA M.2 SSD slots. |
(26) GPU module |
Provides computing services such as graphics processing and AI. |
(27) Fan |
Helps server ventilation. Fans support hot swapping and N+1 redundancy. |
(28) Fan cage |
Accommodates fan modules. |
(29) Serial & DSD module |
Provides one serial port and two SD card slots. |
(30) Encryption module |
Provides encryption services for the server to enhance data security. |
(31) System battery |
Supplies power to the system clock to ensure system time correctness. |
(32) Chassis open-alarm module |
Detects if the access panel is removed. The detection result can be displayed from the HDM Web interface. |
(33) Air baffle |
Provides ventilation aisles for processor heatsinks and memory modules and provides support for the supercapacitor. |
Front panel
Front panel view of the server
Figure 3 Front panel
Table 1 Front panel description
Item |
Description |
1 |
NVMe drives (0 to 5) (optional) |
2 |
USB 3.0 connector |
3 |
Serial label pull tab |
4 |
Dedicated management connector |
5 |
USB 2.0 connector |
6 |
VGA connector |
LEDs and buttons
Figure 4 Front panel LEDs and buttons
Table 2 LEDs and buttons on the front panel
Button/LED |
Status |
Power on/standby button and system power LED |
· Steady green—The system has started. · Flashing green (1 Hz)—The system is starting. · Steady amber—The system is in standby state. · Off—No power is present. Possible reasons: ¡ No power source is connected. ¡ No power supplies are present. ¡ The installed power supplies are faulty. ¡ The system power cords are not connected correctly. |
OCP 3.0 network adapter Ethernet port LED |
· Steady green—A link is present on a port of an OCP 3.0 network adapter. · Flashing green (1 Hz)—A port on an OCP 3.0 network adapter is receiving or sending data. · Off—No link is present on any port of an OCP 3.0 network adapter. |
Health LED |
· Steady green—The system is operating correctly or a minor alarm is present. · Flashing green (4 Hz)—HDM is initializing. · Flashing amber (1 Hz)—A major alarm is present. · Flashing red (1 Hz)—A critical alarm is present. If a system alarm is present, log in to HDM to obtain more information about the system running status. |
UID button LED |
· Steady blue—UID LED is activated. The UID LED can be activated by using the following methods: ¡ Press the UID button LED. ¡ Activate the UID LED from HDM. · Flashing blue: ¡ 1 Hz—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server. ¡ 4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds. · Off—UID LED is not activated. |
Security bezel light
The security bezel provides hardened security and uses lighting effects to indicate operation and health status, helping with inspection and fault location. The default lighting effect is shown in Figure 5.
Table 3 Security bezel effect light
System status |
Light status |
Standby |
Steady white: The system is in standby state. |
Startup |
· Beads light up white in sequence from the middle—POST is in progress. · Beads light up white from the middle three times—POST has finished. |
Running |
· Breathing white (gradient at 0.2 Hz)—Normal state. The percentage of beads that light up from the middle toward the two sides of the security bezel indicates the system load: ¡ No load—Less than 10%. ¡ Light load—10% to 50%. ¡ Middle load—50% to 80%. ¡ Heavy load—More than 80%. · Breathing white (gradient at 1 Hz)—A pre-alarm is present. · Flashing amber (1 Hz)—A major alarm is present. · Flashing red (1 Hz)—A critical alarm is present. |
UID |
· All beads flash white (1 Hz)—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server. · Some beads flash white (1 Hz)—HDM is restarting. |
Ports
Table 4 Ports on the front panel
Port |
Type |
Description |
VGA connector |
DB-15 |
Connects a display terminal, such as a monitor or KVM device. |
USB connector |
USB 3.0/2.0 |
Connects the following devices: · USB flash drive. · USB keyboard or mouse. · USB optical drive for operating system installation. |
Dedicated management connector |
Type-C |
Connects a Type-C to USB adapter cable, which connects to a USB Wi-Fi adapter. NOTE: The server supports only Xiaomi USB Wi-Fi adapters. |
Rear panel
Rear panel view
Figure 6 Rear panel components
Table 5 Rear panel description
Item |
Description |
1 |
PCIe riser bay 1: PCIe slots 1 through 3 |
2 |
PCIe riser bay 2: PCIe slots 4 through 6 |
3 |
PCIe riser bay 3: PCIe slots 7 and 8 |
4 |
PCIe riser bay 4: PCIe slots 9 and 10 |
5 |
Power supply 2 |
6 |
Power supply 1 |
7 |
OCP 3.0 network adapter/Serial & DSD module (in slot 17) (optional) |
8 |
VGA connector |
9 |
Two USB 3.0 connectors |
10 |
HDM dedicated network port (1 Gbps, RJ-45, default IP address 192.168.1.2/24) |
11 |
OCP 3.0 network adapter (in slot 16) (optional) |
12 |
Serial label pull tab |
LEDs
Figure 7 Rear panel LEDs
(1) Power supply LED for power supply 2 |
(2) Power supply LED for power supply 1 |
(3) Activity LED of the Ethernet port |
(4) Link LED of the Ethernet port |
(5) UID LED |
Table 6 LEDs on the rear panel
LED |
Status |
UID LED |
· Steady blue—UID LED is activated. The UID LED can be activated by using the following methods: ¡ Press the UID button LED. ¡ Enable UID LED from HDM. · Flashing blue: ¡ 1 Hz—The firmware is being upgraded or the system is being managed from HDM. ¡ 4 Hz—HDM is restarting. To restart HDM, press the UID button LED for at least 8 seconds. · Off—UID LED is not activated. |
Link LED of the Ethernet port |
· Steady green—A link is present on the port. · Off—No link is present on the port. |
Activity LED of the Ethernet port |
· Flashing green (1 Hz)—The port is receiving or sending data. · Off—The port is not receiving or sending data. |
Power supply LED |
· Steady green—The power supply is operating correctly. · Flashing green (0.33 Hz)—The power supply is in standby state and does not output power. · Flashing green (2 Hz)—The power supply is updating its firmware. · Steady amber—Either of the following conditions exists: ¡ The power supply is faulty. ¡ The power supply does not have power input, but another power supply has correct power input. · Flashing amber (1 Hz)—An alarm has occurred on the power supply. · Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown. |
Ports
Table 7 Ports on the rear panel
Port |
Type |
Description |
VGA connector |
DB-15 |
Connects a display terminal, such as a monitor or KVM device. |
BIOS serial port |
RJ-45 |
The BIOS serial port is used for the following purposes: · Log in to the server when the remote network connection to the server has failed. · Establish a GSM modem or encryption lock connection. |
USB connector |
USB 3.0 |
Connects the following devices: · USB flash drive. · USB keyboard or mouse. · USB optical drive for operating system installation. |
HDM dedicated network port |
RJ-45 |
Establishes a network connection to manage HDM from its Web interface. |
Power receptacle |
Standard single-phase |
Connects the power supply to the power source. |
System board
System board components
Figure 8 System board components
Table 8 System board components
No. |
Description |
Mark |
1 |
PCIe riser connector 1 (for processor 1) |
RISER1 PCIe X16 |
2 |
Server management module slot |
BMC |
3 |
Fan connector 1 for the OCP 3.0 network adapter |
OCP1 FAN |
4 |
OCP 3.0 network adapter connector 1 |
OCP1 |
5 |
SlimSAS connector 1 (x4 SATA or M.2 SSD) |
M.2&SATA PORT1 |
6 |
AUX connector 7 for the mid drive backplane |
AUX7 |
7 |
M.2 SSD AUX connector |
M.2 AUX |
8 |
Front I/O connector |
RIGHT EAR |
9 |
Fan connector 4 |
J245 |
10 |
LCD smart management module connector |
DIAG LCD |
11 |
MCIO connector C1-P4A (for processor 1) |
C1-P4A |
12 |
MCIO connector C1-P4C (for processor 1) |
C1-P4C |
13 |
Fan connector 3 |
J104 |
14 |
MCIO connector C1-P3C (for processor 1) |
C1-P3C |
15 |
MCIO connector C1-P3A (for processor 1) |
C1-P3A |
16 |
Power connector 3 for front drive backplane |
PWR3 |
17 |
Power connector 2 for front drive backplane |
PWR2 |
18 |
Power connector 1 for front drive backplane |
PWR1 |
19 |
Fan connector 2 |
J94 |
20 |
MCIO connector C2-P4A (for processor 2) |
C2-P4A |
21 |
AUX connector 3 for front drive backplane |
AUX3 |
22 |
MCIO connector C2-P4C (for processor 2) |
C2-P4C |
23 |
MCIO connector C2-P3C (for processor 2) |
C2-P3C |
24 |
Fan connector 1 |
J96 |
25 |
AUX connector 2 for front drive backplane |
AUX2 |
26 |
AUX connector 1 for front drive backplane |
AUX1 |
27 |
MCIO connector C2-P3A (for processor 2) |
C2-P3A |
28 |
Power connector 7 for rear drive backplane |
PWR7 |
29 |
Chassis-open alarm module connector |
INTRUDER |
30 |
Power connector 6 for rear drive backplane |
PWR6 |
31 |
Front VGA and USB 2.0 connector |
LEFT EAR |
32 |
Power connector 4 for rear drive backplane |
PWR4 |
33 |
Power connector 5 for rear drive backplane |
PWR5 |
34 |
MCIO connector C2-P2C (for processor 2) |
C2-P2C |
35 |
AUX connector 4 for rear drive backplane |
AUX4 |
36 |
MCIO connector C2-P2A (for processor 2) |
C2-P2A |
37 |
MCIO connector C2-P1C (for processor 2) |
C2-P1C |
38 |
MCIO connector C2-P1A (for processor 2) |
C2-P1A |
39 |
AUX connector 5 for rear drive backplane |
AUX5 |
40 |
Riser card AUX connector 6 |
AUX6 |
41 |
PCIe riser connector 2 (for processor 2) |
RISER2 PCIe X16 |
42 |
NVMe VROC module connector |
NVMe RAID KEY |
43 |
TPM/TCM connector |
TPM |
44 |
Fan connector 2 for OCP 3.0 network adapter |
OCP2 FAN |
45 |
OCP 3.0 network adapter connector/Serial & DSD module connector |
OCP2&DSD&UART CARD |
46 |
Riser card AUX connector 8 |
AUX8 |
47 |
Embedded USB 2.0 connector |
INTERNAL USB2.0 |
48 |
System battery |
- |
49 |
MCIO connector C1-P2C (for processor 1) |
C1-P2C |
50 |
MCIO connector C1-P2A (for processor 1) |
C1-P2A |
X |
System maintenance switch |
MAINTENANCE SW |
System maintenance switch
Figure 9 shows the system maintenance switch. Table 9 describes how to use the maintenance switch.
Figure 9 System maintenance switch
Table 9 System maintenance switch description
Item |
Description |
Remarks |
1 |
· Off (default)—HDM login requires the username and password of a valid HDM user account. · On—HDM login requires the default username and password. |
For security purposes, turn off the switch after you complete tasks with the default username and password as a best practice. |
5 |
· Off (default)—Normal server startup. · On—Restores the default BIOS settings. |
To restore the default BIOS settings, turn on and then turn off the switch. The server starts up with the default BIOS settings at the next startup. The server cannot start up when the switch is turned on. To avoid service data loss, stop running services and power off the server before turning on the switch. |
6 |
· Off (default)—Normal server startup. · On—Clears all passwords from the BIOS at server startup. |
If this switch is on, the server will clear all the passwords at each startup. Make sure you turn off the switch before the next server startup if you do not need to clear all the passwords. |
2, 3, 4, 7, and 8 |
Reserved for future use. |
N/A |
DIMM slots
The system board provides eight DIMM channels per processor, and 16 channels in total, as shown in Figure 10. Each channel contains two DIMM slots.
Figure 10 System board DIMM slot layout
Appendix B Component specifications
For components compatible with the server and detailed component information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.
About component model names
The model name of a hardware option in this document might differ slightly from its model name label.
A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR5-4800-32G-1Rx4 memory model represents memory module labels including UN-DDR5-4800-32G-1Rx4-R, UN-DDR5-4800-32G-1Rx4-F, and UN-DDR5-4800-32G-1Rx4-S, which have different prefixes and suffixes.
DIMMs
The server provides eight DIMM channels per processor and each channel has two DIMM slots. If the server has one processor, the total number of DIMM slots is 16. If the server has two processors, the total number of DIMM slots is 32. For the physical layout of DIMM slots, see "DIMM slots."
DRAM DIMM rank classification label
A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.
To determine the rank classification of a DIMM, use the label attached to the DIMM, as shown in Figure 11. The meanings of the DDR DIMM rank classification labels are similar, and this section uses the label of a DDR5 DIMM as an example.
Figure 11 DDR DIMM rank classification label
Table 10 DIMM rank classification label description
Callout |
Description |
Remarks |
1 |
Capacity |
Options include: · 8GB. · 16GB. · 32GB. |
2 |
Number of ranks |
Options include: · 1R—One rank (Single-Rank). · 2R—Two ranks (Dual-Rank). A 2R DIMM is equivalent to two 1R DIMMs. · 4R—Four ranks (Quad-Rank). A 4R DIMM is equivalent to two 2R DIMMs. · 8R—Eight ranks (8-Rank). An 8R DIMM is equivalent to two 4R DIMMs. |
3 |
Data width |
Options include: · ×4—4 bits. · ×8—8 bits. |
4 |
DIMM generation |
DDR5 |
5 |
Data rate |
Options include: · 2666V—2666 MHz. · 2933Y—2933 MHz. · 3200AA—3200 MHz. · 4800—4800 MHz. |
6 |
DIMM type |
Options include: · L—LRDIMM. · R—RDIMM. |
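For example, reading the callouts in order, a hypothetical label of 32GB 2Rx4 DDR5-4800 R would identify a 32 GB dual-rank DIMM with a 4-bit data width, the DDR5 generation, a data rate code of 4800, and the RDIMM type.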
HDDs and SSDs
Drive numbering
The server provides different drive numbering schemes for different drive configurations at the server front and rear.
Figure 12 Drive numbering at the server front
Figure 13 Drive numbering at the server rear
Figure 14 Drive numbering for 8E1.S drive configuration
Drive LEDs
The server supports SAS, SATA, and NVMe (including E1.S) drives, of which SAS and SATA drives support hot swapping and NVMe drives support hot insertion and managed hot removal. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.
For more information about OSs that support hot insertion and managed hot removal of NVMe drives, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.
Figure 15 shows the location of the LEDs on a drive.
Figure 15 Drive LEDs
(1) Fault/UID LED |
(2) Present/Active LED |
Figure 16 E1.S drive LEDs
(1) Fault/UID LED |
(2) Present/Active LED |
To identify the status of a SAS or SATA drive, use Table 11. To identify the status of an NVMe drive, use Table 12.
Table 11 SAS/SATA drive LED description
Fault/UID LED status |
Present/Active LED status |
Description |
Flashing amber (0.5 Hz) |
Steady green/Flashing green (4.0 Hz) |
A drive failure is predicted. As a best practice, replace the drive before it fails. |
Steady amber |
Steady green/Flashing green (4.0 Hz) |
The drive is faulty. Replace the drive immediately. |
Steady blue |
Steady green/Flashing green (4.0 Hz) |
The drive is operating correctly and is selected by the RAID controller. |
Off |
Flashing green (4.0 Hz) |
The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive. |
Off |
Steady green |
The drive is present but no data is being read or written to the drive. |
Off |
Off |
The drive is not securely installed. |
Table 12 NVMe drive LED description
Fault/UID LED status |
Present/Active LED status |
Description |
Flashing amber (4 Hz) |
Off |
The drive is in the hot insertion process. |
Steady amber |
Steady green/Flashing green (4.0 Hz) |
The drive is faulty. Replace the drive immediately. |
Steady blue |
Steady green/Flashing green (4.0 Hz) |
The drive is operating correctly and selected by the RAID controller. |
Off |
Flashing green (4.0 Hz) |
The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive. |
Off |
Steady green |
The drive is present but no data is being read or written to the drive. |
Off |
Off |
The drive is not securely installed. |
Drive backplanes
The server supports the following types of drive backplanes:
· SAS/SATA drive backplanes—Support only SAS/SATA drives.
· UniBay drive backplanes—Support both SAS/SATA and NVMe drives. You must connect both SAS/SATA and NVMe data cables. The number of supported drives varies by drive cabling.
· X SAS/SATA+Y UniBay drive backplanes—Support SAS/SATA drives in all slots and support NVMe drives in certain slots.
¡ X: Number of slots supporting only SAS/SATA drives.
¡ Y: Number of slots supporting both SAS/SATA and NVMe drives.
For UniBay drive backplanes and X SAS/SATA+Y UniBay drive backplanes:
· The two drive types are supported only when both SAS/SATA and NVMe data cables are connected.
· The number of supported SAS/SATA drives and the number of supported NVMe drives vary by cable connection.
Front 24LFF drive backplane (18SAS/SATA+6UniBay, UN-BP-24LFF-P36-G6)
The UN-BP-24LFF-P36-G6 drive backplane is installed at the server front to support up to 24 3.5-inch SAS/SATA drives. The drive backplane is integrated with two PMC Expanders. It provides not only two x8 Mini-SAS-HD uplink connectors for connecting storage controllers, but also six downlink connectors for connecting other drive backplanes to support more drives.
Figure 17 24LFF drive backplane
Table 13 24LFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
x8 Mini-SAS-HD uplink connector 1** |
SAS-PORT 1 |
2 |
x8 Mini-SAS-HD uplink connector 8* |
SAS-PORT 8 |
3 |
SlimSAS connector 1 (PCIe4.0 x8), supporting NVMe drives (for drives 0 and 1) |
NVMe PORT1 |
4 |
SlimSAS connector 2 (PCIe4.0 x8), supporting NVMe drives (for drives 2 and 3) |
NVMe PORT2 |
5 |
SlimSAS connector 3 (PCIe4.0 x8), supporting NVMe drives (for drives 4 and 5) |
NVMe PORT3 |
6 |
AUX connector 1 |
AUX1 |
7 |
AUX connector 2 |
AUX2 |
8 |
x4 Mini-SAS-HD downlink connector 4 |
SAS-PORT 4 |
9 |
x4 Mini-SAS-HD downlink connector 3 |
SAS-PORT 3 |
10 |
Power connector 1 |
PWR1 |
11 |
Power connector 2 |
PWR2 |
12 |
AUX connector 3 |
AUX3 |
13 |
x4 Mini-SAS-HD downlink connector 6 |
SAS-PORT 6 |
14 |
x4 Mini-SAS-HD downlink connector 7 |
SAS-PORT 7 |
15 |
x4 Mini-SAS-HD downlink connector 5 |
SAS-PORT 5 |
16 |
x4 Mini-SAS-HD downlink connector 2 |
SAS-PORT 2 |
· *: The connector can be enabled or disabled, depending on the number of storage controllers connected to the drive backplane. ¡ If only one storage controller is connected, the connector is disabled. ¡ If two storage controllers are connected, the storage controller connected to this connector manages all SAS/SATA drives connected to the other drive backplanes that are connected to the downlink connectors on this backplane through an Expander. · **: The locations of the SAS/SATA drives managed by the storage controller connected to this connector vary by the number of storage controllers connected to the backplane. ¡ If one storage controller is connected, the storage controller connected to this connector can manage all SAS/SATA drives connected to this backplane as well as all SAS/SATA drives connected to other backplanes that are connected to downlink connectors on this backplane. ¡ If two storage controllers are connected, the storage controller connected to this connector can manage only the SAS/SATA drives connected to this backplane. · Use the Expander firmware version that matches the number of storage controllers connected to this backplane. ¡ If one storage controller is connected, use the single-version Expander firmware. ¡ If two storage controllers are connected, use the dual-version Expander firmware. · For information about drive numbering, see Figure 12. |
Front 24LFF drive backplane (18SAS/SATA+6UniBay, UN-BP-24LFF-P48-G6)
The UN-BP-24LFF-P48-G6 drive backplane is installed at the server front to support up to 24 3.5-inch SAS/SATA drives. The drive backplane is integrated with one PMC Expander. It provides not only one x8 Mini-SAS-HD uplink connector for connecting a storage controller, but also four downlink connectors for connecting other drive backplanes to support more drives.
Figure 18 24LFF drive backplane
Table 14 24LFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
x8 Mini-SAS-HD uplink connector 1** |
SAS-PORT 1 |
2 |
SlimSAS connector 1 (PCIe3.0 x8), supporting NVMe drives (for drives 0 and 1) |
NVMe PORT1 |
3 |
SlimSAS connector 2 (PCIe3.0 x8), supporting NVMe drives (for drives 2 and 3) |
NVMe PORT2 |
4 |
SlimSAS connector 3 (PCIe3.0 x8), supporting NVMe drives (for drives 4 and 5) |
NVMe PORT3 |
5 |
AUX connector 2 |
AUX2 |
6 |
AUX connector 1 |
AUX1 |
7 |
Power connector 1 |
PWR1 |
8 |
Power connector 2 |
PWR2 |
9 |
x4 Mini-SAS-HD downlink connector 7 |
SAS-PORT 7 |
10 |
x4 Mini-SAS-HD downlink connector 6 |
SAS-PORT 6 |
11 |
AUX connector 3 |
AUX3 |
12 |
x4 Mini-SAS-HD downlink connector 5 |
SAS-PORT 5 |
13 |
x4 Mini-SAS-HD downlink connector 2 |
SAS-PORT 2 |
· **: The locations of the SAS/SATA drives managed by the storage controller connected to this connector vary by the number of storage controllers connected to the backplane. ¡ If one storage controller is connected, the storage controller connected to this connector can manage all SAS/SATA drives connected to this backplane as well as all SAS/SATA drives connected to other backplanes that are connected to downlink connectors on this backplane. ¡ If two storage controllers are connected, the storage controller connected to this connector can manage only the SAS/SATA drives connected to this backplane. · Use the Expander firmware version that matches the number of storage controllers connected to this backplane. ¡ If one storage controller is connected, use the single-version Expander firmware. ¡ If two storage controllers are connected, use the dual-version Expander firmware. · For information about drive numbering, see Figure 12. |
Front 24LFF drive backplane (18SAS/SATA+6UniBay, UN-BP-24LFF-P68-G6)
The UN-BP-24LFF-P68-G6 drive backplane is installed at the server front to support up to 24 3.5-inch SAS/SATA drives. The drive backplane is integrated with one PMC Expander. It provides not only two x8 Mini-SAS-HD uplink connectors for connecting storage controllers, but also six downlink connectors for connecting other drive backplanes to support more drives.
Figure 19 24LFF drive backplane
Table 15 24LFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
SlimSAS connector 1 (PCIe4.0 x8), supporting NVMe drives (for drives 0 and 1) |
NVMe PORT1 |
2 |
SlimSAS connector 2 (PCIe4.0 x8), supporting NVMe drives (for drives 2 and 3) |
NVMe PORT2 |
3 |
SlimSAS connector 3 (PCIe4.0 x8), supporting NVMe drives (for drives 4 and 5) |
NVMe PORT3 |
4 |
AUX connector 2 |
AUX2 |
5 |
x4 Mini-SAS-HD downlink connector 3 |
SAS-PORT 3 |
6 |
x4 Mini-SAS-HD downlink connector 4 |
SAS-PORT 4 |
7 |
AUX connector 1 |
AUX1 |
8 |
Power connector 1 |
PWR1 |
9 |
Power connector 2 |
PWR2 |
10 |
AUX connector 3 |
AUX3 |
11 |
x4 Mini-SAS-HD downlink connector 7 |
SAS-PORT 7 |
12 |
x4 Mini-SAS-HD downlink connector 6 |
SAS-PORT 6 |
13 |
x4 Mini-SAS-HD downlink connector 5 |
SAS-PORT 5 |
14 |
x4 Mini-SAS-HD downlink connector 2 |
SAS-PORT 2 |
15 |
x8 Mini-SAS-HD uplink connector 8* |
SAS-PORT 8 |
16 |
x8 Mini-SAS-HD uplink connector 1** |
SAS-PORT 1 |
· *: The connector can be enabled or disabled, depending on the number of storage controllers connected to the drive backplane. ¡ If only one storage controller is connected, the connector is disabled. ¡ If two storage controllers are connected, the storage controller connected to this connector manages all SAS/SATA drives connected to the other drive backplanes that are connected to the downlink connectors on this backplane through an Expander. · **: The locations of the SAS/SATA drives managed by the storage controller connected to this connector vary by the number of storage controllers connected to the backplane. ¡ If one storage controller is connected, the storage controller connected to this connector can manage all SAS/SATA drives connected to this backplane as well as all SAS/SATA drives connected to other backplanes that are connected to downlink connectors on this backplane. ¡ If two storage controllers are connected, the storage controller connected to this connector can manage only the SAS/SATA drives connected to this backplane. · Use the Expander firmware version that matches the number of storage controllers connected to this backplane. ¡ If one storage controller is connected, use the single-version Expander firmware. ¡ If two storage controllers are connected, use the dual-version Expander firmware. · For information about drive numbering, see Figure 12. |
Rear 12LFF SAS/SATA drive backplane
The PCA-BP-12LFF-R4300 G5 SAS/SATA drive backplane can be installed at the server rear to support twelve 3.5-inch SAS/SATA drives.
Figure 20 12LFF SAS/SATA drive backplane
Figure 21 12LFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
AUX connector |
AUX |
2 |
x8 Mini-SAS-HD connector (for SAS/SATA drives in the last eight slots connected to the backplane) |
SAS PORT 2 |
3 |
x4 Mini-SAS-HD connector (for SAS/SATA drives in the first four slots connected to the backplane) |
SAS PORT 1 |
4 |
Power connector |
PWR |
Rear 2LFF SAS/SATA drive backplane
The PCA-BP-2LFF-2U-G6-S 2LFF SAS/SATA drive backplane is installed at the server rear to support two 3.5-inch SAS/SATA drives.
Figure 22 2LFF SAS/SATA drive backplane
Figure 23 2LFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
x4 Mini-SAS-HD connector |
SAS PORT1 |
2 |
AUX connector |
AUX1 |
3 |
Power connector |
PWR1 |
Rear 4LFF SAS/SATA drive backplane
The PCA-BP-4LFF-2U-G6-S 4LFF SAS/SATA drive backplane is installed at the server rear to support four 3.5-inch SAS/SATA drives.
Figure 24 4LFF SAS/SATA drive backplane
Figure 25 4LFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
AUX connector |
AUX1 |
2 |
Power connector |
PWR1 |
3 |
x4 Mini-SAS-HD connector |
SAS PORT1 |
Mid 4LFF SAS/SATA drive backplane
The PCA-BP-4LFF-2U-M-G6-S 4LFF SAS/SATA drive backplane is installed in the mid 4LFF drive cage in the middle of the chassis to support up to four 3.5-inch SAS/SATA drives.
Figure 26 4LFF SAS/SATA drive backplane
Figure 27 4LFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
AUX connector |
AUX1 |
2 |
Power connector |
PWR1 |
3 |
x4 Mini-SAS-HD connector |
SAS PORT1 |
Rear 4SFF UniBay drive backplane
The PCA-BP-4SFF-4UniBay-2U-G6-S 4SFF UniBay drive backplane is installed at the server rear to support up to four 2.5-inch SAS/SATA/NVMe drives.
Figure 28 4SFF UniBay drive backplane
Figure 29 4SFF UniBay drive backplane description
Item |
Description |
Mark |
1 |
AUX connector |
AUX |
2 |
Power connector |
PWR |
3 |
MCIO connector B1/B2 (PCIe5.0 x8) |
NVME-B1/B2 |
4 |
MCIO connector B3/B4 (PCIe5.0 x8) |
NVME-B3/B4 |
5 |
x4 Mini-SAS-HD connector |
SAS PORT |
PCIe5.0 x8 description: · PCIe5.0: Fifth-generation signal speed. · x8: Bus bandwidth. |
Rear 2SFF UniBay drive backplane
The PCA-BP-2SFF-2UniBay-2U-G6-S UniBay drive backplane is installed at the server rear to support two 2.5-inch SAS/SATA/NVMe drives.
Figure 30 2SFF UniBay drive backplane
Figure 31 2SFF UniBay drive backplane description
Item |
Description |
Mark |
1 |
Power connector |
PWR |
2 |
x4 Mini-SAS-HD connector |
SAS PORT |
3 |
SlimSAS connector (PCIe4.0 x8) |
NVME |
4 |
AUX connector |
AUX |
PCIe4.0 x8 description: · PCIe4.0: Fourth-generation signal speed. · x8: Bus bandwidth. |
Rear 2SFF SAS/SATA drive backplane
The PCA-BP-2SFF-2U-G6-S 2SFF SAS/SATA drive backplane is installed at the server rear to support up to two 2.5-inch SAS/SATA drives.
Figure 32 2SFF SAS/SATA drive backplane
Figure 33 2SFF SAS/SATA drive backplane description
Item |
Description |
Mark |
1 |
Power connector |
PWR |
2 |
x4 Mini-SAS-HD connector |
SAS PORT |
4 |
AUX connector |
AUX |
Rear 8E1.S drive backplane
The PCA-BP-8E1S-2U-G6-S 8E1.S drive backplane is installed at the server rear to support up to eight 15 mm E1.S drives.
Figure 34 E1.S drive backplane
Figure 35 8E1.S drive backplane description
Item |
Description |
Mark |
1 |
AUX connector |
AUX |
2 |
MCIO connector A1/A2 (PCIe5.0 x8) |
EDSFF-A1/A2 |
3 |
Power connector 1 |
PWR 1 |
4 |
MCIO connector A3/A4 (PCIe5.0 x8) |
EDSFF-A3/A4 |
5 |
MCIO connector B1/B2 (PCIe5.0 x8) |
EDSFF-B1/B2 |
6 |
MCIO connector B3/B4 (PCIe5.0 x8) |
EDSFF-B3/B4 |
PCIe5.0 x8 description: · PCIe5.0: Fifth-generation signal speed. · x8: Bus bandwidth. |
Riser cards
To expand the server with PCIe modules, install riser cards on the PCIe riser connectors.
The server supports the following riser cards:
· RC-3FHFL-2U-G6
· RC-3FHHL-2U-G6
· RC-2HHHL-R4-2U-G6
· RC-2FHHL-2U-G6
· RC-1FHHL-2U-G6
For more information about riser cards and installation guidelines for riser cards, see "Replacing riser cards and PCIe modules."
Riser card guidelines
Each PCIe slot in a riser card can supply a maximum of 75 W of power to the PCIe module. You must connect a separate power cord to the PCIe module if it requires more than 75 W of power.
If a processor is faulty or absent, the PCIe slots connected to it are unavailable.
The slot number of a PCIe slot varies by the PCIe riser connector that holds the riser card. For example, slot 1/4 represents PCIe slot 1 if the riser card is installed on connector 1 and represents PCIe slot 4 if the riser card is installed on connector 2. For information about PCIe riser connector locations, see "Rear panel view."
RC-3FHFL-2U-G6
Figure 36 RC-3FHFL-2U-G6 (1)
(1) PCIe5.0 x16 (16,8,4,2,1) slot 2/5 |
(2) PCIe5.0 x16 (16,8,4,2,1) slot 3/6 |
(3) GPU module power connector |
(4) PCIe5.0 x16 (16,8,4,2,1) slot 1/4* |
PCIe5.0 x16 (16,8,4,2,1) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1. |
|
NOTE: slot 1/4: When the riser card is installed in PCIe riser connector 1, this slot corresponds to PCIe slot 1. When the riser card is installed in PCIe riser connector 2, this slot corresponds to PCIe slot 4. This rule applies to all the other PCIe slots on the riser card. For information about PCIe slots, see "Rear panel view." |
Figure 37 RC-3FHFL-2U-G6 (2)
(5) MCIO connector 2-C |
(6) MCIO connector 2-A |
(7) MCIO connector 1-A |
(8) MCIO connector 1-C |
PCIe5.0 x16 (16,8,4,2,1) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1. |
RC-3FHHL-2U-G6
Figure 38 RC-3FHHL-2U-G6 (1)
(1) PCIe5.0 x16 (8,4,2,1) slot 2/5 |
(2) PCIe5.0 x16 (8,4,2,1) slot 3/6 |
(3) PCIe5.0 x16 (16,8,4,2,1) slot 1/4* |
|
PCIe5.0 x16 (16,8,4,2,1) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1. |
|
NOTE: slot 1/4: When the riser card is installed in PCIe riser connector 1, this slot corresponds to PCIe slot 1. When the riser card is installed in PCIe riser connector 2, this slot corresponds to PCIe slot 4. This rule applies to all the other PCIe riser card slots. For information about PCIe slots, see "Rear panel view." |
Figure 39 RC-3FHHL-2U-G6 (2)
(4) MCIO connector 1-A |
(5) MCIO connector 1-C |
PCIe5.0 x16 (16,8,4,2,1) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (16,8,4,2,1): Compatible bus bandwidth, including x16, x8, x4, x2, and x1. |
RC-2HHHL-R4-2U-G6
Figure 40 RC-2HHHL-R4-2U-G6
(1) SLOT 2 cable |
(2) AUX connector |
(3) PCIe5.0 x16 (8,4,2,1) slot 8/10 |
(4) PCIe5.0 x16 (8,4,2,1) slot 7/9 |
(5) Power connector |
(6) SLOT 1 cable |
PCIe5.0 x16 (8,4,2,1) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1. |
|
NOTE: slot 7/9: When the riser card is installed in PCIe riser connector 3, this slot corresponds to PCIe slot 7. When the riser card is installed in PCIe riser connector 4, this slot corresponds to PCIe slot 9. This rule applies to all the other PCIe riser card slots. For information about PCIe slots, see "Rear panel view." |
RC-2FHHL-2U-G6
Figure 41 RC-2FHHL-2U-G6
(1) PCIe5.0 x16 (8,4,2,1) slot 3/6* |
(2) PCIe5.0 x16 (8,4,2,1) slot 2/5 |
PCIe5.0 x16 (8,4,2,1) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1. |
|
NOTE: slot 3/6: When the riser card is installed in PCIe riser connector 1, this slot corresponds to PCIe slot 3. When the riser card is installed in PCIe riser connector 2, this slot corresponds to PCIe slot 6. This rule applies to all the other PCIe riser card slots. For information about PCIe slots, see "Rear panel view." |
RC-1FHHL-2U-G6
Figure 42 RC-1FHHL-2U-G6
(1) PCIe5.0 x16 (8,4,2,1) slot 3/6* |
PCIe5.0 x16 (8,4,2,1) description: · PCIe5.0: Fifth-generation signal speed. · x16: Connector bandwidth. · (8,4,2,1): Compatible bus bandwidth, including x8, x4, x2, and x1. |
|
NOTE: slot 3/6: When the riser card is installed in PCIe riser connector 1, this slot corresponds to PCIe slot 3. When the riser card is installed in PCIe riser connector 2, this slot corresponds to PCIe slot 6. This rule applies to all the other PCIe riser card slots. For information about PCIe slots, see "Rear panel view." |
Fan modules
The server supports four hot-swappable fan modules with N+1 redundancy. Figure 43 shows the layout of the fan modules in the chassis.
The server supports variable fan speeds. The system automatically adjusts fan speed based on the actual system temperature to balance cooling and noise.
During system POST and operation, the server will be gracefully powered off through HDM if the temperature detected by any sensor in the server reaches the critical threshold. The server will be powered off directly if the temperature of any key components such as processors exceeds the upper threshold. For more information about the thresholds and detected temperatures, access the HDM Web interface and see HDM online help.
PCIe slot
The server supports installing riser cards and OCP network adapters. The PCIe slot numbers vary by configuration.
Figure 44 PCIe slot numbering when riser cards are installed at the server rear
PCIe modules
Typically, the PCIe modules are available in the following standard form factors:
· LP—Low profile.
· FHHL—Full height and half length.
· FHFL—Full height and full length.
· HHHL—Half height and half length.
· HHFL—Half height and full length.
The following PCIe modules require PCIe I/O resources: Storage controllers, FC HBAs, and GPU modules. Make sure the number of such PCIe modules installed does not exceed 11.
Storage controllers
The server supports the following types of storage controllers:
· Embedded VROC controller—Embedded in the server and does not require installation.
· Standard storage controller—Comes in a standard PCIe form factor and typically requires a riser card for installation.
For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages. This module provides a flash card and a supercapacitor. When a system power failure occurs, the supercapacitor provides power for a minimum of 20 seconds. During this interval, the storage controller can transfer data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data. If the storage controller contains a built-in flash card, you can order only a supercapacitor.
Embedded VROC controller
Item |
Specifications |
Type |
Embedded in the PCH of the system board |
Number of internal ports |
Four internal SAS ports (compatible with SATA) |
Connectors |
One onboard ×4 SlimSAS connector |
Drive interface |
6 Gbps SATA 3.0. Supports drive hot swapping |
RAID levels |
0, 1, 5, 10 |
Built-in cache memory |
N/A |
Built-in flash |
N/A |
Power fail safeguard module |
Not supported |
Firmware upgrade |
Upgrade with the BIOS |
|
NOTE: The Platform Controller Hub (PCH) manages and coordinates the operation of various hardware components, as well as communication with external devices. |
Standard storage controllers
For more information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.
NVMe VROC modules
Model |
RAID levels |
Compatible NVMe SSDs |
NVMe-VROC-Key-S |
0, 1, 10 |
All NVMe drives |
NVMe-VROC-Key-P |
0, 1, 5, 10 |
All NVMe drives |
SATA M.2 expander module
Figure 45 SATA M.2 expander module front view
(1) SATA data cable connector |
(2) SATA M.2 SSD card slot 1 |
Figure 46 SATA M.2 expander module rear view
(1) SATA M.2 SSD card slot 2 |
NVMe M.2 expander module
Figure 47 NVMe M.2 expander module
(1) NVMe M.2 SSD card slot 1 |
(2) NVMe M.2 SSD card slot 2 |
Server management module
The server management module is installed on the system board to provide I/O connectors and HDM out-of-band features for the server.
Figure 48 Server management module
(1) VGA connector |
(2) Two USB 3.0 connectors |
(3) HDM dedicated management interface |
(4) UID LED |
(5) HDM serial port |
(6) iFIST module (optional) |
(7) NCSI connector |
|
Serial & DSD module
The Serial & DSD module can be installed in the slot on the server rear panel. The module provides two SD card slots, and the two SD cards form RAID 1 by default.
Figure 49 Serial & DSD module
Table 16 Component description
Item |
Description |
1 |
SD card slot 1 |
2 |
SD card slot 2 |
3 |
Serial port |
B/D/F information
You can obtain B/D/F information by using one of the following methods:
· BIOS log—Search for the dumpiio keyword in the BIOS log.
· UEFI shell—Execute the pci command. For information about how to execute the command, execute the help pci command.
· Operating system—The obtaining method varies by OS.
¡ For Linux, execute the lspci command, as shown in the example after this list.
If Linux does not provide the lspci command by default, use the operating system's package manager to obtain and install the pciutils package.
¡ For Windows, install the pciutils package, and then execute the lspci command.
¡ For VMware, execute the lspci command.
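As a minimal example of the Linux method (assuming the pciutils package is installed), the following commands list B/D/F addresses and then query one device. The bus address 0000:31:00.0 is only an example; use an address reported on your own system.
lspci                        # lists all PCIe devices with their B/D/F addresses
lspci -vvv -s 0000:31:00.0   # shows detailed information for the device at the example B/D/F address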
Appendix C Managed removal of OCP network adapters
Before you begin
Before you perform a managed removal of an OCP network adapter, perform the following tasks:
· Use the OS compatibility query tool at http://www.h3c.com/en/home/qr/default.htm?id=66 to obtain operating systems that support managed removal of OCP network adapters.
· Make sure the BIOS version is 6.00.15 or higher, the HDM2 version is 1.13 or higher, and the CPLD version is V001 or higher.
Performing a hot removal
This section uses an OCP network adapter in slot 16 as an example.
To perform a hot removal:
1. Access the operating system.
2. Execute the dmidecode -t 9 command to search for the bus address of the OCP network adapter. As shown in Figure 50, the bus address of the OCP network adapter in slot 16 is 0000:31:00.0.
Figure 50 Searching for the bus address of an OCP network adapter by slot number
3. Execute the echo 0 > /sys/bus/pci/slots/slot number/power command, where slot number represents the number of the slot where the OCP network adapter resides.
Figure 51 Executing the echo 0 > /sys/bus/pci/slots/slot number/power command
4. Identify whether the OCP network adapter has been disconnected:
¡ Observe the OCP network adapter LED. If the LED is off, the OCP network adapter has been disconnected.
¡ Execute the lspci -vvv -s 0000:31:00.0 command. If no output is displayed, the OCP network adapter has been disconnected.
Figure 52 Identifying OCP network adapter status
5. Replace the OCP network adapter.
6. Identify whether the OCP network adapter has been connected:
¡ Observe the OCP network adapter LED. If the LED is on, the OCP network adapter has been connected.
¡ Execute the lspci -vvv -s 0000:31:00.0 command. If an output is displayed, the OCP network adapter has been connected.
Figure 53 Identifying OCP network adapter status
7. Verify that no exception exists. If any exception occurs, contact H3C Support.
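For reference, steps 2 through 4 of the removal procedure can be combined into the short shell sequence below. This is a minimal sketch that assumes a Linux OS supporting managed removal; the slot number 16 and the bus address 0000:31:00.0 are the example values used in this section and must be replaced with the values reported on your server.
dmidecode -t 9                          # step 2: find the bus address of the OCP network adapter by its slot number
echo 0 > /sys/bus/pci/slots/16/power    # step 3: request managed removal of the adapter in slot 16 (example slot number)
lspci -vvv -s 0000:31:00.0              # step 4: no output means the adapter has been disconnected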
Appendix D Environment requirements
About environment requirements
The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.
Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance. In a real data center, the server cooling performance might decrease because of adverse external factors, including poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.
General environment requirements
Item |
Specifications |
Operating temperature |
Minimum: 5°C (41°F) Maximum: 40°C (104°F) The maximum temperature varies by hardware option presence. For more information, see "Operating temperature requirements." |
Storage temperature |
–40°C to +70°C (–40°F to +158°F) |
Operating humidity |
8% to 90%, noncondensing |
Storage humidity |
5% to 95%, noncondensing |
Operating altitude |
–60 m to +3000 m (–196.85 ft to +9842.52 ft) The allowed maximum temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude above 900 m (2952.76 ft) |
Storage altitude |
–60 m to +5000 m (–196.85 ft to +16404.20 ft) |
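For example, applying the operating altitude rule above, at 3000 m (9842.52 ft) the allowed maximum operating temperature is reduced by (3000 - 900) / 100 × 0.33°C ≈ 6.9°C (12.5°F) compared with the value allowed at 900 m (2952.76 ft) or below.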
Operating temperature requirements
General guidelines
The cooling capability of the server depends on the power density of devices within the cabinet, the cooling capability of the cabinet, and the spacing between the server and other devices. When the server is stacked with other devices, the maximum operating temperature of the server might decrease.
When mid drives are installed, processors of 165 W are not supported, and neither are smart network adapters or GPUs.
When CRPS-195mm 2000 W Platinum AC power supplies are configured, the maximum operating temperatures are 3°C (5.4°F) lower than those described in the table.
Slot 3 does not support a GPU, a network adapter of 100G or higher, or a smart network adapter.
When a fan module fails or a single rotor of a dual-rotor fan module fails, the maximum server operating temperature decreases by 5°C (9°F). The performance of GPUs and of processors with a TDP of more than 165 W might decrease.
Table 17 Operating temperature requirements
Drive configuration |
Maximum operating temperature: 30°C |
Maximum operating temperature: 35°C |
Maximum operating temperature: 40°C |
24LFF+12LFF |
· The server does not support processors of 165 W when A2/L4 GPUs are installed · The server does not support processors of 270 W when HDD drives (excluding rear 12LFF drives), rear NVMe SSDs, and E1.S drives are installed · The server does not support smart network adapters |
· The server does not support processors of 205 W when HDD drives (excluding rear 12LFF drives), rear NVMe SSDs, and E1.S drives are installed · The server does not support processors of 300 W when SATA/SAS SSD drives (excluding rear 12LFF drives) are installed · The server does not support processors of 300 W when network adapters of 100G or higher are installed · Smart network adapters are not supported · Mid drives are not supported |
· GPUs are not supported · NVMe SSDs (including U.2 and AIC) are not supported · Mid drives and rear drives are not supported · Processors with power consumption greater than 135 W are not supported · Network adapters of 100G or higher, OCP network adapters of 100G or higher, and smart network adapters are not supported |
Appendix E Product recycling
New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualification are contracted to New H3C to process the recycled hardware in an environmentally responsible way.
For product recycling services, contact New H3C at
· Tel: 400-810-0504
· E-mail: [email protected]
· Website: http://www.h3c.com
Appendix F Glossary
Term |
Description |
B |
|
BIOS |
Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's management module. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality. |
C |
|
CPLD |
Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits. |
G |
|
GPU module |
Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance. |
H |
|
HDM |
Hardware Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server. |
Hot swapping |
A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting system operation. |
K |
|
KVM |
KVM is a management method that allows remote users to use their local video display, keyboard, and mouse to monitor and control the server. |
N |
|
NVMe VROC module |
A module that works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives. |
R |
|
RAID |
Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage performance and data security. |
Redundancy |
A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails. |
S |
|
Security bezel |
A locking bezel mounted to the front of a server to prevent unauthorized access to modules such as hard drives. |
U |
A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks. |
UniBay drive backplane |
A UniBay drive backplane supports both SAS/SATA and NVMe drives. |
UniSystem |
UniSystem is server management software provided by H3C for easy and extensible server management. It can guide users through quick server configuration and provides an API for users to develop their own management tools. |
V |
|
VMD |
VMD provides hot removal, management and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability. |
Appendix G Acronyms
Acronym |
Full name |
B |
|
BIOS |
Basic Input/Output System |
C |
|
CMA |
Cable Management Arm |
CPLD |
Complex Programmable Logic Device |
D |
|
DCPMM |
Data Center Persistent Memory Module |
DDR |
Double Data Rate |
DIMM |
Dual In-Line Memory Module |
DRAM |
Dynamic Random Access Memory |
DVD |
Digital Versatile Disc |
G |
|
GPU |
Graphics Processing Unit |
H |
|
HBA |
Host Bus Adapter |
HDD |
Hard Disk Drive |
HDM |
Hardware Device Management |
I |
|
IDC |
Internet Data Center |
iFIST |
integrated Fast Intelligent Scalable Toolkit |
K |
|
KVM |
Keyboard, Video, Mouse |
L |
|
LRDIMM |
Load Reduced Dual Inline Memory Module |
N |
|
NCSI |
Network Controller Sideband Interface |
NVMe |
Non-Volatile Memory Express |
P |
|
PCIe |
Peripheral Component Interconnect Express |
POST |
Power-On Self-Test |
R |
|
RDIMM |
Registered Dual Inline Memory Module |
S |
|
SAS |
Serial Attached Small Computer System Interface |
SATA |
Serial ATA |
SD |
Secure Digital |
SDS |
Secure Diagnosis System |
SFF |
Small Form Factor |
sLOM |
Small form factor Local Area Network on Motherboard |
SSD |
Solid State Drive |
T |
|
TCM |
Trusted Cryptography Module |
TDP |
Thermal Design Power |
TPM |
Trusted Platform Module |
U |
|
UID |
Unit Identification |
UPI |
Ultra Path Interconnect |
UPS |
Uninterruptible Power Supply |
USB |
Universal Serial Bus |
V |
|
VROC |
Virtual RAID on CPU |
VMD |
Volume Management Device |