Contents
Appendix A Server specifications
Server models and chassis view
Front panel view of the server
Processor mezzanine board components
Appendix B Component specifications
DRAM DIMM rank classification label
25SFF drive backplane 1 (17SAS/SATA+8UniBay)
25SFF drive backplane 2 (17SAS/SATA+8UniBay)
Appendix C Hot swapping and managed hot removal of NVMe drives
Performing a hot removal in Windows
Performing a hot removal in Linux
Performing a hot removal in VMware
Performing a managed hot removal in Windows
Performing a managed hot removal in Linux
Performing a hot installation in Windows
Performing a hot installation in Linux
Performing a hot installation in VMware
Verifying the RAID status of the installed NVMe drive
Appendix D Environment requirements
About environment requirements
General environment requirements
Operating temperature requirements
Appendix A Server specifications
The information in this document might differ from your product if it contains custom configuration options or features.
Figures in this document are for illustration only.
Server models and chassis view
H3C UniServer R6900 G5 servers are 4U rack servers that support one processor mezzanine board for expansion to accommodate up to four Cedar Island processors. The servers provide improved reliability and availability, and are suitable for critical service loads, virtualization, server integration, databases, service processing, and data-sensitive applications.
Figure 1 Chassis view
Technical specifications
Item |
Specifications |
Dimensions (H × W × D) |
· Without a security bezel: 174.8 × 447 × 799 mm (6.88 × 17.60 × 31.46 in) · With a security bezel: 174.8 × 447 × 830 mm (6.88 × 17.60 × 32.68 in) |
Max. weight |
55.82 kg (123.06 lb) |
Processors |
4 × Cedar Island processors (Up to 3.1 GHz base frequency, maximum 250 W power consumption, and 33 MB cache per processor) |
Power consumption |
The power consumption varies by configuration. For more information, use the server power consumption evaluation tool at http://www.h3c.com/en/home/qr/default.htm?id=291 |
Memory |
A maximum of 48 DIMMs. Supports a mixture of DDR4 and PMem 200 DIMMs. |
Storage controllers |
· Embedded VROC storage controller · High-performance standard storage controller · NVMe VROC module · Dual SD card extended module (supports RAID 1) |
Chipset |
Intel C621A Lewisburg chipset |
VGA chip |
Aspeed AST2500 |
Network connectors |
· 1 × embedded 1 Gbps HDM dedicated port · 1 × OCP 3.0 network adapter connector (for NCSI-capable OCP 3.0 network adapters) |
Integrated graphics card |
The graphics chip is integrated into the BMC chip (AST2500) to provide 64 MB of video memory and a maximum resolution of 1920 × 1200 @ 60 Hz (32 bpp), where: · Resolution: ¡ 1920 × 1200: 1920 horizontal pixels and 1200 vertical pixels. ¡ 60 Hz: Screen refresh rate, 60 times per second. ¡ 32 bpp: Color depth. The higher the value, the more colors that can be displayed. · A maximum resolution of 1920 × 1200 pixels is supported only after the server is installed with a graphics card driver compatible with the operating system version. Otherwise, the server supports only the default resolution of the operating system. · If you attach monitors to both the front and rear VGA connectors, only the monitor connected to the front VGA connector is available. |
Connectors |
· 6 × USB connectors (two on the system board, two at the server rear, and two at the server front) · 10 × embedded SATA connectors: ¡ 1 × Mini-SAS-HD connector (x8 SATA) ¡ 2 × SATA connectors (x1 SATA) · 1 × RJ-45 HDM dedicated port (at the server rear) · 2 × VGA connectors (one at the server rear and one at the server front) · 1 × BIOS serial port (at the server rear) · 1 × dedicated HDM management interface (at the server front) |
Expansion slots |
19 × PCIe 3.0 slots: · 18 × standard slots · 1 × OCP 3.0 network adapter slot |
External USB optical drives |
|
Management |
· Supports the HDM agentless management tool (with an independent management port) · Supports H3C iFIST and UniSystem management software · Supports an LCD management module · Supports 64 MB local memory · Supports the U-Center data center management platform (optional) |
Security |
· Supports chassis intrusion detection (chassis open alarm) · Supports TCM and TPM · Supports dual-factor authentication · Supports the silicon RoT firmware protection module (optional) · Supports the PCIe protection module (optional) · Supports firewall, IPS, anti-virus, and QoS features |
Power supplies |
4 × hot-swappable power supplies, N + N redundancy |
Standards |
CCC, SEPA |
Components
Figure 2 R6900 G5 server components
Table 1 R6900 G5 server components
Item |
Description |
(1) Chassis access panel |
N/A |
(2) Storage controller |
Provides RAID capability to SAS/SATA drives, including RAID configuration and RAID scale-up. It supports online upgrade of the controller firmware and remote configuration. |
(3) Standard PCIe network adapter |
Installed in a standard PCIe slot to provide network ports. |
(4) SATA M.2 SSD |
Provides data storage space for the server. |
(5) Chassis open-alarm module |
Detects if the access panel is removed. The detection result can be displayed from the HDM Web interface. |
(6) SATA M.2 SSD expander module |
Provides M.2 SSD slots. |
(7) OCP 3.0 network adapter |
Installed on the OCP adapter on the system board. |
(8) Riser card blank |
Installed on an empty PCIe riser connector to ensure good ventilation. |
(9) GPU module |
Provides computing capability for image processing and AI services. |
(10) Encryption module |
Provides encryption services for the server to enhance data security. |
(11) OCP adapter |
Installed on the system board to support an OCP 3.0 network adapter. |
(12) System battery |
Supplies power to the system clock to ensure system time correctness. |
(13) Riser card |
Provides PCIe slots. |
(14) Riser card cage |
Accommodates riser cards. |
(15) NVMe VROC module |
Works with Intel VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives. |
(16) Dual SD card extended module |
Provides two SD card slots. |
(17) Power supply |
Supplies power to the server. The power supplies support hot swapping and N+N redundancy. |
(18) Chassis |
N/A |
(19) Chassis ears |
Attach the server to the rack. The right ear is integrated with the front I/O component, and the left ear is integrated with VGA and USB 2.0 connectors. The serial label pull tab on the left ear provides the HDM default login settings and document QR code. |
(20) LCD smart management module |
Displays basic server information, operating status, and fault information. Used together with HDM event logs, it helps users quickly locate faulty components and troubleshoot the server to keep it operating correctly. |
(21) Drive backplane |
Provides power and data channels for drives. |
(22) Drive |
Provides data storage space. Drives support hot swapping. The server supports SSDs and HDDs and various drive interfaces, including SAS, SATA, M.2, and PCIe. |
(23) Fan module |
Helps server ventilation. Fan modules support hot swapping and N+1 redundancy. |
(24) Fan module cage |
Accommodates fan modules. |
(25) Air baffle |
Provides ventilation aisles for processor heatsinks and memory modules and provides support for the supercapacitor. |
(26) System board |
One of the most important parts of the server, on which components such as processors, memory, and fan modules are installed. It integrates basic server components, including the BIOS chip and PCIe connectors. |
(27) Memory |
Stores computing data and data exchanged with external storage temporarily. The server supports DDR4 and PMem200 memory. |
(28) Supercapacitor holder |
Secures a supercapacitor in the chassis. |
(29) Supercapacitor |
Supplies power to the flash card on the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when power outage occurs. |
(30) Processor retaining bracket |
Attaches a processor to the heatsink. |
(31) Processor |
Integrates memory and PCIe controllers to provide data processing capabilities for the server. |
(32) Processor socket cover |
Installed over an empty processor socket to protect pins in the socket. |
(33) Processor heatsink |
Cools the processor. |
(34) Processor mezzanine board |
Provides extension slots for processors and memory modules. |
Front panel
Front panel view of the server
Figure 3 shows the front panel view.
Table 2 Front panel description
Item |
Description |
1 |
Bay 1 for 8SFF SAS/SATA drives (optional) |
2 |
Bay 2 for 8SFF SAS/SATA drives (optional) |
3 |
Bay 3 for 8SFF SAS/SATA drives (optional) NOTE: To install 8SFF SAS/SATA drives in this bay, an 8SFF drive backplane is required. |
4 |
Drive or LCD smart management module (optional) |
5 |
USB 3.0 connector |
6 |
8SFF UniBay drives (optional) NOTE: To install 8SFF UniBay drives, a 25SFF drive backplane is required. |
7 |
Drive (optional) |
8 |
Bay 6 for 8SFF UniBay drives (optional) |
9 |
Bay 5 for 8SFF UniBay drives (optional) |
10 |
Bay 4 for 8SFF UniBay drives (optional) |
11 |
Serial label pull tab |
12 |
HDM dedicated management connector |
13 |
USB 3.0 connector |
14 |
VGA connector |
|
NOTE: A drive backplane is required if you install SAS/SATA or UniBay drives. For more information about drive backplanes, see "Drive backplanes." |
LEDs and buttons
Front panel LEDs and buttons
The LEDs and buttons are the same on all server models. Figure 4 shows the front panel LEDs and buttons. Table 3 describes the status of the front panel LEDs.
Figure 4 Front panel LEDs and buttons
(1) Power on/standby button and system power LED |
(2) OCP network adapter Ethernet port LED |
(3) Health LED |
(4) UID button LED |
Table 3 LEDs and buttons on the front panel
Button/LED |
Status |
Power on/standby button and system power LED |
· Steady green—The system has started. · Flashing green (1 Hz)—The system is starting. · Steady amber—The system is in standby state. · Off—No power is present. Possible reasons: ¡ No power source is connected. ¡ No power supplies are present. ¡ The installed power supplies are faulty. ¡ The system power cords are not connected correctly. |
OCP network adapter Ethernet port LED |
· Steady green—A link is present on a port of the OCP 3.0 network adapter. · Flashing green (1 Hz)—A port of the OCP 3.0 network adapter is receiving or sending data. · Off—No link is present on any port of the OCP 3.0 network adapter. NOTE: A server supports a maximum of one OCP 3.0 network adapter. |
Health LED |
· Steady green—The system is operating correctly or a minor alarm is present. · Flashing green (4 Hz)—HDM is initializing. · Flashing amber (1 Hz)—A major alarm is present. · Flashing red (1 Hz)—A critical alarm is present. If a system alarm is present, log in to HDM to obtain more information about the system running status. |
UID button LED |
· Steady blue—UID LED is activated. The UID LED can be activated by using the following methods: ¡ Press the UID button LED. ¡ Activate the UID LED from HDM. · Flashing blue: ¡ 1 Hz—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server. ¡ 4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds. · Off—UID LED is not activated. |
Intelligent security bezel light
The intelligent security bezel provides hardened security and uses effect lighting to visualize the operating and health status of the server, which helps with inspection and fault location. The default effect light is shown in Figure 5.
Table 4 Intelligent security bezel effect light
System status |
Light status |
Standby |
Steady white: The system is in standby state. |
Startup |
· Beads turn on white from the middle in turn—POST is in progress. · Beads turn on white from the middle three times—POST has finished. |
Running |
· Breathing white (gradient at 0.2 Hz)—Normal state, indicating the system load by the percentage of beads turning on from the middle to the two sides of the security bezel. ¡ No load—Less than 10%. ¡ Light load—10% to 50%. ¡ Medium load—50% to 80%. ¡ Heavy load—More than 80%. · Breathing white (gradient at 1 Hz)—A predictive alarm is present. · Flashing amber (1 Hz)—A major alarm is present. · Flashing red (1 Hz)—A critical alarm is present. |
Remote management |
· All beads flash white (1 Hz)—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server. · Some beads flash white (1 Hz)—HDM is restarting. |
Ports
Table 5 Ports on the front panel
Port |
Type |
Description |
VGA connector |
DB-15 |
Connects a display terminal, such as a monitor or KVM device. |
USB connector |
USB 3.0 |
Connects the following devices: · USB flash drive. · USB keyboard or mouse. · USB optical drive for operating system installation. |
HDM dedicated management connector |
Type-C |
Connects a Type-C to USB adapter cable, which connects to a USB Wi-Fi adapter or USB drive. |
Rear panel
Rear panel view
Figure 6 shows the rear panel view.
Figure 6 Rear panel components
Table 6 Rear panel description
Item |
Description |
1 |
PCIe riser bay 1: PCIe slots 1 through 6 |
|
2 |
PCIe riser bay 2: PCIe slots 7 through 12 |
|
3 |
PCIe riser bay 3: PCIe slots 13 through 18 |
|
4 |
Power supply 4 |
|
5 |
Power supply 3 |
|
6 |
VGA connector |
|
7 |
HDM dedicated network port (1Gbps, RJ-45, default IP address 192.168.1.2/24) |
|
8 |
OCP 3.0 network adapter (in slot 19)(optional) |
|
9 |
Two USB 3.0 connectors |
|
10 |
BIOS serial port |
|
11 |
Power supply 2 |
|
12 |
Power supply 1 |
|
13 |
Serial label pull tab |
|
LEDs
Figure 7 shows the rear panel LEDs. Table 7 describes the status of the rear panel LEDs.
(1) Power supply LED for power supply 1 |
(2) Power supply LED for power supply 2 |
(3) ATTN BUTTON LED |
(4) OCP network adapter power LED |
(5) UID LED |
(6) Link LED of the Ethernet port |
(7) Activity LED of the Ethernet port |
(8) Power supply LED for power supply 3 |
(9) Power supply LED for power supply 4 |
Table 7 LEDs on the rear panel
LED |
Status |
Power supply LED |
· Steady green—The power supply is operating correctly. · Flashing green (1 Hz)—Power is being input correctly but the system is not powered on. · Flashing green (0.33 Hz)—The power supply is in standby state and does not output power. · Flashing green (2 Hz)—The power supply is updating its firmware. · Steady amber—Either of the following conditions exists: ¡ The power supply is faulty. ¡ The power supply does not have power input, but another power supply has correct power input. · Flashing amber (1 Hz)—An alarm has occurred on the power supply. · Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown. |
ATTN BUTTON LED and OCP network adapter power LED |
For more information, see Table 8. |
UID LED |
· Steady blue—UID LED is activated. The UID LED can be activated by using the following methods: ¡ Press the UID button LED. ¡ Enable UID LED from HDM. · Flashing blue: ¡ 1 Hz—The firmware is being upgraded or the system is being managed from HDM. Do not power off the server. ¡ 4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds. · Off—UID LED is not activated. |
Link LED of the Ethernet port |
· Steady green—A link is present on the port. · Off—No link is present on the port. |
Activity LED of the Ethernet port |
· Flashing green (1 Hz)—The port is receiving or sending data. · Off—The port is not receiving or sending data. |
Table 8 OCP network adapter LED description
ATTN BUTTON LED |
POWER LED |
Description |
Steady amber |
Off |
One of the following conditions exists: · An exception has occurred on the OCP network adapter. · The OCP network adapter is not installed correctly. · The OCP network adapter is not installed. |
Flashing amber (1 Hz) |
Off |
One of the following conditions exists: · The server is in standby state. · The server is starting up. · The server is operating and the OCP adapter and OCP network adapter have been installed correctly. |
Off |
Flashing green (1.5 Hz) |
The OCP network adapter is being powered on or powered off. |
Off |
Steady green |
The OCP network adapter is operating correctly. |
Off |
Off |
The OCP network adapter has been powered off. |
Ports
Table 9 Ports on the rear panel
Port |
Type |
Description |
VGA connector |
DB-15 |
Connects a display terminal, such as a monitor or KVM device. |
BIOS serial port |
RJ-45 |
The BIOS serial port is used for the following purposes: · Logging in to the server when the remote network connection to the server has failed. · Establishing a GSM modem or encryption lock connection. |
USB connector |
USB 3.0 |
Connects the following devices: · USB flash drive. · USB keyboard or mouse. · USB optical drive for operating system installation. |
HDM dedicated network port |
RJ-45 |
Establishes a network connection to manage HDM from its Web interface. |
Power receptacle |
Standard single-phase |
Connects the power supply to the power source. |
System board
System board components
Figure 8 shows the system board layout.
Figure 8 System board components
Table 10 System board components
Item |
Description |
1 |
Dual SD card extended module connector |
2 |
Network adapter NCSI function connector |
3 |
PCIe riser connector 1 (for processor 1 and processor 3) |
4 |
Mini-SAS-HD port (×8 SATA)(SATA PORT) |
5 |
NVMe VROC module connector |
6 |
Rear drive backplane power connector 7 (PWR7) |
7 |
Rear drive backplane AUX connector 7 (AUX7) |
8 |
Front I/O connector |
9 |
SATA M.2 connector 1 (SATA M.2 1) |
10 |
SATA M.2 connector 2 (SATA M.2 2) |
11 |
LCD smart management module connector (DIAGLCD) |
12 |
Front drive backplane power connector 6 (PWR6) |
13 |
Front drive backplane power connector 3 (PWR3) |
14 |
Front drive backplane AUX connector 6 (AUX6) |
15 |
Front drive backplane AUX connector 3 (AUX3) |
16 |
Front drive backplane power connector 5 (PWR5) |
17 |
Front drive backplane power connector 2 (PWR2) |
18 |
Front drive backplane AUX connector 5 (AUX5) |
19 |
Front drive backplane AUX connector 2 (AUX2) |
20 |
Front drive backplane power connector 4 (PWR4) |
21 |
Front drive backplane power connector 1 (PWR1) |
22 |
Front drive backplane AUX connector 4 (AUX4) |
23 |
Front drive backplane AUX connector 1 (AUX1) |
24 |
Shared connector for the chassis-open alarm module and the front VGA and USB 3.0 cable |
25 |
System battery |
26 |
PCIe riser connector 3 (for processor 4) |
27 |
Two USB 3.0 connectors |
28 |
PCIe riser connector 2 (for processor 2) |
29 |
TPM/TCM connector |
30 |
OCP adapter connector |
X |
System maintenance switch |
System maintenance switch
Figure 9 shows the system maintenance switch. Table 11 describes how to use the maintenance switch.
Figure 9 System maintenance switch
Table 11 System maintenance switch description
Item |
Description |
Remarks |
1 |
· Off (default)—HDM login requires the username and password of a valid HDM user account. · On—HDM login requires the default username and password. |
For security purposes, turn off the switch after you complete tasks with the default username and password as a best practice. |
5 |
· Off (default)—Normal server startup. · On—Restores the default BIOS settings. |
To restore the default BIOS settings, turn on and then turn off the switch. The server starts up with the default BIOS settings at the next startup. The server cannot start up when the switch is turned on. To avoid service data loss, stop running services and power off the server before turning on the switch. |
6 |
· Off (default)—Normal server startup. · On—Clears all passwords from the BIOS at server startup. |
If this switch is on, the server will clear all the passwords at each startup. Make sure you turn off the switch before the next server startup if you do not need to clear all the passwords. |
2, 3, 4, 7, and 8 |
Reserved for future use. |
N/A |
Processor mezzanine board components
Figure 10 shows the processor mezzanine board layout.
Figure 10 Processor mezzanine board components
Table 12 Processor mezzanine board components
Item |
Description |
1 |
PCIe riser connector 3 (for processor 4) |
2 |
SlimSAS connector B3/B4 (PCIe3.0 ×8, for processor 3)(NVMe-B3/B4) |
3 |
SlimSAS connector B1/B2 (PCIe3.0 ×8, for processor 3)(NVMe-B1/B2) |
4 |
SlimSAS connector A1/A2 (PCIe3.0 ×8, for processor 3)(NVMe-A1/A2) |
5 |
SlimSAS connector A3/A4 (PCIe3.0 ×8, for processor 3)(NVMe-A3/A4) |
PCIe3.0 x8 description: · PCIe3.0: Third-generation signal speed. · x8: Bus bandwidth. |
DIMM slots
The system board and the processor mezzanine board each provide six DIMM channels per processor (12 channels per board), as shown in Figure 11 and Figure 12, respectively. Each channel contains two DIMM slots.
Figure 11 System board DIMM slot layout
Figure 12 Processor mezzanine board DIMM slot layout
Appendix B Component specifications
For components compatible with the server and detailed component information, visit the query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.
About component model names
The model name of a hardware option in this document might differ slightly from its model name label.
A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR4-3200-16G-2Rx8-R memory model represents memory module labels including UN-DDR4-3200-16G-2Rx8-R, UN-DDR4-3200-16G-2Rx8-R-F, and UN-DDR4-3200-16G-2Rx8-R-S, which have different prefixes and suffixes.
DIMMs
The server provides 6 DIMM channels per processor, 24 channels in total. Each DIMM channel has two DIMM slots. For the physical layout of DIMM slots, see "DIMM slots."
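With four processors installed, this works out to 4 processors × 6 channels × 2 slots = 48 DIMM slots, matching the maximum of 48 DIMMs listed in the technical specifications.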
DRAM DIMM rank classification label
|
NOTE: For the label description, functions, and advantages of PMem 200, see H3C Servers PMem 200 User Guide. |
A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.
To determine the rank classification of a DRAM DIMM, use the label attached to the DIMM, as shown in Figure 13.
Figure 13 DRAM DIMM rank classification label
Table 13 DIMM rank classification label description
Callout |
Description |
Remarks |
1 |
Capacity |
Options include: · 8GB. · 16GB. · 32GB. |
2 |
Number of ranks |
Options include: · 1R—One rank (Single-Rank). · 2R—Two ranks (Dual-Rank). A 2R DIMM is equivalent to two 1R DIMMs. · 4R—Four ranks (Quad-Rank). A 4R DIMM is equivalent to two 2R DIMMs. · 8R—Eight ranks (8-Rank). An 8R DIMM is equivalent to two 4R DIMMs. |
3 |
Data width |
Options include: · ×4—4 bits. · ×8—8 bits. |
4 |
DIMM generation |
Only DDR4 is supported. |
5 |
Data rate |
Options include: · 2133P—2133 MT/s. · 2400T—2400 MT/s. · 2666V—2666 MT/s. · 2933Y—2933 MT/s. |
6 |
DIMM type |
Options include: · L—LRDIMM. · R—RDIMM. |
HDDs and SSDs
Drive numbering
For the relationship between the physical drive numbers and their numbers displayed in HDM or the BIOS, see H3C UniServer R6900 G5 Server Drive Slot Number Mapping Matrixes.
Figure 14 Drive numbering at the server front
Drive LEDs
The server supports SAS, SATA, and NVMe drives, of which SAS and SATA drives support hot swapping and NVMe drives support hot insertion and managed hot removal. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.
For more information about OSs that support hot insertion and managed hot removal of NVMe drives, visit the OS compatibility query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.
Figure 15 shows the location of the LEDs on a drive.
(1) Fault/UID LED |
(2) Present/Active LED |
To identify the status of a SAS or SATA drive, use Table 14. To identify the status of an NVMe drive, use Table 15.
Table 14 SAS/SATA drive LED description
Fault/UID LED status |
Present/Active LED status |
Description |
Flashing amber (0.5 Hz) |
Steady green/Flashing green (4.0 Hz) |
A drive failure is predicted. As a best practice, replace the drive before it fails. |
Steady amber |
Steady green/Flashing green (4.0 Hz) |
The drive is faulty. Replace the drive immediately. |
Steady blue |
Steady green/Flashing green (4.0 Hz) |
The drive is operating correctly and is selected by the RAID controller. |
Off |
Flashing green (4.0 Hz) |
The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive. |
Off |
Steady green |
The drive is present but no data is being read or written to the drive. |
Off |
Off |
The drive is not securely installed. |
Table 15 NVMe drive LED description
Fault/UID LED status |
Present/Active LED status |
Description |
Flashing amber (0.5 Hz) |
Off |
The managed hot removal process is completed and the drive is ready for removal. |
Flashing amber (4 Hz) |
Off |
The drive is in hot insertion process. |
Steady amber |
Steady green/Flashing green (4.0 Hz) |
The drive is faulty. Replace the drive immediately. |
Steady blue |
Steady green/Flashing green (4.0 Hz) |
The drive is operating correctly and selected by the RAID controller. |
Off |
Flashing green (4.0 Hz) |
The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive. |
Off |
Steady green |
The drive is present but no data is being read or written to the drive. |
Off |
Off |
The drive is not securely installed. |
Drive configurations
The server supports multiple drive configurations. For more information about drive configurations and their required storage controller and riser cards, see H3C UniServer R6900 G5 Server Drive Configurations and Cabling Guide.
Drive backplanes
The server supports the following types of drive backplanes:
· SAS/SATA drive backplanes—Support only SAS/SATA drives in any drive slots.
· UniBay drive backplanes—Support both SAS/SATA and NVMe drives in any drive slots. You must connect both SAS/SATA and NVMe data cables. The number of supported drives varies by drive cabling.
· Drive backplane (X SAS/SATA+Y UniBay)—Support SAS/SATA drives in any drive slots and support NVMe drives in specific drive slots. You must connect both SAS/SATA and NVMe data cables. The number of supported drives varies by drive cabling.
¡ X: Number of slots supporting SAS/SATA drives.
¡ Y: Number of slots supporting both SAS/SATA drives and NVMe drives.
8SFF SAS/SATA drive backplane
An 8SFF SAS/SATA drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA drives.
Figure 16 8SFF SAS/SATA drive backplane
(1) x8 Mini-SAS-HD connector (SAS PORT 1) |
(2) AUX connector (AUX 1) |
(3) Power connector (PWR 1) |
|
8SFF UniBay drive backplane
An 8SFF UniBay drive backplane can be installed at the server front to support eight 2.5-inch SAS/SATA/NVMe drives.
Figure 17 8SFF UniBay drive backplane
(1) x8 Mini-SAS-HD connector (SAS PORT) |
(2) AUX connector (AUX) |
(3) SlimSAS connector A1/A2 (PCIe 3.0 x8)(NVMe A1/A2) |
(4) Power connector (PWR) |
(5) SlimSAS connector A3/A4 (PCIe 3.0 x8)(NVMe A3/A4) |
|
(6) SlimSAS connector B1/B2 (PCIe 3.0 x8)(NVMe B1/B2) |
|
(7) SlimSAS connector B3/B4 (PCIe 3.0 x8)(NVMe B3/B4) |
|
The description for PCIe3.0 x8 is as follows: · PCIe3.0: Third-generation signal speed. · x8: Bus bandwidth. |
25SFF drive backplane 1 (17SAS/SATA+8UniBay)
A PCA-BP-25SFF-2U-G5 drive backplane can be installed at the server front to support 25 2.5-inch drives, including 17 SAS/SATA drives and 8 SAS/SATA/NVMe drives. The backplane is embedded with an Expander chip, which allows it to manage 25 SAS/SATA drives through an x8 Mini-SAS-HD port.
The backplane also provides three downlink connectors to connect to other backplanes and manage more drives.
Figure 18 25SFF drive backplane
(1) x4 Mini-SAS-HD downlink connector 3 (SAS EXP 3) |
|
(2) x8 Mini-SAS-HD uplink connector (SAS PORT)(controls all SAS/SATA drives attached to the backplane and backplanes connected to the downlink connectors) |
|
(3) Power connector 2 (PWR2) |
|
(4) Power connector 1 (PWR1) |
(5) AUX connector (AUX) |
(6) x8 Mini-SAS-HD downlink connector 2 (SAS EXP 2) |
|
(7) x3 Mini-SAS-HD downlink connector 1 (SAS EXP 1) |
|
(8) SlimSAS connector A1/A2 (PCIe 3.0 x8)(NVMe-A1/A2)(supports NVMe drives in slots 17 and 18 or slots 42 and 43) |
|
(9) SlimSAS connector A3/A4 (PCIe 3.0 x8)(NVMe-A3/A4)(supports NVMe drives in slots 19 and 20 or slots 44 and 45) |
|
(10) SlimSAS connector B1/B2 (PCIe 3.0 x8)(NVMe-B1/B2)(supports NVMe drives in slots 21 and 22 or slots 46 and 47) |
|
(11) Power connector 3 (PWR3) |
|
(12) SlimSAS connector B3/B4 (PCIe 3.0 x8)(NVMe-B3/B4)(supports NVMe drives in slots 23 and 24 or slots 48 and 49) |
|
PCIe3.0 x8 description: · PCIe3.0: Third-generation signal speed. · x8: Bus bandwidth. For more information about drive numbering, see Figure 14. |
25SFF drive backplane 2 (17SAS/SATA+8UniBay)
A PCA-BP-25SFF-2U-G5-1 drive backplane can be installed at the server front to support 25 2.5-inch drives, including 17 SAS/SATA drives and 8 SAS/SATA/NVMe drives. The backplane is embedded with an Expander chip, which allows it to manage 25 SAS/SATA drives through an x8 Mini-SAS-HD port.
The backplane also provides a downlink connector to connect to other backplanes and manage more drives.
Figure 19 25SFF drive backplane
(1) x8 Mini-SAS-HD uplink connector (SAS PORT)(controls all SAS/SATA drives attached to the backplane and backplanes connected to the downlink connector) |
|
(2) Power connector 2 (PWR2) |
(3) Power connector 1 (PWR1) |
(4) AUX connector (AUX) |
(5) x3 Mini-SAS-HD downlink connector (SAS EXP 1) |
(6) SlimSAS connector A1/A2 (PCIe 4.0 x8)(NVMe-A1/A2)(supports NVMe drives in slots 17 and 18 or slots 42 and 43) |
|
(7) SlimSAS connector A3/A4 (PCIe 4.0 x8)(NVMe-A3/A4)(supports NVMe drives in slots 19 and 20 or slots 44 and 45) |
|
(8) SlimSAS connector B1/B2 (PCIe 4.0 x8)(NVMe-B1/B2)(supports NVMe drives in slots 21 and 22 or slots 46 and 47) |
|
(9) Power connector 3 (PWR3) |
|
(10) SlimSAS connector B3/B4 (PCIe 4.0 x8)(NVMe-B3/B4)(supports NVMe drives in slots 23 and 24 or slots 48 and 49) |
|
PCIe4.0 x8 description: · PCIe4.0: Fourth-generation signal speed. · x8: Bus bandwidth. For more information about drive numbering, see Figure 14. |
LCD smart management module
An LCD smart management module displays basic server information, operating status, and fault information, and provides diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the LCD module in conjunction with the event logs generated in HDM.
Figure 20 LCD smart management module
Table 16 LCD smart management module description
No. |
Item |
Description |
1 |
Mini-USB connector |
Used for upgrading the firmware of the LCD module. |
2 |
LCD module cable |
Connects the LCD module to the system board of the server. For information about the LCD smart management module connector on the system board, see "System board components." |
3 |
LCD module shell |
Protects and secures the LCD screen. |
4 |
LCD screen |
Displays basic server information, operating status, and fault information. |
Fan modules
The server supports four hot-swappable fan modules. Each fan module includes two fans, and each fan includes two rotors. The fan rotors support N+1 redundancy. Figure 21 shows the layout of the fan modules in the chassis.
The server can adjust the fan rotation speed based on the server temperature to provide optimal performance with balanced ventilation and noise.
During system POST and operation, the server will be gracefully powered off through HDM if the temperature detected by any sensor in the server reaches the critical threshold. The server will be powered off directly if the temperature of any key components such as processors exceeds the upper threshold. For more information about the thresholds and detected temperatures, access the HDM Web interface and see HDM online help.
Riser cards
To expand the server with PCIe modules, install riser cards on the PCIe riser connectors.
Riser card guidelines
Each PCIe slot in a riser card can supply a maximum of 75 W power to the PCIe module. You must connect a separate power cord to the PCIe module if it requires more than 75 W power.
If a processor is faulty or absent, the corresponding PCIe slots are unavailable.
RC-3FHHL-2U-SW-G5
Item |
Specifications |
PCIe riser connector |
Connector 1 or 2 |
PCIe slots |
· Connector 1: ¡ Slot 2: PCIe3.0 ×16 for processor 1 ¡ Slot 4: PCIe3.0 ×16 for processor 3 ¡ Slot 6: PCIe3.0 ×16 for processor 1 · Connector 2: ¡ Slot 8: PCIe3.0 ×16 for processor 2 ¡ Slot 10: PCIe3.0 ×16 for processor 2 ¡ Slot 12: PCIe3.0 ×16 for processor 2 NOTE: The numbers in parentheses represent supported link widths. |
Form factors of PCIe modules |
FHHL |
Maximum power supplied per PCIe slot |
75 W |
Figure 22 RC-3FHHL-2U-SW-G5 riser card
(1) PCIe3.0 x16 slot 6/12 |
(2) PCIe3.0 x16 slot 4/10 |
(3) GPU module power connector |
(4) PCIe3.0 x16 slot 2/8 |
RC-6FHHL-2U-SW-G5
Item |
Specifications |
PCIe riser connector |
Connector 1 or 2 |
PCIe slots |
· Connector 1: ¡ Slot 1: PCIe3.0 ×8 for processor 1 ¡ Slot 2: PCIe3.0 ×8 for processor 1 ¡ Slot 3: PCIe3.0 ×8 for processor 3 ¡ Slot 4: PCIe3.0 ×8 for processor 3 ¡ Slot 5: PCIe3.0 ×8 for processor 1 ¡ Slot 6: PCIe3.0 ×8 for processor 1 · Connector 2: ¡ Slot 7: PCIe3.0 ×8 for processor 2 ¡ Slot 8: PCIe3.0 ×8 for processor 2 ¡ Slot 9: PCIe3.0 ×8 for processor 2 ¡ Slot 10: PCIe3.0 ×8 for processor 2 ¡ Slot 11: PCIe3.0 ×8 for processor 2 ¡ Slot 12: PCIe3.0 ×8 for processor 2 NOTE: The numbers in parentheses represent supported link widths. |
Form factors of PCIe modules |
FHHL |
Maximum power supplied per PCIe slot |
75 W |
Figure 23 RC-6FHHL-2U-SW-G5 riser card
(1) PCIe3.0 x8 slot 6/12 |
(2) PCIe3.0 x8 slot 5/11 |
(3) PCIe3.0 x8 slot 4/10 |
(4) GPU module power connector 1 |
(5) GPU module power connector 2 |
(6) PCIe3.0 x8 slot 3/9 |
(7) PCIe3.0 x8 slot 2/8 |
(8) PCIe3.0 x8 slot 1/7 |
RC-3FHHL-2U-SW-G5-1
Item |
Specifications |
PCIe riser connector |
Connector 3 |
PCIe slots |
· Slot 14: PCIe3.0 ×16 for processor 4 · Slot 15: PCIe3.0 ×16 for processor 4 · Slot 16: PCIe3.0 ×16 for processor 4 NOTE: The numbers in parentheses represent supported link widths. |
Form factors of PCIe modules |
FHHL |
Maximum power supplied per PCIe slot |
75 W |
Figure 24 RC-3FHHL-2U-SW-G5-1 riser card
(1) PCIe3.0 x16 slot 16 |
(2) PCIe3.0 x16 slot 15 |
(3) GPU module power connector |
(4) PCIe3.0 x16 slot 14 |
RC-6FHHL-2U-SW-G5-1
Item |
Specifications |
PCIe riser connector |
Connector 3 |
PCIe slots |
Slots 13 through 18: PCIe3.0 ×8 for processor 4 |
Form factors of PCIe modules |
FHHL |
Maximum power supplied per PCIe slot |
75 W |
Figure 25 RC-6FHHL-2U-SW-G5-1 riser card
(1) PCIe3.0 x8 slot 18 |
(2) PCIe3.0 x8 slot 17 |
(3) PCIe3.0 x8 slot 16 |
(4) GPU module power connector 2 |
(5) GPU module power connector 1 |
(6) PCIe3.0 x8 slot 15 |
(7) PCIe3.0 x8 slot 14 |
(8) PCIe3.0 x8 slot 13 |
PCIe modules
Typically, the PCIe modules are available in the following standard form factors:
· LP—Low profile.
· FHHL—Full height and half length.
· FHFL—Full height and full length.
· HHHL—Half height and half length.
· HHFL—Half height and full length.
The following PCIe modules require PCIe I/O resources: Storage controllers, FC HBAs, and GPU modules. Make sure the number of such PCIe modules installed does not exceed 11.
Storage controllers
The server supports the following types of storage controllers:
· Embedded VROC controller—Embedded in the server and does not require installation.
· Standard storage controller—Comes in a standard PCIe form factor and typically requires a riser card for installation.
For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages. This module provides a flash card and a supercapacitor. When a system power failure occurs, the supercapacitor provides power for a minimum of 20 seconds. During this interval, the storage controller can transfer data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data. If the storage controller contains a built-in flash card, you can order only a supercapacitor.
Embedded VROC controller
Item |
Specifications |
Type |
Embedded in PCH of the system board |
Number of internal ports |
10 internal SAS ports (compatible with SATA) |
Connectors |
· One onboard ×8 Mini-SAS-HD connector · Two onboard ×1 SATA connectors |
Drive interface |
6 Gbps SATA 3.0 Supports drive hot swapping |
RAID levels |
0, 1, 5, 10 |
Built-in cache memory |
N/A |
Built-in flash |
N/A |
Power fail safeguard module |
Not supported |
Supercapacitor connector |
N/A |
Firmware upgrade |
Upgrade with the BIOS |
Mezzanine and standard storage controllers
For more information, visit the query tool at http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/.
NVMe VROC modules
Model |
RAID levels |
Compatible NVMe SSDs |
NVMe-VROC-Key-S |
0, 1, 10 |
All NVMe drives |
NVMe-VROC-Key-P |
0, 1, 5, 10 |
All NVMe drives |
NVMe-VROC-Key-I |
0, 1, 5, 10 |
Intel NVMe drives |
B/D/F information
Viewing B/D/F information
Table 17 lists the default Bus/Device/Function numbers (B/D/F) used by the server when the following conditions are all met:
· All processor sockets are installed with processors.
· All PCIe riser connectors are installed with riser cards.
· All PCIe slots in riser cards are installed with PCIe modules.
· An OCP network adapter is installed in slot 19.
B/D/F information in Table 17 might change if any of the above conditions is not met or a PCIe module with a PCIe bridge is installed.
For more information about riser cards, see "Riser cards." For more information about the location of slot 19, see "Rear panel view."
For information about how to obtain B/D/F information, see "Obtaining B/D/F information."
Table 17 PCIe modules and the corresponding Bus/Device/Function numbers
Riser card model |
PCIe riser connector |
PCIe slot |
Processor |
Port number |
Root port (B/D/F) |
End point (B/D/F) |
RC-3FHHL-2U-SW-G5 |
PCIe riser connector 1 |
slot 2 |
CPU 1 |
Port 1A |
15:00.0 |
16:00.0 |
slot 4 |
CPU 3 |
Port 1A |
23:00.0 |
24:00.0 |
||
slot 6 |
CPU 1 |
Port 3A |
32:00.0 |
33:00.0 |
||
PCIe riser connector 2 |
slot 8 |
CPU 2 |
Port 2A |
57:00.0 |
58:00.0 |
|
slot 10 |
CPU 2 |
Port 1A |
43:00.0 |
44:00.0 |
||
slot 12 |
CPU 2 |
Port 3A |
6c:00.0 |
6d:00.0 |
||
RC-6FHHL-2U-SW-G5 |
PCIe riser connector 1 |
slot 1 |
CPU 1 |
Port 1C |
15:02.0 |
17:00.0 |
slot 2 |
CPU 1 |
Port 1A |
15:00.0 |
16:00.0 |
||
slot 3 |
CPU 3 |
Port 1C |
23:02.0 |
25:00.0 |
||
slot 4 |
CPU 3 |
Port 1A |
23:00.0 |
24:00.0 |
||
slot 5 |
CPU 1 |
Port 3A |
32:00.0 |
33:00.0 |
||
slot 6 |
CPU 1 |
Port 3C |
32:02.0 |
34:00.0 |
||
PCIe riser connector 2 |
slot 7 |
CPU 2 |
Port 2C |
57:02.0 |
59:00.0 |
|
slot 8 |
CPU 2 |
Port 2A |
57:00.0 |
58:00.0 |
||
slot 9 |
CPU 2 |
Port 1C |
43:02.0 |
45:00.0 |
||
slot 10 |
CPU 2 |
Port 1A |
43:00.0 |
44:00.0 |
||
slot 11 |
CPU 2 |
Port 3A |
6c:00.0 |
6d:00.0 |
||
slot 12 |
CPU 2 |
Port 3C |
6c:02.0 |
6e:00.0 |
||
RC-3FHHL-2U-SW-G5-1 |
PCIe riser connector 3 |
slot 14 |
CPU 4 |
Port 1A |
c3:00.0 |
c4:00.0 |
slot 15 |
CPU 4 |
Port 3A |
ec:00.0 |
ed:00.0 |
||
slot 16 |
CPU 4 |
Port 2A |
d7:00.0 |
d8:00.0 |
||
RC-6FHHL-2U-SW-G5-1 |
PCIe riser connector 3 |
slot 13 |
CPU 4 |
Port 1C |
c3:02.0 |
c5:00.0 |
slot 14 |
CPU 4 |
Port 1A |
c3:00.0 |
c4:00.0 |
||
slot 15 |
CPU 4 |
Port 2A |
d7:00.0 |
d8:00.0 |
||
slot 16 |
CPU 4 |
Port 2C |
d7:02.0 |
d9:00.0 |
||
slot 17 |
CPU 4 |
Port 3C |
ec:02.0 |
ee:00.0 |
||
slot 18 |
CPU 4 |
Port 3A |
ec:00.0 |
ed:00.0 |
||
N/A |
OCP network adapter connector |
slot 19 |
CPU 1 |
Port 2A |
23:00.0 |
24:00.0 |
|
NOTE: · The root port (B/D/F) indicates the bus number of the PCIe root node in the processor. · The end point (B/D/F) indicates the bus number of a PCIe module in the operating system. |
Obtaining B/D/F information
The B/D/F information of the server varies by PCIe module configuration. You can obtain B/D/F information by using one of the following methods:
· BIOS log—Search the dumpiio keyword in the BIOS log.
· UEFI shell—Execute the pci command. For information about how to execute the command, execute the help pci command.
· Operating system—The obtaining method varies by OS.
¡ For Linux, execute the lspci command.
If Linux does not support the lspci command by default, you must execute the yum command to install the pciutils package (see the example after this list).
¡ For Windows, install the pciutils package, and then execute the lspci command.
¡ For VMware, execute the lspci command.
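The following is a minimal Linux example of the lspci method. It is a sketch only: it assumes the pciutils package is installed and uses bus number 23:00.0 from Table 17 purely for illustration.
# List all PCIe devices with their Bus/Device/Function numbers.
lspci
# Show verbose details for a single device, for example the module at bus number 23:00.0.
lspci -s 23:00.0 -v
# If lspci is not available, install the pciutils package on a yum-based distribution.
yum install -y pciutils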
Appendix C Hot swapping and managed hot removal of NVMe drives
Before you begin
Before replacing an NVMe drive when the server is operating, perform the following tasks:
· To avoid data loss, stop reading data from or writing data to the NVMe drive and back up data.
· Make sure VMD is enabled. For more information about VMD, see the BIOS user guide for the server. To perform a managed hot removal of the NVMe drive when VMD is disabled, contact H3C Support.
· Go to http://www.h3c.com/cn/Service/Document_Software/Document_Center/Server/ to get information about the hot swapping methods and operating systems supported by the NVMe drive.
· Make sure the BIOS version is 5.06 or higher and the HDM version is 2.13 or higher.
· Update the CPLD of the system board and drive backplane to the latest version.
· Make sure the number of member drives to be removed from a RAID setup does not exceed the maximum allowed number of failed drives as described in Table 18.
Table 18 Number of hot-swappable drives from a RAID setup
RAID level |
Required drives |
Max. failed drives |
RAID 0 |
≥ 2 |
0 |
RAID 1 |
2 |
1 |
RAID 5 |
≥ 3 |
1 |
RAID 10 |
4 |
2 NOTE: Make sure the two failed drives are in different RAID 1 setups. |
Removing an NVMe drive
Performing a hot removal in Windows
Prerequisites
Before replacing an NVMe drive in Windows, make sure the Intel® VROC driver version is consistent with the VROC PreOS version in the BIOS.
To view the VROC PreOS version in the BIOS:
1. After the server is powered on or rebooted, press Delete or Esc at the prompt to enter the BIOS setup page.
For some servers, you can press Delete or F2 at the prompt to enter the BIOS setup page.
The BIOS setup page varies by the BIOS version. The BIOS setup page in Figure 26 is for illustration only.
2. Select Advanced > Intel(R) Virtual RAID on CPU, as shown in Figure 27.
The Intel(R) Virtual RAID on CPU option is displayed on the advanced page only when the VMD controller has been enabled. For information about enabling the VMD controller, see H3C Servers Storage Controller User Guide.
3. View the VROC PreOS version (the first two digits) on the NVMe RAID overview page. As shown in Figure 28, the VROC PreOS version is 7.0.
Figure 28 NVMe RAID overview page
Procedure
1. Run Intel® Virtual RAID on CPU to view NVMe drives.
IMPORTANT: Install and use Intel® Virtual RAID on CPU according to the guide provided with the tool kit. To obtain Intel® Virtual RAID on CPU, use one of the following methods: · Go to https://platformsw.intel.com/KitSearch.aspx to download the software. · Contact Intel Support. |
Figure 29 Viewing NVMe drives
2. Select the NVMe drive to be removed from the Devices list and identify the drive location.
This procedure removes the NVMe drive from Controller 0, Port 1.
Figure 30 Removing the NVMe drive
3. Stop the services on the NVMe drive.
4. (Optional.) If the NVMe drive is in a RAID setup configured with hot spares and is faulty, view the RAID rebuild status.
¡ If the RAID rebuild is complete and a hot spare has become a member drive in the RAID, go to step 5.
Figure 31 RAID rebuild completed
¡ If the RAID rebuild is in progress, wait for the RAID rebuild to complete.
CAUTION: Do not perform any operations on the NVMe drive when the RAID rebuild is in progress. |
Figure 32 RAID rebuild in progress
5. Click Activate LED for the drive. The Fault/UID LED on the physical drive will turn steady blue for 10 seconds and then turn off automatically. The Present/Active LED will be steady green.
Figure 33 Activating the LED for the NVMe drive
Performing a hot removal in Linux
1. Execute the lsblk | grep nvme command to identify the name of the NVMe drive to be removed.
This procedure uses drive nvme2n1 as an example.
Figure 34 Identifying the name of the NVMe drive to be removed
2. Stop the services on the NVMe drive.
a. Execute the df -h command to identify the mounting status of the NVMe drive. As shown in Figure 35, the drive has been mounted.
Figure 35 Viewing the mounting status of the NVMe drive
b. Execute the umount /dev/nvme2n1 command to unmount the drive. As shown in Figure 36, the drive has been unmounted.
Figure 36 Unmounting the NVMe drive
4. (Optional.) If the NVMe drive is in a RAID setup configured with hot spares and is faulty, execute the cat /proc/mdstat command to view the RAID rebuild status.
¡ If the RAID rebuild is complete and a hot spare has become a member drive in the RAID, go to step 5.
Figure 37 RAID rebuild completed
¡ If the RAID rebuild is in progress, wait for the RAID rebuild to complete.
CAUTION: Do not perform any operations on the NVMe drive when the RAID rebuild is in progress. |
Figure 38 RAID rebuild in progress
5. (Optional.) Remove the drive from the container. Skip this step for an NVMe drive that is not in a RAID setup.
a. Execute the mdadm -r /dev/md/imsm0 /dev/nvme2n1 command to remove the drive from the container, as shown in Figure 39.
Figure 39 Removing the NVMe drive from the container
b. Execute the cat /proc/mdstat command to check whether the drive has been removed from the container. As shown in Figure 40, the drive has been removed from the container.
Figure 40 Verifying the drive removal status
6. Identify the location of the NVMe drive on the server.
a. Execute the find /sys/devices -iname nvme2n1 command to identify the bus number of the drive. As shown in Figure 41, the bus number for the drive is 10000:04:00.0.
Figure 41 Identifying the bus number
b. Execute the lspci -vvs 10000:04:00.0 command to identify the PCIe slot number based on the bus number. As shown in Figure 42, the PCIe slot is 123.
Figure 42 Identifying the PCIe slot number
c. Identify the physical slot number of the drive.
# Log in to HDM.
# Select Hardware Summary > NVMe and then identify the physical slot number of the drive based on the PCIe slot number. In this example, the physical slot is Box3-3. For more information about drive slot numbers, see "Front panel view of the server."
Figure 43 Identifying the physical slot number
7. Remove the NVMe drive. For more information about the removal procedure, see "Removing an NVMe drive."
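For reference, the preceding Linux hot removal steps map to the following command sequence. This is a sketch only: it assumes drive nvme2n1, an IMSM container named /dev/md/imsm0, and bus number 10000:04:00.0 as shown in the figures, and must be adapted to your environment.
# Identify the drive and check whether it is mounted.
lsblk | grep nvme
df -h
# Unmount the drive if it is mounted.
umount /dev/nvme2n1
# For a RAID member drive, check the rebuild status and remove the drive from the container.
cat /proc/mdstat
mdadm -r /dev/md/imsm0 /dev/nvme2n1
# Locate the drive: find its bus number, then view its PCIe slot number.
find /sys/devices -iname nvme2n1
lspci -vvs 10000:04:00.0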
Performing a hot removal in VMware
1. Identify the NVMe drive to be removed. As shown in Figure 44, click the Devices tab on the VMware ESXi management GUI.
This procedure uses drive t10.NVMe__INTEL_SSDPE2KE016T8_______BTLN813609NS1P6AGN_00000001 as an example.
Figure 44 Identifying the NVMe drive to be removed
2. Stop the services on the NVMe drive to be removed.
3. Click the drive name to view its mounting status.
¡ If partitions exist, go to step 4 to unmount the drive.
¡ If no partition exists, turn on the LED on the drive. For the detailed procedure, see step 5.
Figure 45 Viewing the mounting status
4. (Optional.) Unmount the NVMe drive.
a. Click the Datastores tab to view the mounted NVMe drives.
Figure 46 Viewing the mounted NVMe drives
b. Click the drive and identify its name. Make sure it is the drive you are to remove.
Figure 47 Viewing the drive name
c. Click Actions and then select Unmount from the dropdown list. In the dialog box that opens, click Yes.
Figure 48 Unmounting the NVMe drive
Figure 49 Confirming the drive removal
d. Click the Datastores tab to view the drive removal status. As shown in Figure 50, the drive capacity is 0 B, indicating that the NVMe drive has been removed successfully.
Figure 50 Viewing the drive removal status
5. Turn on the LED for the NVMe drive to identify the location of the NVMe drive on the server.
a. Install the Intel VMD LED management command line tool on the server. To obtain the Intel VMD LED management command line tool, access https://downloadcenter.intel.com/download/28288/Intel-VMD-ESXi-Tools.
b. Execute the esxcfg-mpath -L command to view the SCSI ID for the NVMe drive. As shown in Figure 51, the VMD adapter for the drive is vmhba2 and the drive number is 1.
Figure 51 Viewing the SCSI ID for the NVMe drive
c. Execute the cd /opt/intel/bin/ command to access the directory where the Intel VMD LED management command line tool resides.
d. Execute the ./intel-vmd-user set-led vmhba2 -d 1 -l identify command to turn on the LED for the drive.
Figure 52 Turning on the LED for the drive
e. Observe the LEDs on the NVMe drive. You can remove the NVMe drive after the Fault/UID LED turns steady blue and the Present/Active LED turns steady green.
6. Remove the NVMe drive. For more information about the removal procedure, see "Removing an NVMe drive."
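For reference, steps 5.b through 5.d map to the following ESXi command sequence. This is a sketch only: the adapter name vmhba2, device number 1, and the tool path /opt/intel/bin/ are taken from the figures and might differ on your server.
# View paths and note the VMD adapter (vmhba2) and device number of the NVMe drive.
esxcfg-mpath -L
# Change to the directory where the Intel VMD LED management tool is installed.
cd /opt/intel/bin/
# Turn on the locate (identify) LED for device 1 behind adapter vmhba2.
./intel-vmd-user set-led vmhba2 -d 1 -l identify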
Performing a managed hot removal in Windows
1. Run Intel® Virtual RAID on CPU to view NVMe drives. For more information, see step 1 in "Performing a hot removal in Windows."
2. Select the NVMe drive to be removed from the Devices list and identify its location on the server. For more information, see step 2 in "Performing a hot removal in Windows."
3. Stop the services on the NVMe drive.
4. (Optional.) If the NVMe drive is in a RAID setup configured with hot spares, view the RAID rebuild status. For more information, see step 4 in "Performing a hot removal in Windows."
5. Click Activate LED to turn on the Fault/UID LED on the drive, as shown by callout 1 in Figure 53. The Fault/UID LED on the physical drive will turn steady blue for 10 seconds and turn off automatically. The Present/Active LED will turn steady green.
6. Click Remove Disk, as shown by callout 2 in Figure 53.
Figure 53 Removing the NVMe drive
7. Observe the LEDs on the NVMe drive. Make sure the Fault/UID LED is steady blue and the Present/Active LED is steady green.
8. Make sure the NVMe drive is removed from the Devices list of Intel® Virtual RAID on CPU.
9. Remove the NVMe drive. For more information about the removal procedure, see "Removing an NVMe drive."
Performing a managed hot removal in Linux
1. Identify the name of the NVMe drive to be removed. For more information, see step 1 in "Performing a hot removal in Linux."
2. Stop the services on the NVMe drive.
3. (Optional.) If the NVMe drive is a pass-through drive, view the mounting status of the drive. If the drive has been mounted, first unmount it. For more information, see step 3 in "Performing a hot removal in Linux."
4. (Optional.) If the NVMe drive is in a RAID setup configured with hot spares, view the RAID rebuild status. For more information, see step 4 in "Performing a hot removal in Linux." Then remove the drive from the container. For more information, see step 5 in "Performing a hot removal in Linux."
5. (Optional.) On a SUSE 15, SUSE 12 SP4, or RHEL 7.6 operating system, the Fault/UID LED on the server remains steady blue after you execute the unmounting command on the system. For easy location of the drive slot when the server runs such an operating system, manually create the ledmon service as described in the following substeps. A minimal example unit file is sketched after these substeps.
a. Execute the vim /usr/lib/systemd/system/ledmon.service command to create a ledmon service file.
Figure 54 Creating a ledmon service file
b. Add settings to the ledmon service file.
Figure 55 Adding settings to the ledmon service file
c. Start the ledmon service.
Figure 56 Starting the ledmon service
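The manual does not list the contents of the ledmon service file. The following is a minimal sketch of what such a systemd unit might look like, assuming the ledmon utility is installed at /usr/sbin/ledmon; adjust it to your distribution before use.
# /usr/lib/systemd/system/ledmon.service (minimal example)
[Unit]
Description=Enclosure LED Utilities

[Service]
Type=forking
ExecStart=/usr/sbin/ledmon

[Install]
WantedBy=multi-user.target

# Reload systemd and start the service.
systemctl daemon-reload
systemctl enable --now ledmon.service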
6. Unmount the NVMe drive and verify the NVMe drive status:
a. Execute the echo 1 > /sys/block/nvme2n1/device/device/remove command to unmount drive nvme2n1 from the operating system.
Figure 57 Unmounting the NVMe drive
b. Execute the lsblk command to view the mounted NVMe drives. Drive nvme2n1 is not in the command output, indicating that it is unmounted successfully.
Figure 58 Verifying the NVMe drive status
7. Observe the LEDs on the NVMe drive. You can remove the NVMe drive after the Fault/UID LED is steady amber and the Present/Active LED is steady green.
8. Remove the NVMe drive. For more information about the removal procedure, see "Removing an NVMe drive."
Installing an NVMe drive
Performing a hot installation in Windows
1. Install an NVMe drive. For more information about the installation procedure, see "Installing an NVMe drive."
2. Observe the LEDs on the NVMe drive. The NVMe drive is present in the slot without any faults if the Present/Active LED is steady green and the Fault/UID LED is off.
3. Run Intel® Virtual RAID on CPU to view the operating status of the NVMe drive.
As shown in Figure 59, the NVMe drive is displayed in the Devices list and the drive properties are consistent with the actual drive specifications, indicating that the NVMe drive is installed successfully.
Figure 59 Verifying the status of the installed NVMe drive in Windows
IMPORTANT: Install NVMe drives one after another. Only after an NVMe drive is installed and recognized by the system can you install another one. |
Performing a hot installation in Linux
1. Install an NVMe drive. For more information about the installation procedure, see "Installing an NVMe drive."
2. Observe the LEDs on the NVMe drive. The NVMe drive is present in the slot without any faults if the Present/Active LED is steady green and the Fault/UID LED is off.
3. View the installation status of the NVMe drive in Linux.
¡ If the NVMe drive is removed by using the hot removal procedure, execute the lspci -vvs command with the bus number of the drive, for example, 10000:04:00.0. As shown in Figure 60, information about the NVMe drive with bus number 10000:04:00.0 is displayed, indicating that the drive is installed successfully.
Figure 60 Viewing the NVMe drive installation status in Linux (1)
¡ If the NVMe drive is removed by using the managed hot removal procedure, execute the lsblk command to view information about the drives. As shown in Figure 61, NVMe drive nvme2n1 is displayed in the command output, indicating that the drive is installed successfully.
Figure 61 Viewing the NVMe drive installation status in Linux (2)
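For reference, the two verification methods above correspond to the following commands, assuming bus number 10000:04:00.0 and drive nvme2n1 as in the figures:
# After a hot removal, verify the new drive by its bus number.
lspci -vvs 10000:04:00.0
# After a managed hot removal, verify that the new drive appears as a block device.
lsblk | grep nvme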
Performing a hot installation in VMware
1. Install an NVMe drive. For more information about the installation procedure, see "Installing an NVMe drive."
2. Observe the LEDs on the NVMe drive. The NVMe drive is present in the slot without any faults when the Present/Active LED is steady green and the Fault/UID LED is off.
3. Execute the esxcfg-mpath -L command to view the status of the installed NVMe drive in VMware.
As shown in Figure 62, drive t10.NVMe__INTEL_SSDPE2KE016T8_______BTLN813609NS1P6AGN_00000001 is displayed in the command output, so the NVMe drive is installed successfully.
Figure 62 Verifying the status of the installed NVMe drive in VMware
Verifying the RAID status of the installed NVMe drive
After an NVMe drive is installed, verify the following items:
· If the removed NVMe drive is a member drive in a RAID setup that offers redundancy, does not have any hot spares, and has RAID rebuild enabled, the storage controller will automatically rebuild the RAID.
¡ In a Linux operating system, execute the cat /proc/mdstat command to view the RAID rebuild status.
Figure 63 RAID rebuild completed
Figure 64 RAID rebuild in progress
¡ In a Windows operating system, run Intel® Virtual RAID on CPU to view the RAID rebuild status.
Figure 65 RAID rebuild completed
Figure 66 RAID rebuild in progress
· If the removed NVMe drive is a pass-through drive, the new NVMe drive functions as a pass-through drive.
· If the removed NVMe drive is a member drive in a RAID setup that does not offer redundancy, the new NVMe drive functions as a pass-through drive. You can add the new NVMe drive to a RAID as needed.
· If the removed NVMe drive is a member drive in a RAID setup that offers redundancy, does not have hot spares, and has RAID rebuild disabled, the new NVMe drive functions as a pass-through drive. You can add the new NVMe drive to a RAID as needed.
· If the removed NVMe drive is a member drive in a RAID setup that offers redundancy and is configured with hot spares, the new NVMe drive functions as a pass-through drive. You can add the new NVMe drive to a RAID as needed.
For more information about RAID, see the storage controller user guide.
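In Linux, a quick way to check the rebuild is the following sketch; the RAID volume name /dev/md126 is an example only and varies by system.
# View overall software RAID status, including rebuild progress.
cat /proc/mdstat
# View the detailed state of a specific RAID volume (example device name).
mdadm --detail /dev/md126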
Appendix D Environment requirements
About environment requirements
The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.
Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance. In a real data center, the server cooling performance might decrease because of adverse external factors, including poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.
General environment requirements
Item | Specifications |
---|---|
Operating temperature | Minimum: 5°C (41°F). Maximum: 45°C (113°F). The maximum temperature depends on the installed hardware options. For more information, see "Operating temperature requirements." |
Storage temperature | –40°C to +70°C (–40°F to +158°F) |
Operating humidity | 8% to 90%, noncondensing |
Storage humidity | 5% to 95%, noncondensing |
Operating altitude | –60 m to +3000 m (–196.85 ft to +9842.52 ft). Above 900 m (2952.76 ft), the allowed maximum temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude (see the worked example after this table). |
Storage altitude | –60 m to +5000 m (–196.85 ft to +16404.20 ft) |
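As a hypothetical worked example of the altitude derating rule above, the following snippet computes the derated maximum operating temperature at 2000 m (6561.68 ft), assuming the general 45°C (113°F) ceiling applies at or below 900 m (2952.76 ft); the altitude and baseline values are illustrative only.

```
# Derate by 0.33°C for every 100 m above 900 m.
awk 'BEGIN { alt = 2000; base = 45;
             d = (alt > 900) ? 0.33 * (alt - 900) / 100 : 0;
             printf "Derated maximum: %.2f C\n", base - d }'   # prints 41.37 C
```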
Operating temperature requirements
General guidelines
When a fan fails, the maximum server operating temperature decreases by 5°C (9°F).
If you install two processors with a TDP of 205 W or higher in the server, the server performance might decrease.
The DPS-1600AB-13 R power supply operates correctly only at temperatures of 30°C (86°F) or lower.
The GPU-V100S-32G module can be installed only in the server that uses 8SFF drive configuration.
With GPU modules installed in the server, do not use drive HDD-2.4T-SAS3-10K-SFF, HDD-2.4T-SAS-12G-10K-SFF, HDD-1.8T-SAS3-10K-SFF, or HDD-1.8T-SAS-12G-10K-SFF. If you do, the drive performance might decrease.
Any drive configuration except for 8SFF drive configuration
Table 19 Operating temperature requirements
Maximum server operating temperature | Hardware options |
---|---|
30°C (86°F) | All hardware options are supported. |
35°C (95°F) | The DPS-1600AB-13 R power supply is not supported. |
40°C (104°F) | The following hardware options are not supported: · Processors with a TDP of more than 205 W. · DCPMMs. · NVMe SSD PCIe accelerator modules. · NVMe drives. · SATA M.2 SSDs. · GPU modules. · DPS-1600AB-13 R power supply. |
8SFF drive configuration
The 8SFF drives are installed in slots 0 to 7. For more information about drive slots, see "Front panel view of the server."
Table 20 Operating temperature requirements
Maximum server operating temperature | Hardware options |
---|---|
30°C (86°F) | All hardware options are supported. |
35°C (95°F) | With a GPU-V100S-32G module installed in the server, only processors with a TDP of 165 W or lower are supported. |
40°C (104°F) | The following hardware options are not supported: · DCPMMs. · NVMe SSD PCIe accelerator modules. · NVMe drives. · SATA M.2 SSDs. · GPU modules. · DPS-1600AB-13 R power supply. |
45°C (113°F) | The following hardware options are not supported: · DCPMMs. · NVMe SSD PCIe accelerator modules. · NVMe drives. · SATA M.2 SSDs. · GPU modules. · DPS-1600AB-13 R power supply. · Processors with a TDP higher than 165 W. |
Appendix E Product recycling
New H3C Technologies Co., Ltd. provides product recycling services to ensure that hardware is recycled at the end of its life. Qualified recycling vendors are contracted by New H3C to process the recycled hardware in an environmentally responsible way.
For product recycling services, contact New H3C at:
· Tel: 400-810-0504
· E-mail: [email protected]
· Website: http://www.h3c.com
Appendix F Glossary
Term | Description |
---|---|
B | |
BIOS | Basic input/output system is non-volatile firmware pre-installed in a ROM chip on the server's management module. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup, and control functionality. |
C | |
CPLD | Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits. |
E | |
Ethernet adapter | An Ethernet adapter, also called a network interface card (NIC), connects the server to the network. |
F | |
FIST | Fast Intelligent Scalable Toolkit is provided by H3C for easy and extensible server management. It guides users through quick server configuration and provides an API for users to develop their own management tools. |
G | |
GPU module | A graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance. |
H | |
HDM | Hardware Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server. |
Hot swapping | A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting system operation. |
K | |
KVM | KVM is a management method that allows remote users to use their local video display, keyboard, and mouse to monitor and control the server. |
N | |
NVMe VROC module | A module that works with Intel VMD to provide RAID capability for the server to virtualize the storage resources of NVMe drives. |
R | |
RAID | Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical drives into a single logical unit to improve storage performance and data security. |
Redundancy | A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails. |
S | |
Security bezel | A locking bezel mounted to the front of a server to prevent unauthorized access to modules such as hard drives. |
U | |
U | A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks. |
UniBay drive backplane | A UniBay drive backplane supports both SAS/SATA and NVMe drives. |
V | |
VMD | Volume Management Device provides hot removal, management, and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability. |
Appendix G Acronyms
Acronym | Full name |
---|---|
B | |
BIOS | Basic Input/Output System |
C | |
CMA | Cable Management Arm |
CPLD | Complex Programmable Logic Device |
D | |
DCPMM | Data Center Persistent Memory Module |
DDR | Double Data Rate |
DIMM | Dual In-Line Memory Module |
DRAM | Dynamic Random Access Memory |
F | |
FIST | Fast Intelligent Scalable Toolkit |
G | |
GPU | Graphics Processing Unit |
H | |
HBA | Host Bus Adapter |
HDD | Hard Disk Drive |
HDM | Hardware Device Management |
I | |
IDC | Internet Data Center |
iFIST | integrated Fast Intelligent Scalable Toolkit |
K | |
KVM | Keyboard, Video, Mouse |
L | |
LRDIMM | Load Reduced Dual Inline Memory Module |
N | |
NCSI | Network Controller Sideband Interface |
NVMe | Non-Volatile Memory Express |
P | |
PCIe | Peripheral Component Interconnect Express |
POST | Power-On Self-Test |
R | |
RDIMM | Registered Dual Inline Memory Module |
S | |
SAS | Serial Attached Small Computer System Interface |
SATA | Serial ATA |
SD | Secure Digital |
SDS | Secure Diagnosis System |
SFF | Small Form Factor |
sLOM | Small form factor Local Area Network on Motherboard |
SSD | Solid State Drive |
T | |
TCM | Trusted Cryptography Module |
TDP | Thermal Design Power |
TPM | Trusted Platform Module |
U | |
UID | Unit Identification |
UPI | Ultra Path Interconnect |
UPS | Uninterruptible Power Supply |
USB | Universal Serial Bus |
V | |
VROC | Virtual RAID on CPU |
VMD | Volume Management Device |