01-AD-Campus 6.2 Solution Unified Platform Deployment Guide

AD-Campus 6.2 Solution

Unified Platform and Components

Deployment Guide


Document version: 5W100-20230221

 

Copyright © 2023 New H3C Technologies Co., Ltd. All rights reserved.

No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of New H3C Technologies Co., Ltd.

Except for the trademarks of New H3C Technologies Co., Ltd., any trademarks that may be mentioned in this document are the property of their respective owners.

The information in this document is subject to change without notice.



Overview

This document describes the deployment process for Unified Platform.

Terms

·     H3Linux—H3C Linux operating system.

·     Matrix—Docker containers-orchestration platform based on Kubernetes. On this platform, you can build Kubernetes clusters, deploy microservices, and implement O&M monitoring of systems, Docker containers, and microservices.

·     Kubernetes (K8s)—An open-source container-orchestration platform that automates deployment, scaling, and management of containerized applications.

·     Docker—An open-source application container platform that allows developers to package their applications and dependencies into a portable container. It uses the OS-level virtualization.

·     Redundant Arrays of Independent Disks (RAID)—A data storage virtualization technology that combines many small-capacity disk drives into one large-capacity logical drive unit to store large amounts of data and provide increased reliability and redundancy.

·     Graphical User Interface (GUI)—A type of user interface through which users interact with electronic devices via graphical icons and other visual indicators.


Unified Platform and components deployment flowchart

Figure 1 provides the Unified Platform and components deployment flowchart for the AD-Campus scenario.

Figure 1 Unified Platform and components deployment flowchart for the AD-Campus scenario

 


Preparing for deployment

Server requirements

Hardware requirements

Unified Platform is deployed on Matrix. It can be deployed on a single master node or on a cluster of three master nodes and N (N ≥ 0) worker nodes, and you can add worker nodes to the cluster as needed. SeerAnalyzer is typically not deployed in the campus scenario, so worker nodes are typically not required.

Unified Platform can be deployed on physical servers or virtual machines (VMs).

For the hardware requirements for Unified Platform deployment in a specific application scenario, see AD-NET Solution Hardware Configuration Guide.

 

CAUTION

CAUTION:

·     Each vCPU allocated from a VM to Unified Platform must have exclusive use of a physical CPU core.

·     The CPU requirements are the same for deploying Unified Platform on a physical server or VM.

·     Allocate memory and disk space to Unified Platform in the recommended sizes and make sure sufficient physical resources are available for the allocation. To ensure Unified Platform stability, do not overcommit hardware resources such as memory and drive space.

 

CAUTION

CAUTION:

As a best practice, install etcd on a separate physical drive. If that is not possible, use HDDs (SSDs recommended) with a rotation speed of 7200 RPM or higher, and configure them behind a RAID controller with a minimum of 1 GB cache.

 

CAUTION

CAUTION:

To deploy Unified Platform and components on a VMware-managed VM, enable promiscuous mode and forged transmits on the host where the VM resides. To divide the component network into VLANs, configure the network port on the host to permit all VLAN packets.

 

Table 1 describes the disk partitioning scheme for the AD-Campus solution. The required partitions will be generated automatically with the minimum capacity during Unified Platform deployment. You can also manually adjust the partition size as needed.

Table 1 2.4 TB system disk and etcd partitioning scheme (physical server, cluster deployment mode)

| Mount point | Minimum capacity | Applicable mode | Remarks |
| ------------------- | ---------------- | ------------------- | ------- |
| /var/lib/docker | 400 GiB | BIOS mode/UEFI mode | Capacity expandable. |
| /boot | 1024 MiB | BIOS mode/UEFI mode | N/A |
| swap | 1024 MiB | BIOS mode/UEFI mode | N/A |
| /var/lib/ssdata | 450 GiB | BIOS mode/UEFI mode | Capacity expandable. |
| / | 400 GiB | BIOS mode/UEFI mode | Capacity expandable. As a best practice, do not save service data in the / directory. |
| /boot/efi | 200 MiB | UEFI mode | N/A |
| biosboot | 2048 KiB | BIOS mode | N/A |
| /var/lib/etcd | 50 GiB | BIOS mode/UEFI mode | Must be mounted on a separate disk. |
| Reserved disk space | 1205 GiB | N/A | Used for GlusterFS. |

The total capacity of the system disks is 2.4 TB + 50 GB. The mount points above account for 1.2 TB + 50 GB, and the remaining space is reserved for GlusterFS.

 

 

NOTE:

Follow these guidelines to set the capacity for the partitions:

·     /var/lib/docker/—The capacity depends on the Docker operation conditions and the specific application scenario.

·     /var/lib/ssdata/—Used by PXC, Kafka, and ZooKeeper. In theory, only Unified Platform uses this partition. If other components use this partition, increase the partition capacity as required.

·     /—Used by Matrix, including the images of the components such as K8s and Harbor. The capacity of the partition depends on the size of uploaded component images. You can increase the partition capacity as required.

·     GlusterFS—250 GB of this partition is used for Unified Platform. If other components use this partition, increase the partition capacity as required. 50 GB space is required for deployment of each of the controller, EIA, WSM, and GlusterFS, 85 GB space is required for deployment of EPS, and 385 GB space is required for deployment of SeerAnalyzer. Therefore, to deploy the controller, EIA, WSM, EPS, and SeerAnalyzer, you must reserve an empty disk or partition that has a size of at least 785 GB.

 

Software requirements

Unified Platform must be deployed on the H3Linux operating system. The H3Linux image file contains the H3Linux operating system and component packages such as Matrix and dependencies. After installation of the H3Linux operating system, Matrix and the required dependencies will be installed automatically. You are not required to install the dependencies or Matrix manually.

Client requirements

You can access Unified Platform from a Web browser without installing any client software. As a best practice, use Google Chrome 70, Firefox 78, or a later version, with a minimum screen resolution width of 1600 pixels.

Pre-installation checklist

IMPORTANT

IMPORTANT:

The AD-Campus solution supports deployment in Chinese and English environments. Select the language carefully when you start the deployment, because you cannot change it after the deployment.

 

Table 2 Pre-installation checklist

| Item | Requirements |
| --------------- | ------------ |
| Server hardware | The CPU, memory, disk (also called drive in this document), and NIC settings are as required. |
| Server software | The operating system meets the requirements. The system time settings are configured correctly; as a best practice, configure NTP on each node to ensure time synchronization on the network. The drives have been configured in a RAID setup. |
| Client | Google Chrome 70, Firefox 78, or a higher version is installed on the client. |


Network planning and installation packages

IP and resources planning

To deploy Unified Platform and AD-Campus solution components, plan IP addresses as described in Table 3 in advance. To also deploy SeerAnalyzer, see the IP address planning section in H3C SeerAnalyzer Deployment Guide.

Unified Platform supports single-stack and dual-stack deployment. For more information about the configuration, see H3C Unified Platform Deployment Guide-Exxxx.

Table 3 IP addresses

| IP address | Description | Remarks |
| ---------- | ----------- | ------- |
| IPv4 addresses of master nodes 1, 2, and 3 | IPv4 addresses assigned to the master nodes running the H3Linux operating system | Required in the IPv4 networking environment. If Unified Platform is deployed on a single server, plan an IPv4 address for only one master node. The IPv4 addresses of master nodes added to one cluster must be on the same network segment. |
| IPv6 addresses of master nodes 1, 2, and 3 | IPv6 addresses assigned to the master nodes running the H3Linux operating system | Required in the IPv6 networking environment. If Unified Platform is deployed on a single server, plan an IPv6 address for only one master node. The IPv6 addresses of master nodes added to one cluster must be on the same network segment. |
| Virtual IPv4 address for the northbound service | IPv4 address for cluster northbound services. You can log in to the Web interface of the cluster from this address. | Required in the IPv4 networking environment. This address must be on the same network segment as those of the master nodes. |
| Virtual IPv6 address for the northbound service | IPv6 address for cluster northbound services. You can log in to the Web interface of the cluster from this address. | Required in the IPv6 networking environment. This address must be on the same network segment as those of the master nodes. This address takes effect in the dual-stack environment. |
| Cluster internal virtual IP | IP address for communication inside the cluster | Required. This address must be on the same network segment as those of the master nodes. |
| Worker node IP | IP address assigned to a worker node running the H3Linux operating system | Optional. This address must be on the same network segment as those of the master nodes. |
| SeerEngine-Campus node IPs | Node IPs of SeerEngine-Campus | IP addresses of the three nodes in the SeerEngine-Campus cluster. |
| SeerEngine-Campus cluster IP | SeerEngine-Campus cluster IP, used for providing TFTP services | N/A |
| vDHCP cluster IP | vDHCP cluster IP address | Not used in the actual networking environment. |
| vDHCP node IPs | Node IP addresses of the vDHCP server | Two node IPv4 addresses used by the vDHCP server. |
| EIA IPv4 address | IPv4 address of the EIA server | Uses the virtual IPv4 address for the cluster northbound service. |
| EIA IPv6 address | IPv6 address of the EIA server | Uses the virtual IPv6 address for the cluster northbound service. |

 

Table 4 IP address examples in the cluster environment

| IP address | Example | Description |
| ---------- | ------- | ----------- |
| Master node IPv4 network segment (gateway) | 100.1.0.0/24 (100.1.0.1) | IPv4 network segment used by master nodes in the cluster |
| Master node 1 IPv4 address | 100.1.0.10 | IPv4 address of cluster node 1 |
| Master node 2 IPv4 address | 100.1.0.11 | IPv4 address of cluster node 2 |
| Master node 3 IPv4 address | 100.1.0.12 | IPv4 address of cluster node 3 |
| Master node IPv6 network segment (gateway) | 190::/64 (190::1) | IPv6 network segment used by master nodes in the cluster |
| Master node 1 IPv6 address | 190::10 | IPv6 address of cluster node 1 |
| Master node 2 IPv6 address | 190::11 | IPv6 address of cluster node 2 |
| Master node 3 IPv6 address | 190::12 | IPv6 address of cluster node 3 |
| Virtual IPv4 address for the northbound service | 100.1.0.100 | IPv4 address of the cluster for external communication |
| Virtual IPv6 address for the northbound service | 190::195 | IPv6 address of the cluster for external communication |
| Cluster internal virtual IP | 100.1.0.98 | IP address for communication inside the cluster |
| SeerEngine-Campus node IPs | Node 1: 110.1.0.101; Node 2: 110.1.0.102; Node 3: 110.1.0.103 | IP addresses of the three nodes in the SeerEngine-Campus cluster |
| SeerEngine-Campus cluster IP | 110.1.0.100 | SeerEngine-Campus cluster IP |
| vDHCP cluster IP | 110.1.0.104 | vDHCP cluster IP, not used in the actual networking environment |
| vDHCP node IPs | Node 1: 110.1.0.105; Node 2: 110.1.0.106 | Two node IPv4 addresses used by the vDHCP server |
| EIA IPv4 address | 100.1.0.100 | EIA server IPv4 address |
| EIA IPv6 address | 190::195 | EIA server IPv6 address |

 

Installation packages

IMPORTANT

IMPORTANT:

·     E0706 is a release version and provides the ISO and installation packages for all components.

·     The GlusterFS, Portal, and Kernel components are built into the ISO, and you can deploy them in bulk after cluster deployment. To upgrade an existing system to E0706, you only need to upgrade Matrix and all application components. For a new system, use the ISO to install the operating system; for the other components, upload the installation packages in bulk on Matrix, and then deploy them on Unified Platform.

 

For information about Unified Platform component packages, see H3C Unified Platform Deployment Guide. Table 5 describes the Unified Platform component installation packages required for SeerEngine-Campus deployment.

Table 5 Unified Platform component installation packages required for SeerEngine-Campus deployment

| Installation package | Description | Remarks | Dependencies |
| -------------------- | ----------- | ------- | ------------ |
| common_H3Linux-<version>.iso | Installation package for the H3Linux operating system. | Required | N/A |
| common_PLAT_GlusterFS_2.0_<version>.zip | Provides local shared storage functionalities. | Required | N/A |
| general_PLAT_portal_2.0_<version>.zip | Provides portal, unified authentication, user management, service gateway, and help center functionalities. | Required | N/A |
| general_PLAT_kernel_2.0_<version>.zip | Provides access control, resource identification, license, configuration center, resource group, and log functionalities. | Required | N/A |
| general_PLAT_kernel-base_2.0_<version>.zip | Provides alarm, access parameter template, monitoring template, report, email, and SMS forwarding functionalities. | Optional | N/A |
| general_PLAT_network_2.0_<version>.zip | Provides basic network management functions, including network resources, network performance, network topology, and iCC. | Required | kernel-base |
| general_PLAT_Dashboard_2.0_<version>.zip | Provides the dashboard framework. | Required | kernel-base |
| general_PLAT_widget_2.0_<version>.zip | Provides dashboard widget management. | Required | Dashboard |
| general_PLAT_websocket_2.0_<version>.zip | Provides the southbound WebSocket function. | Optional | N/A |

 


Deploying Unified Platform

Unified Platform is deployed on Matrix. It can be deployed on a single master node or a cluster with three master nodes. For the deployment procedure, see H3C Unified Platform Deployment Guide-Exxxx.

For the AD-Campus solution, you must deploy the kernel-base, network, Dashboard, widget, and websocket Unified Platform components. These components support one-click deployment:

1.     After deploying the GlusterFS, Portal, and Kernel components, upload the following components on Matrix.

¡     general_PLAT_kernel-base_2.0_<version>.zip—Required.

¡     general_PLAT_network_2.0_<version>.zip—Required.

¡     general_PLAT_Dashboard_2.0_<version>.zip—Required.

¡     general_PLAT_widget_2.0_<version>.zip—Required.

¡     general_PLAT_websocket_2.0_<version>.zip—Required.

¡     SeerEngine_CAMPUS-<version>-MATRIX.zip—Required.

¡     vDHCPS_H3C-<version>-X64.zip—Required. It can be used as a DHCP server to assign IP addresses to endpoints.

¡     EIA-<version>.zip—Required. Provides intelligent endpoint access management.

¡     EAD-<version>.zip—Optional. Provides endpoint admission defense management.

¡     SMP-<version>.zip—Optional. Provides security service management.

¡     oasis-<version>.zip—Optional. A public service platform. It must be installed if SeerAnalyzer is deployed.

¡     WSM-<version>.zip—Optional. Provides wireless service management. It must be installed if the solution has wireless services.

¡     EPS-<version>.zip—Optional. Provides endpoint detection management.

¡     SEERANALYZER-<version>_X86_64.zip—Optional. Provides intelligent analysis management. For the application packages required for the specific campus scenario, see H3C SeerAnalyzer Deployment Guide.

2.     Access Unified Platform and deploy the AD-Campus components. The components uploaded in the preceding step will be deployed with one click.

3.     Other Unified Platform component packages are optional components for the AD-Campus solution. To deploy them, upload the installation packages on Matrix and install them one by one. For more information, see H3C Unified Platform Deployment Guide.

 

IMPORTANT

IMPORTANT:

·     The document referenced must match the product version of the solution.

·     To determine whether an upgrade is supported, see H3C PLAT 2.0 (Exxxx) Release Notes.

 


Deploying AD-Campus components

Preparing for deployment

Enabling the NICs

Perform this task if the server uses multiple NICs. As a best practice, configure SeerEngine-Campus and vDHCP to use a different NIC from Matrix. If SeerEngine-Campus and Unified Platform share a network card, SeerEngine-Campus, vDHCP, and EIA can use the IP addresses on the same network segment. You can also configure sub-addresses on the VLAN interface of the Layer 3 switch to enable SeerEngine-Campus and Unified Platform to use IP addresses on different network segments.

In this example, NIC ens192 is used for Matrix and NIC ens224 is used for SeerEngine-Campus and vDHCP.

To enable a NIC:

1.     Remotely log in to the server on which Unified Platform is deployed and edit the NIC configuration file. This example edits the configuration file for NIC ens224.

a.     Open the NIC configuration file.

[root@matrix01 /]# vi /etc/sysconfig/network-scripts/ifcfg-ens224

b.     Set the BOOTPROTO field to none to remove NIC startup protocols, and set the ONBOOT field to yes to enable automatic NIC connection at server startup.

Figure 2 Configuring the NIC settings

 

2.     Restart the NIC.

[root@matrix01 /]# ifdown ens224

[root@matrix01 /]# ifup ens224

3.     Use the ifconfig command to display network information and verify that the NIC is in up state.
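The edits in this task can also be scripted. The following sketch rewrites the two fields and verifies them; it writes to a temporary copy purely for illustration, whereas on a real node you would edit /etc/sysconfig/network-scripts/ifcfg-ens224 directly and then restart the NIC as shown above.

```shell
# Sketch: set BOOTPROTO=none and ONBOOT=yes in a NIC configuration file.
# A temporary copy stands in for /etc/sysconfig/network-scripts/ifcfg-ens224.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
TYPE=Ethernet
DEVICE=ens224
BOOTPROTO=dhcp
ONBOOT=no
EOF

# Rewrite the two fields in place.
sed -i 's/^BOOTPROTO=.*/BOOTPROTO=none/; s/^ONBOOT=.*/ONBOOT=yes/' "$cfg"

# Verify the result; prints BOOTPROTO=none and ONBOOT=yes.
grep -E '^(BOOTPROTO|ONBOOT)=' "$cfg"
```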

Planning the network and IP addresses

The solution deploys the following networks:

·     Calico network—Network for containers to communicate with each other. The Calico network uses the IP address pool (177.177.0.0 by default) specified at Unified Platform cluster deployment. You do not need to configure addresses for the Calico network at component deployment. The network can share the same NIC as the MACVLAN network.

·     MACVLAN network—Management network for the SeerEngine-Campus and vDHCP components. You must plan network address pools for the MACVLAN network before deploying a component.

As a best practice, use Table 6 to calculate the number of required IP addresses in the subnet assigned to the MACVLAN network. For example, if the SeerEngine-Campus cluster has three members and the vDHCP cluster has two members, the required number of IP addresses is: (1*3+1) + (1*2+1)=7.
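With the default cluster sizes from this example (three SeerEngine-Campus members and two vDHCP members), the count can be checked with a quick shell calculation:

```shell
# Required MACVLAN addresses: (1 * members + 1) per component cluster,
# where the extra address is reserved as the cluster IP.
campus_members=3   # default SeerEngine-Campus cluster size
vdhcp_members=2    # default vDHCP cluster size

campus_addrs=$(( campus_members + 1 ))
vdhcp_addrs=$(( vdhcp_members + 1 ))
total=$(( campus_addrs + vdhcp_addrs ))

echo "$total"   # prints 7
```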

Table 6 IP address plan (1)

| Component name | Max cluster members | Default cluster members | Required addresses |
| ----------------- | --- | --- | ------------------ |
| SeerEngine-Campus | 32 | 3 | 1 × member quantity + 1 (the additional address is reserved as the cluster IP address) |
| vDHCP | 2 | 2 | 1 × member quantity + 1 (the additional address is reserved as the cluster IP address) |

 

This document uses the IP address plan in Table 7 for configuration. SeerEngine-Campus and vDHCP use NIC ens224.

Table 7 IP address plan (2)

| Item | IP address | Remarks |
| ---- | ---------- | ------- |
| SeerEngine-Campus node IPs | Node 1: 110.1.0.101; Node 2: 110.1.0.102; Node 3: 110.1.0.103 | IP addresses of the three nodes in the SeerEngine-Campus cluster. |
| SeerEngine-Campus cluster IP | 110.1.0.100 | SeerEngine-Campus cluster IP address. |
| vDHCP cluster IP | 110.1.0.104 | vDHCP cluster IP address, which is not used in the actual networking environment. |
| vDHCP node IPs | Node 1: 110.1.0.105; Node 2: 110.1.0.106 | IPv4 addresses of the two nodes used by the vDHCP server. |
| EIA | 100.1.0.100 | IPv4 address of the EIA server, which uses the northbound service virtual IP of the cluster. |

 

Deploying the components

1.     Log in to Unified Platform at http://ip_address:30000/central.

2.     On the top navigation bar, click System, and then select Deployment from the left navigation pane.

All components uploaded in bulk during Unified Platform deployment will be displayed on this page. To deploy other components, click Upload and select the installation packages to upload them to Unified Platform.

Figure 3 Upload Package page

 

3.     Select components to deploy, and then click Next.

¡     Components you must deploy in the AD-Campus scenario:

-     SeerEngine-Campus—Campus controller. It can be deployed on Unified Platform in standalone mode or in three-node cluster mode. You are required to specify a version for SeerEngine-Campus.

To deploy EIA V9, select Converged EIA. To deploy security services, select Converged SMP and specify the SMP version.

-     vDHCP server—DHCP server that assigns IP addresses to endpoints and devices started up with default configuration to achieve automated deployment of the devices. You can deploy the vDHCP server on Unified Platform in stateful failover mode (active/standby). You are required to specify a version for vDHCP server.

-     EIA—Endpoint intelligent access management component. It manages endpoint authentication and access. You are required to specify a version for EIA.

To implement endpoint admission control, select Converged EAD and specify the EAD version.

¡     Optional components for the AD-Campus scenario:

-     EAD—Provides endpoint admission control.

-     SMP—Manages security device services.

-     Oasis—A public service platform that provides NETCONF services. This component is required if SeerAnalyzer is installed in the scenario.

-     WSM—Provides wireless service management such as wireless device monitoring and configuration.

-     EPS—Manages various endpoints across the network.

-     SeerAnalyzer—Collects device performance, user access, and service traffic data in real time, visualizes network operation through big data analysis and artificial intelligence algorithms, and predicts potential network risks and generates notifications.

 

 

 

NOTE:

·     This section provides brief parameter configuration for SeerEngine-Campus deployment. For more information about parameter configuration, see H3C SeerEngine-Campus Component Deployment Guide-EXXXX.

·     The installation of EAD depends on EIA.

·     The installation of SeerAnalyzer depends on the Oasis component.

·     When the WSM and Oasis components are deployed separately, deploy the Oasis components first.

·     To install the SeerEngine-Campus, WSM, and SeerAnalyzer components separately from one another in the environment, install SeerEngine-Campus and WSM before SeerAnalyzer.

·     This section describes only scenario selection for SeerAnalyzer deployment. For the SeerAnalyzer deployment procedure, see H3C SeerAnalyzer Deployment Guide.

·     To determine whether a component supports upgrade, see the release notes for the product.

 

In this example, all AD-Campus-related components are selected.

Figure 4 Selecting components (1)

 

Figure 5 Selecting components (2)

Figure 6 Selecting components (3)

 

 

NOTE:

To deploy SeerAnalyzer, select Analyzer and the Campus scenario. For the SeerAnalyzer deployment procedure, see H3C SeerAnalyzer Deployment Guide.

 

Figure 7 Selecting components (4)

 

4.     Retain default parameter settings and click Next.

Figure 8 Settings page

 

5.     Specify network information, create subnets, configure host information, and then click Next.

The controller uses the management network to manage southbound devices. Configure the following parameters as needed:

¡     VLAN—If multiple networks use the same uplink interface on a host, configure VLANs to isolate the networks. By default, no VLAN is specified.

-     If you leave this field unconfigured, the packets sent by the server do not carry VLAN tags. You are required to configure the access switch interface connected to the server NIC as an access port.

-     If you specify a VLAN ID, the packets sent by the server carry the VLAN tag. (To prevent the tag from being removed, make sure the PVID is different from the VLAN ID.) You are required to configure the access switch interface connected to the server NIC as a trunk port.

¡     Subnet CIDR, Gateway, Address Pool—The platform uses the subnet and address pool to assign IP addresses to components and uses the gateway as the default gateway for containers.

¡     Uplink Interface—Hosts use their uplink interfaces to provide services to SeerEngine-Campus and vDHCP Server containers.

Figure 9 Network Configuration

 

 

NOTE:

Address pool settings cannot be edited once applied. As a best practice, configure a minimum of 32 IP addresses in each address pool.

 

Figure 10 Subnet configuration

 

Figure 11 Selecting an uplink interface for the host

 

6.     Skip node binding and click Next.

Figure 12 Binding to nodes

 

7.     Bind networks and subnets to SeerEngine-Campus and vDHCP, and then click Next.

Figure 13 Binding networks and subnets to components

 

8.     Confirm parameters and then click Deploy.

¡     Cluster IP—The platform sets the cluster IP address for each component based on address pool configuration. To edit the cluster IP address for a component, click Reset. Make sure the manually specified address is within the specified subnet for the component.

¡     VRRP Group Number—Specify a VRRP group number for vDHCP, in the range of 1 to 255. Specify different VRRP group numbers for vDHCP servers in the same network.

¡     EIA parameters—The EIA, WSM, EPS, EAD, and SMP components use the northbound service virtual IP as the server address by default. You do not need to confirm the parameters of these components.

Figure 14 Confirming campus network parameters

 

Figure 15 Confirming SeerAnalyzer parameters

 

Figure 16 Confirming public service parameters

 

Figure 17 Confirming EIA parameters

 

Figure 18 Confirming wireless system management parameters

 

Figure 19 Confirming endpoint profiling system parameters

 

Figure 20 Confirming endpoint admission defense parameters

 

Figure 21 Confirming security management platform parameters

 

 

NOTE:

In the campus network scenario, the system will automatically identify and install component dependencies when installing SeerEngine-Campus, vDHCP Server, and EIA.

 

Figure 22 Deployment progress

 

IMPORTANT

IMPORTANT:

·     The document referenced must match the product version of the solution.

·     To determine whether SeerEngine-Campus supports an upgrade, see H3C SeerEngine-Campus Exxxx Release Notes. To determine whether vDHCP supports an upgrade, see H3C vDHCPS_H3C-Rxxxx Release Notes. To determine whether EIA supports an upgrade, see EIA (Exxxx) Release Notes.

 


Installing and deploying SeerAnalyzer

See H3C SeerAnalyzer Deployment Guide.


FAQ

How do I prepare disk space for GlusterFS?

Unified Platform E0609 and later supports automatic partition allocation for GlusterFS. To manually configure or edit the GlusterFS partition, see H3C Unified Platform Deployment Guide-Exxxx.

What is NIC bonding and how do I configure it?

NIC bonding allows you to bind multiple NICs to form a logical NIC for NIC redundancy, bandwidth expansion, and load balancing.

Seven NIC bonding modes are available for a Linux system. As a best practice, use mode 2 or mode 4 in Unified Platform deployment.

·     Mode 2 (XOR)—Transmits packets based on the specified transmit hash policy and works in conjunction with the static aggregation mode on a switch.

·     Mode 4 (802.3ad)—Implements the 802.3ad dynamic link aggregation mode and works in conjunction with a dynamic link aggregation group on the switch.

This example describes how to configure NIC bonding mode 2 on the servers after operating system installation.

To configure bonding mode 2 for NIC redundancy, perform the following steps on each of the three servers:

1.     Create and configure the bonding interface.

a.     Execute the vim /etc/sysconfig/network-scripts/ifcfg-bond0 command to create bonding interface bond0.

b.     Access the ifcfg-bond0 configuration file and configure the following parameters based on the actual networking plan. All these parameters must be set.

Set the NIC binding mode to mode 2.

Sample settings:

DEVICE=bond0
IPADDR=192.168.15.99
NETMASK=255.255.0.0
GATEWAY=192.168.15.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="mode=2 miimon=120"

DEVICE represents the name of the vNIC, and miimon represents the link state detection interval in milliseconds.

2.     Execute the vim /etc/modprobe.d/bonding.conf command to access the bonding configuration file, and then add configuration alias bond0 bonding.

3.     Configure the physical NICs.

a.     Create a directory and back up the files of the physical NICs to the directory.

b.     Add the two network ports to the bonding interface.

c.     Configure the NIC settings.

Use the ens32 NIC as an example. Execute the vim /etc/sysconfig/network-scripts/ifcfg-ens32 command to access the NIC configuration file and configure the following parameters based on the actual networking plan. All these parameters must be set.

Sample settings:

TYPE=Ethernet
DEVICE=ens32
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no

DEVICE represents the name of the NIC, and MASTER represents the name of the vNIC.

4.     Execute the modprobe bonding command to load the bonding module.

5.     Execute the service network restart command to restart the services. If you have modified the bonding configuration multiple times, you might need to restart the server.

6.     Verify that the configuration has taken effect.

¡     Execute the cat /sys/class/net/bond0/bonding/mode command to verify that the bonding mode has taken effect.

Figure 23 Verifying the bonding mode

 

¡     Execute the cat /proc/net/bonding/bond0 command to verify bonding interface information.

Figure 24 Verifying bonding interface information

 

7.     Execute the vim /etc/rc.d/rc.local command, and add configuration ifenslave bond0 ens32 ens33 ens34 to the configuration file.
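The file edits in steps 1 and 3 can be scripted. The sketch below generates the bonding interface file and one slave NIC file and checks the key fields. It writes to a temporary directory purely for illustration; on a real node the files belong under /etc/sysconfig/network-scripts/, and the interface names and addresses are the example values used in this section.

```shell
# Sketch: generate ifcfg files for bonding interface bond0 (mode 2)
# and slave NIC ens32, using a temporary directory for illustration.
dir=$(mktemp -d)

cat > "$dir/ifcfg-bond0" <<'EOF'
DEVICE=bond0
IPADDR=192.168.15.99
NETMASK=255.255.0.0
GATEWAY=192.168.15.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="mode=2 miimon=120"
EOF

cat > "$dir/ifcfg-ens32" <<'EOF'
TYPE=Ethernet
DEVICE=ens32
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no
EOF

# Basic sanity checks on the generated files.
grep -q 'mode=2' "$dir/ifcfg-bond0" && echo "bond0 OK"
grep -q 'MASTER=bond0' "$dir/ifcfg-ens32" && echo "slave OK"
```

After generating the files on a real node, you would still load the bonding module and restart the network services as described in steps 4 and 5.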

I configured a static IP for a NIC when I installed H3Linux, but the system prompted NIC configuration failure after OS installation. What should I do?

1.     Execute the vi /etc/sysconfig/network-scripts/ifcfg-NIC_name command, where NIC_name represents the NIC name, to edit the NIC configuration. If a parameter does not exist, manually add it.

[root@node01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none                  ## Change the value to none
DEFROUTE=yes                    ## Change the value to yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=78961223-bc09-4a0e-87d6-90fbd56117f5
DEVICE=ens192
ONBOOT=yes                        ## Change the value to yes
IPADDR=172.21.3.50                ## Edit the value as needed. Add this parameter if it does not exist.
PREFIX=24                         ## Edit the value as needed. Add this parameter if it does not exist.
GATEWAY=172.21.3.1                ## Edit the value as needed. Add this parameter if it does not exist.
IPV6_PRIVACY=no

2.     Save the configuration and exit.

3.     Execute the systemctl restart network command to restart the network.

How can I set the time zone to Asia/Shanghai after operating system installation?

Execute the timedatectl set-timezone Asia/Shanghai command to change the time zone to Asia/Shanghai at the CLI.
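Before changing the system default, you can preview the effect with the TZ environment variable. This is a quick sketch that assumes the tzdata time zone database is installed, which H3Linux includes:

```shell
# Preview the Asia/Shanghai zone without changing the system default.
TZ=Asia/Shanghai date +%Z    # prints CST (China Standard Time)

# Then apply it system-wide (requires root):
# timedatectl set-timezone Asia/Shanghai
```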

Figure 25 Changing the time zone to Asia/Shanghai

 
