- Table of Contents
-
- H3C SecPath AFC2000-EX0-G Series Abnormal Traffic Cleaning System Configuration Examples-5W100
- 00-Preface
- 01-Series Deployment Single-Machine Single-Channel and Multi-Channel Configuration Example
- 02-BGP Layer 3 Bypass Return Path Configuration Example
- 03-BGP Auto-Diversion Deployment with Bypass and Abnormal Traffic Detection System Example
- 04-TCP Port Protection Configuration Example
- 05-AFC Comprehensive Protection Configuration Example
- 06-Typical Configuration Examples of Traction Management Example
- 07-OSPF Layer 2 Reintroduction Configuration Example
- 08-Cascaded Cluster and Dual-Node Active-Standby Configuration Example
- 09-Bypass BGP Layer 2 Return Traffic Configuration Example
- 10-OSPF-Based Three-Layer Return Injection Configuration Example
- 11-BGP-Based Three-Layer Injection Configuration Example for Bypass Single-Device Multi-Channel Deployment Example
- 12-BGP-Based Three-Layer Injection Configuration Example for Bypass Multi-Device Cluster Deployment Example
- 13-Bypass GRE Layer 3 Return Injection Configuration Example
- 14-Typical Configuration for HTTPS CC Protection Example
Traffic Scrubbing Service Configuration Guide
Example Configuration of AFC Series Deployment in a Dual-Server Cluster
Example Configuration of Serial Dual-Server Cluster
Applicable Products and Versions
Typical Configuration Example for AFC Series Redundant Deployment (Active-Standby Mode)
Example Configuration for Series Deployment with Active-Standby Redundancy
Applicable Products and Versions
Feature Overview
AFC supports multiple deployment modes to accommodate traffic scrubbing needs across various scenarios, which can be classified into inline deployment and bypass deployment.
This chapter focuses on inline dual-machine cluster deployment and inline dual-machine active-standby configuration. For bypass deployment, please refer to the relevant documentation.
In inline deployment, AFC operates in transparent mode, where the abnormal traffic scrubbing device is deployed in-line at the network egress of the protected environment. This ensures that attack traffic is filtered before reaching the servers, while legitimate traffic continues uninterrupted.
Feature Usage
This document is not strictly version-bound to specific software or hardware versions. If discrepancies arise between the document content and the actual product during use, the product’s actual status shall prevail.
All configurations described in this document were performed and validated in a laboratory environment, with all device parameters set to factory default values. If your device is already configured, verify that the existing configuration does not conflict with the examples below to ensure the configuration takes effect.
This document assumes that you have prior knowledge of VLAN and link aggregation features.
Configuration Guide
H3C's abnormal traffic scrubbing and detection system consists of AFD devices, AFC devices, and switch devices. The basic configuration of switch devices is performed via the command line interface (CLI). For AFC devices, both basic configuration and service-related configurations are implemented through the web-based management interface. This configuration guide uses a standalone AFC deployment scenario for traffic scrubbing as an example.
Traffic Scrubbing Service Configuration Guide
· Cluster Deployment
To deploy AFC devices in cluster mode, first configure link aggregation on the upstream core switch R2 and the downstream switch R3. Then connect all external (input) interfaces of the AFC cluster devices to the aggregation group on the upstream switch R2, and connect all internal (output) interfaces of the AFC cluster devices to the aggregation group on the downstream switch R3.
· Active-Standby Deployment
To deploy AFC devices in active-standby mode, connect the WAN-facing interfaces of both primary and standby units to the same VLAN on the upstream switch, and link their LAN-facing interfaces to the corresponding VLAN on the downstream switch, enabling the system to comprehensively scrub and filter all inbound and outbound network traffic through seamless failover protection.
Precautions
· Ensure AFC devices in active-standby or cluster deployment share identical hardware models and software versions.
· The upstream core switch R3 used for AFC inline cluster deployment must support port aggregation functionality.
· Configuration commands vary across switches/routers from different vendors and models. Always refer to the specific device's operation manual for configuration procedures.
· For AFC inline cluster deployment, the port aggregation type (static or dynamic) on the upstream and downstream switches is not restricted; it depends on the switches' own capabilities. Static aggregation is recommended.
Example Configuration of AFC Series Deployment in a Dual-Server Cluster
Introduction
This chapter describes the configuration procedures for setting up an AFC series deployment in a dual-server cluster.
Usage Restrictions
The web application security feature cannot be enabled in serial-mode dual-server cluster deployments.
Example Configuration of Serial Dual-Server Cluster
Applicable Products and Versions
Software Version: H3C i-Ware Software, Version 7.1, ESS 6401P02.
Networking Requirements
To enable traffic scrubbing for the protected IP 171.0.3.21, the AFC devices are deployed in-line within the customer's network. The serial external interfaces of the AFC cluster connect to the port aggregation group on Core Switch R2, while the serial internal interfaces of all AFC devices connect to the port aggregation group on Downstream Switch R3. The AFC system inspects and filters the mixed traffic before forwarding it to the downstream network. The networking topology is illustrated in Figure 3-1.
Figure 3-1 AFC Serial Deployment – Multi-Channel Device Configuration Topology
Implementation Details:
· All external serial interfaces of the AFC cluster devices connect to Port Aggregation Group 8 (Bridge-Aggregation 8) on the upstream Core Switch R2.
· All internal serial interfaces of the AFC cluster devices connect to Port Aggregation Group 8 (Bridge-Aggregation 8) on the Downstream Switch R3.
Table 3-1 VLAN Assignment List

| VLAN ID | Function Description | IP Address |
|---|---|---|
| 1711 | · Core Switch R2 interfaces connected to the AFC serial external network ports; · VLAN of the port aggregation group on Downstream Switch R3; · Default gateway for the downstream network | 171.0.1.1/24 |
Table 3-2 AFC Interface IP Address Assignment List

| Interface | Function Description | IP Address |
|---|---|---|
| GE1/0 | Serial external network ports of all cluster devices (connected to the upstream network). In serial mode, no IP configuration is required; if an address is configured manually, ensure it does not conflict with existing network addresses. | |
| GE1/1 | Serial internal network ports of all cluster devices (connected to the downstream network). In serial mode, no IP configuration is required; if an address is configured manually, ensure it does not conflict with existing network addresses. | |
| GE1/2 | Synchronization interfaces of all cluster devices | |
| GE0/0 | AFC management interface | Device 1: 192.168.0.1/24; Device 2: 192.168.0.2/24 |
[AFC] Interface Naming (Reference Only)
The specific interface names are determined by the actual device model. This section provides guidance only.
Configuration Approach
To implement the AFC serial deployment cluster configuration, follow the steps below:
(1) Core Switch R2 Basic Network Configuration
Configure port aggregation groups with identical attributes (e.g., VLAN settings assigned to VLAN 1711).
(2) Downstream Switch R3 Basic Network Configuration
Configure port aggregation groups with identical attributes (e.g., VLAN settings assigned to VLAN 1711).
(3) AFC Device 1 Service Port Configuration
Configure GE1/0 of all AFC cluster devices as serial external network ports and connect them to the aggregation group of Core Switch R2.
Configure GE1/1 of all AFC cluster devices as serial internal network ports and connect them to the aggregation group of Downstream Switch R3.
(4) AFC Cluster Management Configuration
Use the web interface to designate the second device as a node.
On the first device's web interface, add the second device to form the cluster.
Configuration Steps
Core Switch R2 Basic Network Configuration
Create VLAN 1711, mapping the IP segment 171.0.1.0/24 to VLAN 1711. This ensures all AFC cluster devices' serial external and internal network ports reside in the same VLAN.
# Create VLAN
[R2]vlan 1711
[R2]quit
# Configure VLAN IP Address
[R2]interface Vlan-interface1711
[R2-Vlan-interface1711]ip address 171.0.1.1 255.255.255.0
[R2-Vlan-interface1711]quit
# Create Port Aggregation Group
[R2]int Bridge-Aggregation 8
[R2-Bridge-Aggregation8]quit
# Add Interfaces G1/0/10 and G1/0/11 to the Aggregation Group
[R2]int GigabitEthernet 1/0/10
[R2-GigabitEthernet1/0/10]port link-aggregation group 8
[R2-GigabitEthernet1/0/10]quit
[R2]int GigabitEthernet 1/0/11
[R2-GigabitEthernet1/0/11]port link-aggregation group 8
[R2-GigabitEthernet1/0/11]quit
# Configure VLAN Settings for the Aggregation Group
[R2]int Bridge-Aggregation 8
[R2-Bridge-Aggregation8]port access vlan 1711
[R2-Bridge-Aggregation8]quit
# View Current Interface Configuration
[R2]int GigabitEthernet 1/0/10
[R2-GigabitEthernet1/0/10]dis this
#
interface GigabitEthernet1/0/10
port link-mode bridge
port access vlan 1711
port link-aggregation group 8
[R2-GigabitEthernet1/0/10]quit
[R2]int GigabitEthernet 1/0/11
[R2-GigabitEthernet1/0/11]dis this
#
interface GigabitEthernet1/0/11
port link-mode bridge
port access vlan 1711
port link-aggregation group 8
# Check Aggregation Group Status (Default: Layer 2 Static Aggregation)
[R2]display link-aggregation verbose
Loadsharing Type: Shar -- Loadsharing, NonS -- Non-Loadsharing
Port Status: S -- Selected, U -- Unselected
Flags: A -- LACP_Activity, B -- LACP_Timeout, C -- Aggregation,
D -- Synchronization, E -- Collecting, F -- Distributing,
G -- Defaulted, H -- Expired
Aggregation Interface: Bridge-Aggregation8
Aggregation Mode: Static
Loadsharing Type: Shar
Port Status Oper-Key
--------------------------------------------------------------------------------
GE1/0/10 S 1
GE1/0/11 S 1
# Verify Load Balancing Method (Default: Source/Destination IP Load Sharing)
[R2]display link-aggregation load-sharing mode interface
Bridge-Aggregation8 Load-Sharing Mode:
Layer 2 traffic: ingress-port, destination-mac address,
source-mac address
Layer 3 traffic: destination-ip address, source-ip address
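The Layer 3 load-sharing mode shown above means that all packets with the same source/destination IP pair always take the same member link of the aggregation group. The following Python sketch is illustrative only: real switches compute a vendor-specific hardware hash, and only the member port names are taken from this example.

```python
import hashlib

# Illustrative sketch of source/destination-IP load sharing across the two
# members of Bridge-Aggregation 8. A given src/dst IP pair always maps to
# the same member port, so packets of one flow are never reordered across links.
MEMBERS = ["GE1/0/10", "GE1/0/11"]

def select_member(src_ip: str, dst_ip: str) -> str:
    """Deterministically pick an aggregation member from the IP pair."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return MEMBERS[digest[0] % len(MEMBERS)]

# Every packet between these two hosts uses the same link:
port = select_member("203.0.113.10", "171.0.3.21")
assert port == select_member("203.0.113.10", "171.0.3.21")
```

Because the hash is computed per IP pair rather than per packet, traffic is balanced statistically across flows while each individual flow stays on one link.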
Downstream Switch R3 Basic Network Configuration
Create VLAN 1711 to ensure all AFC cluster devices' serial external and internal network ports reside in the same VLAN.
# Create VLAN
[R3]vlan 1711
# Add the switch port connected to the host to the VLAN
[R3]interface GigabitEthernet1/0/13
# Connect Protected Host
[R3-GigabitEthernet1/0/13]port link-mode bridge
[R3-GigabitEthernet1/0/13]port access vlan 1711
[R3-GigabitEthernet1/0/13]quit
# Configure Aggregation Group
[R3]interface Bridge-Aggregation 8
[R3-Bridge-Aggregation8]port access vlan 1711
# Add Interfaces G1/0/10 and G1/0/11 to Aggregation Group
[R3]int GigabitEthernet 1/0/10
[R3-GigabitEthernet1/0/10]port link-aggregation group 8
[R3-GigabitEthernet1/0/10]quit
[R3]int GigabitEthernet 1/0/11
[R3-GigabitEthernet1/0/11]port link-aggregation group 8
[R3-GigabitEthernet1/0/11]quit
# Configure VLAN Settings for Aggregation Group
[R3]int Bridge-Aggregation 8
[R3-Bridge-Aggregation8]port access vlan 1711
[R3-Bridge-Aggregation8]quit
# View Current Interface Configuration
[R3]int GigabitEthernet 1/0/10
[R3-GigabitEthernet1/0/10]dis this
#
interface GigabitEthernet1/0/10
port link-mode bridge
port access vlan 1711
port link-aggregation group 8
[R3-GigabitEthernet1/0/10]quit
[R3]int GigabitEthernet 1/0/11
[R3-GigabitEthernet1/0/11]dis this
#
interface GigabitEthernet1/0/11
port link-mode bridge
port access vlan 1711
port link-aggregation group 8
#
return
# Check Aggregation Status
[R3]display link-aggregation verbose
Loadsharing Type: Shar -- Loadsharing, NonS -- Non-Loadsharing
Port Status: S -- Selected, U -- Unselected
Flags: A -- LACP_Activity, B -- LACP_Timeout, C -- Aggregation,
D -- Synchronization, E -- Collecting, F -- Distributing,
G -- Defaulted, H -- Expired
Aggregation Interface: Bridge-Aggregation8
Aggregation Mode: Static
Loadsharing Type: Shar
Port Status Oper-Key
--------------------------------------------------------------------------------
GE1/0/10 S 1
GE1/0/11 S 1
# Verify Load Balancing Method
[R3]display link-aggregation load-sharing mode interface
Bridge-Aggregation8 Load-Sharing Mode:
Layer 2 traffic: ingress-port, destination-mac address,
source-mac address
Layer 3 traffic: destination-ip address, source-ip address
The upstream and downstream switches must support port aggregation.
AFC Device Service Port Configuration
To implement AFC serial cluster deployment, follow the configuration steps below:
Note: For configuration steps containing the [Apply Config] button, you must click it to activate the settings. This will not be reiterated in subsequent steps.
· Log in to the AFC system page
Access the login page via web browser: https://192.168.0.1/ (Username: admin, Password: admin).
Figure 3-2: AFC System Login Interface
· Configure AFC Address and Port Type
Navigate to [System] → [Device] → [Device Management], click the [Setup] button on the right side of the target device, then select [Port Settings] in the left navigation pane and click the [Modify] button to configure: GE0/0 as the management interface (set IP address, subnet mask, and default gateway); GE1/2 as the synchronous interface; GE1/0 as the serial external network port and GE1/1 as the serial internal network port, with data port binding applied between them.
Figure 3-3: Configuration of GE0/0 Interface
Figure 3-4: Configuration of GE1/0 Interface
Figure 3-5: Configuration of GE1/1 Interface
Figure 3-6: Configuration of GE1/2 Interface
Note: For configuration steps containing the [Apply Configuration] button, you must click it to activate the settings. The same configuration procedure applies to Cluster Device 2 as for the primary Cluster Device 1.
AFC Device Cluster Management Configuration
To facilitate unified management of multiple AFC devices in a cluster, this document designates the first device as the primary web management node and the second device as a subordinate node. The primary device's web interface is used to add the IP management address of the second device, enabling centralized management of the cluster.
· Configure AFC Cluster Device 2 as Subordinate Node
To set Device 2 as a subordinate node in the AFC cluster: First, log in to its web management interface, then manually modify the URL to https://192.168.0.2/role and press Enter. On the role configuration page, click the [Node] button to switch the device role, using the same web login credentials (username/password) as currently authenticated. Refer to the following figure for the operation interface.
· Add Subordinate Node to AFC Cluster Device 1
To integrate Cluster Device 2 into the AFC cluster, first log in to the primary device (Device 1) via https://192.168.0.1, navigate through the menu path [System] → [Device] → [Device Management], then input the management IP address 192.168.0.2 of Node 2 in the designated field to establish unified cluster management. Refer to the corresponding interface diagram for visual guidance.
After completing the addition, wait about 60 seconds for the device activation process to complete.
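Instead of a fixed 60-second wait, the activation can be polled with a timeout. The sketch below is generic: the status check itself (the `check` callable) stands in for whatever query your environment provides, such as probing the node's web interface, and is not a documented AFC API.

```python
import time

def wait_until(check, timeout_s: float = 60.0, interval_s: float = 5.0) -> bool:
    """Call check() every interval_s seconds until it returns True or timeout_s elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False
```

Usage might look like `wait_until(lambda: node_is_active("192.168.0.2"))`, where `node_is_active` is a hypothetical helper you would implement for your environment.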
Verify Configuration
Verify Communication Between Client and Traffic Redirection Server
Use the command line to execute ping tests, checking whether the client can establish connectivity with the service routing interface.
<h3c>ping 171.0.3.21
PING 171.0.3.21 (171.0.3.21) 56(84) bytes of data.
64 bytes from 171.0.3.21: icmp_seq=1 ttl=124 time=0.799 ms
64 bytes from 171.0.3.21: icmp_seq=2 ttl=124 time=0.736 ms
64 bytes from 171.0.3.21: icmp_seq=3 ttl=124 time=0.862 ms
64 bytes from 171.0.3.21: icmp_seq=4 ttl=124 time=1.47 ms
64 bytes from 171.0.3.21: icmp_seq=5 ttl=124 time=1.02 ms
--- 171.0.3.21 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 0.736/0.977/1.470/0.264 ms
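When scripting this verification step, the packet-loss figure in the statistics line can be checked programmatically. A minimal parser for Linux-style ping output:

```python
import re

def parse_ping_loss(output: str) -> float:
    """Return the packet-loss percentage from a Linux-style ping summary line."""
    m = re.search(r"(\d+)% packet loss", output)
    if m is None:
        raise ValueError("no ping statistics line found")
    return float(m.group(1))

sample = "4 packets transmitted, 4 received, 0% packet loss, time 3004ms"
assert parse_ping_loss(sample) == 0.0
```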
Typical Configuration Example for AFC Series Redundant Deployment (Active-Standby Mode)
Introduction
This chapter describes the configuration procedures for implementing active-standby redundancy in AFC series deployment.
Usage Restrictions
1. The web application security feature cannot be enabled in series deployment with active-standby redundancy.
2. Series deployment applies to scenarios where customers require real-time, full-link inspection and scrubbing of inbound/outbound network traffic, or where the customer's network has no Layer 3 routing device to support bypass deployment. Active-standby redundancy keeps protection and network operation uninterrupted if the primary device fails.
3. After active-standby mode is enabled, administrators must log in to the management system via the floating IP address; otherwise, operations on both the active and standby units may be affected.
4. Synchronization ports must be configured before enabling active-standby redundancy; otherwise, data cannot be synchronized between the primary and standby devices.
Example Configuration for Series Deployment with Active-Standby Redundancy
Applicable Products and Versions
Software Version: H3C i-Ware Software, Version 7.1, ESS 6401P02.
Network Requirements
Figure 4-1 Network Topology Diagram for AFC Series Deployment with Active-Standby Redundancy Configuration
The specific implementation is as follows:
· Connect the serial external interfaces GE1/0 of the active/standby AFC devices to VLAN 1711 on the upstream switch.
· Connect the serial internal interfaces GE1/1 of the active/standby AFC devices to VLAN 1711 on the downstream switch.
Table 4-1 VLAN Allocation List

| VLAN ID | Function Description | IP Address |
|---|---|---|
| 1711 | · Connection interface between the core switch and the AFC serial external network port; · Gateway of the downstream network; · VLAN ID of the upstream and downstream networks, and the VLAN where the protected host resides | |
The VLAN IDs configured on the uplink/downlink switches connected to the active/standby AFC devices must match. Mismatched VLAN IDs may cause active-standby failover failure.
Table 4-2 AFC Interface IP Address Allocation List

| Interface | Function Description | IP Address |
|---|---|---|
| GE1/0 | Serial external interfaces of the active/standby devices. In series deployment mode, no IP configuration is required. | |
| GE1/1 | Serial internal interfaces of the active/standby devices. In series deployment mode, no IP configuration is required. | |
| GE1/2 | Synchronization interfaces of the active/standby devices | |
| GE0/0 | AFC management interface | Active device: 192.168.0.1/24; Standby device: 192.168.0.2/24 |
[AFC] Interface Naming (Reference Only)
The specific interface names are determined by the actual device model. This section provides guidance only.
Configuration Design Approach
To implement the AFC series deployment with active-standby redundancy, follow the configuration guidelines below:
(1) Core Switch R2 Basic Network Configuration
Configure port aggregation groups with identical attributes (e.g., all ports in the same VLAN 1711).
(2) Downstream Switch R3 Basic Network Configuration
Configure port aggregation groups with identical attributes (e.g., all ports in the same VLAN 1711).
(3) AFC Device API Authentication Whitelist Configuration (each device adds the other device's management IP address to its whitelist)
(4) AFC Device 1 Service Port Configuration
Set GE1/0 as the serial external interface and connect it to the aggregation group of core switch R2.
Set GE1/1 as the serial internal interface and connect it to the aggregation group of downstream switch R3.
(5) AFC Device 2 Service Port Configuration
Set GE1/0 as the serial external interface and connect it to the aggregation group of core switch R2.
Set GE1/1 as the serial internal interface and connect it to the aggregation group of downstream switch R3.
(6) AFC Device 1 & 2 Synchronization Port Configuration
Designate GE1/2 on both active and standby AFC devices as synchronization ports.
Use a gigabit Ethernet cable to directly connect the synchronization ports of the two devices.
(7) Active-Standby Configuration for AFC Devices
Access the [System] → [Platform Config] → [Active and Standby] page to configure primary/standby management (refer to Section 4.3.4, Step 4 for detailed procedures).
Configuration Procedures
Core Switch R2 Basic Network Configuration
Create VLAN 1711, which corresponds to the IP subnet 171.0.1.0/24. This configuration ensures that the serial external interfaces (GE1/0) of the active/standby AFC devices and their cascade ports operate within the same VLAN.
# Create VLAN
[R2]vlan 1711
[R2-vlan1711]quit
# Create Port Aggregation Group
[R2]int Bridge-Aggregation 100
[R2-Bridge-Aggregation100]port access vlan 1711
[R2-Bridge-Aggregation100]quit
# Add Member Ports to the Aggregation Group
[R2]int GigabitEthernet 1/0/10
[R2-GigabitEthernet1/0/10] port link-aggregation group 100
[R2-GigabitEthernet1/0/10]quit
[R2]int GigabitEthernet 1/0/11
[R2-GigabitEthernet1/0/11] port link-aggregation group 100
[R2-GigabitEthernet1/0/11]quit
# Configure VLAN IP Address
[R2]interface Vlan-interface1711
[R2-Vlan-interface1711]ip address 171.0.1.1 255.255.255.0
[R2-Vlan-interface1711]quit
# Add Interfaces G1/0/10 and G1/0/11 to the Aggregation Group
[R2]int GigabitEthernet 1/0/10
[R2-GigabitEthernet1/0/10]port access vlan 1711
[R2-GigabitEthernet1/0/10]quit
[R2]int GigabitEthernet 1/0/11
[R2-GigabitEthernet1/0/11]port access vlan 1711
[R2-GigabitEthernet1/0/11]quit
Downstream Switch R3 Basic Network Configuration
Create VLAN 1711, which corresponds to the IP subnet 171.0.1.0/24. This configuration ensures that the serial internal interfaces (GE1/1) of the active/standby AFC devices and their cascade ports operate within the same VLAN.
# Create VLAN
[R3]vlan 1711
[R3-vlan1711]quit
# Create Port Aggregation Group
[R3]int Bridge-Aggregation 100
[R3-Bridge-Aggregation100]port access vlan 1711
[R3-Bridge-Aggregation100]quit
# Add Member Ports to the Aggregation Group
[R3]int GigabitEthernet 1/0/10
[R3-GigabitEthernet1/0/10] port link-aggregation group 100
[R3-GigabitEthernet1/0/10]quit
[R3]int GigabitEthernet 1/0/11
[R3-GigabitEthernet1/0/11] port link-aggregation group 100
[R3-GigabitEthernet1/0/11]quit
# Assign the switch ports connected to hosts to VLAN.
[R3]interface GigabitEthernet1/0/13
# Connect to the protected host
[R3-GigabitEthernet1/0/13]port link-mode bridge
[R3-GigabitEthernet1/0/13]port access vlan 1711
[R3-GigabitEthernet1/0/13]quit
[R3]int GigabitEthernet 1/0/10
# Assign interfaces G1/0/10 and G1/0/11 to VLAN 1711
[R3-GigabitEthernet1/0/10]port access vlan 1711
[R3-GigabitEthernet1/0/10]quit
[R3]int GigabitEthernet 1/0/11
[R3-GigabitEthernet1/0/11]port access vlan 1711
[R3-GigabitEthernet1/0/11]quit
AFC Device Service Port Configuration
To implement the AFC series deployment with active-standby redundancy, follow the configuration steps below:
Important Note: For configuration steps involving an [Apply Config] button, you must click this button to activate the settings. This requirement will not be reiterated in subsequent steps.
· Access the AFC System Web Interface
Open a web browser and go to https://192.168.0.1/ to access the AFC system login page. Log in with the default username "admin" and password "admin". For security, change the default password immediately after the first login.
Figure 4-2: AFC System Login Page
· Configure AFC device address and port types
Navigate to [System] → [Device] → [Device Management], click the [Setup] button on the right side of the device, select [Port Settings] from the left navigation bar, and click the [Modify] button. Set GE0/0 as the management port (configure the management IP address, subnet mask, and gateway), set GE1/2 as the synchronization interface, configure GE1/0 as the external serial network port and GE1/1 as the internal serial network port, and enable data port binding between them.
Figure 4-3: Configuration of GE0/0 Interface
Figure 4-4: Configuration of GE1/0 Interface
Figure 4-5: Configuration of GE1/1 Interface
Figure 4-6: Configuration of GE1/2 Interface
Important Note: For steps involving the [Apply Config] button, you must click it to activate the settings. Repeat the same configuration process on Standby Device 2 as on Primary Device 1.
AFC Device Primary/Standby Configuration
(1) Time Synchronization
1. Navigate to [System] →[Platform Config] →[NTP Time Syn] and manually configure the time.
2. Select the time zone: "Beijing, Chongqing, Hong Kong SAR, Urumqi".
3. Ensure the configured time matches the local PC time.
4. When activating the standby device, its time must be synchronized with the primary device; the maximum allowable time difference is 30 minutes.
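The 30-minute tolerance in step 4 can be sketched as a simple comparison. The datetimes below are illustrative stand-ins for the clock values read from each device:

```python
from datetime import datetime, timedelta

# Maximum allowed clock skew between primary and standby before activation.
MAX_SKEW = timedelta(minutes=30)

def clocks_compatible(primary: datetime, standby: datetime) -> bool:
    """True if the absolute time difference is within the allowed skew."""
    return abs(primary - standby) <= MAX_SKEW

primary = datetime(2024, 1, 1, 12, 0, 0)
assert clocks_compatible(primary, primary + timedelta(minutes=29)) is True
assert clocks_compatible(primary, primary + timedelta(minutes=31)) is False
```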
(2) Primary Device Add API Authentication Whitelist
Log in to Cluster Device 1 at https://192.168.0.1/ , navigate to [System] → [Platform Config] → [Login Security Config], add the standby device's management IP address (192.168.0.2) to the API authentication whitelist, and enable it.
(3) Standby Device Add API Authentication Whitelist
Log in to Cluster Device 2 at https://192.168.0.2/ , navigate to [System] → [Platform Config] → [Login Security Config], add the primary device's management IP address (192.168.0.1) to the API authentication whitelist, and enable it.
(4) Primary Device Configuration
Log in to Cluster Device 1 at https://192.168.0.1/ , navigate to [System] → [Platform Config] → [Active and standby], enable the active-standby switch, configure the floating management IP and floating management gateway, set the group ID, designate the local device role as primary, enter the standby device IP as 192.168.0.2, optionally enable preemption mode, automatic switchover for link exceptions, and automatic switchover for forwarding exceptions, then click [Apply].
Note: For the two devices configured in active-standby mode, the floating management IP, floating management gateway, and group ID must be identical; inconsistencies will cause the active-standby activation to fail.
Recommendation: Configure the floating IP within the same subnet as the management interface to ensure seamless communication.
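The recommendation above can be verified with a quick check that the floating IP falls inside the management subnet. The floating address 192.168.0.100 used here is an assumed example value, not taken from this document; only the management addresses 192.168.0.1/24 come from this example.

```python
import ipaddress

def same_subnet(ip: str, iface_ip: str, prefix: int) -> bool:
    """True if ip lies in the subnet defined by iface_ip/prefix."""
    net = ipaddress.ip_network(f"{iface_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(ip) in net

# Hypothetical floating IP in the management subnet of this example:
assert same_subnet("192.168.0.100", "192.168.0.1", 24) is True
# An address outside the management subnet would break floating-IP reachability:
assert same_subnet("192.168.1.100", "192.168.0.1", 24) is False
```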
(5) Standby Device Configuration
Log in to Cluster Device 2 at https://192.168.0.2/ , navigate to[System] → [Platform Config] → [Active and standby], enable the active-standby switch, configure the floating management IP and floating management gateway, set the group ID, designate the local device role as standby, enter the primary device IP as 192.168.0.1, optionally enable automatic switchover for link exceptions and forwarding exceptions, then click [Apply].
Note: For the two devices configured in active-standby mode, the floating management IP, floating management gateway, and group ID must be identical; inconsistencies will cause the active-standby activation to fail.
Configuration Validation
Click the [Refresh] button to display real-time device status information.
Verify Communication Between Client and Traffic Redirect Server
Execute the ping command to verify network connectivity between the client and the service routing device.
<h3c>ping 171.0.3.21
PING 171.0.3.21 (171.0.3.21) 56(84) bytes of data.
64 bytes from 171.0.3.21: icmp_seq=1 ttl=124 time=0.799 ms
64 bytes from 171.0.3.21: icmp_seq=2 ttl=124 time=0.736 ms
64 bytes from 171.0.3.21: icmp_seq=3 ttl=124 time=0.862 ms
64 bytes from 171.0.3.21: icmp_seq=4 ttl=124 time=1.47 ms
64 bytes from 171.0.3.21: icmp_seq=5 ttl=124 time=1.02 ms
--- 171.0.3.21 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 0.736/0.977/1.470/0.264 ms