- Table of Contents
- H3C SecPath AFC2000-EX0-G Series Abnormal Traffic Cleaning System Configuration Examples-5W100
- 00-Preface
- 01-Series Deployment Single-Machine Single-Channel and Multi-Channel Configuration Example.
- 02-BGP Layer 3 Bypass Return Path Configuration Example
- 03-BGP Auto-Diversion Deployment with Bypass and Abnormal Traffic Detection System Example
- 04-TCP Port Protection Configuration Example
- 05-AFC Comprehensive Protection Configuration Example
- 06-Typical Configuration Examples of Traction Management Example
- 07-OSPF Layer 2 Reintroduction Configuration Example
- 08-Cascaded Cluster and Dual-Node Active-Standby Configuration Example
- 09-Bypass BGP Layer 2 Return Traffic Configuration Example
- 10-OSPF-Based Three-Layer Return Injection Configuration Example
- 11-BGP-Based Three-Layer Injection Configuration Example for Bypass Single-Device Multi-Channel Deployment Example
- 12-BGP-Based Three-Layer Injection Configuration Example for Bypass Multi-Device Cluster Deployment Example
- 13-Bypass GRE Layer 3 Return Injection Configuration Example
- 14-Typical Configuration for HTTPS CC Protection Example
Traffic Cleaning Service Configuration Guide
Typical Configuration Example: AFC Bypass Cluster Deployment with BGP-Based Layer 3 Return Traffic
Example Configuration for BGP Layer 3 Return Mode
Applicable Products and Versions
Network Deployment Requirements
Verify the Configuration
Feature Introduction
The H3C SecPath abnormal traffic cleaning system is typically deployed in bypass mode alongside core network devices. While keeping normal business traffic uninterrupted, it filters DDoS attack traffic out of the underlying network, protecting both the underlying network and the services of high-priority customers.
The solution consists of two main components: the H3C SecPath AFC, a dedicated abnormal traffic cleaning device, and the H3C SecPath AFD, a dedicated abnormal traffic detection device.
The H3C SecPath AFD performs real-time attack detection and abnormal traffic analysis on user traffic replicated through port mirroring or optical splitting.
The H3C SecPath AFC diverts attack traffic to itself through route advertisement (for example, OSPF), filters attack packets, and reinjects the cleaned traffic back to the users. Routes can be advertised in two ways: statically, by manually advertising routes, or dynamically, by advertising host routes for the attacked hosts through linkage with the H3C SecPath AFD.
The H3C SecPath AFD and H3C SecPath AFC can each be deployed independently, and both provide detailed traffic log analysis reports, attack incident handling reports, and more.
Feature Usage
This document is not strictly version-specific to any particular software or hardware release. If discrepancies arise between the document and the actual product behavior during use, the device's actual status shall prevail.
All configurations presented in this document were performed and verified in a laboratory environment, with all device parameters initialized to factory default settings prior to configuration. If you have already configured the device, to ensure configuration effectiveness, please verify that your existing configuration does not conflict with the examples provided herein.
This document assumes you have prior knowledge of data communication technologies including VLAN, BGP, policy-based routing, and ACL configurations.
Configuration Guide
The H3C SecPath configuration includes basic setup and service-specific configuration for both the AFD and AFC devices, all performed through the web interface. Basic switch configuration is performed through the command line. This document takes bypass cluster deployment of the abnormal traffic cleaning equipment with BGP-based traffic diversion as an example.
Traffic Cleaning Service Configuration Guide
· The AFC cluster devices establish BGP neighbor relationships with the core device and advertise /32 routes for the protected IPs to it. The core device enables BGP equal-cost multi-path routing, so traffic destined for the protected IPs is diverted to the AFC cluster (a minimal sketch of the core-side configuration follows this list).
· Based on the core device's BGP load balancing, the diverted traffic is shared across the AFC cluster members, which clean it. The AFC then reinjects the cleaned, legitimate user traffic back into the underlying network, where it is forwarded to its intended destination.
· When reinjecting cleaned traffic into the downstream network over Layer 3 routing, two interface options are available: a single-arm Re-injection Interface, or a separate Traffic Diversion Interface and Traffic Re-injection Interface (the option used in this example).
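For orientation, the following is a minimal sketch of the core-device side of this design on a Comware-based switch, using the AS numbers and peer addresses from this example (the sysname Core is hypothetical; the full procedure appears in the configuration sections below):
[Core]bgp 65535
[Core-bgp]peer 171.0.0.2 as-number 65534
[Core-bgp]peer 171.0.4.2 as-number 65534
[Core-bgp]address-family ipv4 unicast
[Core-bgp-ipv4]peer 171.0.0.2 enable
[Core-bgp-ipv4]peer 171.0.4.2 enable
# Allow up to two equal-cost BGP routes, so the /32 advertised by both AFC devices
# is installed with both next hops and diverted traffic is load-shared across the cluster
[Core-bgp-ipv4]balance 2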
Precautions
Configuration commands vary across switches/routers from different vendors and models. Please refer to the device operation manual for specific configuration procedures.
Typical Configuration Example: AFC Bypass Cluster Deployment with BGP-Based Layer 3 Return Traffic
Introduction
This chapter describes a typical configuration in which the BGP equal-cost multi-path (ECMP) routing protocol is used to form an AFC bypass cluster. After traffic is diverted and cleaned, the cleaned traffic is returned directly to the downstream Layer 3 devices, which forward it according to their local routing tables.
Usage Restrictions
The application scenario for the Layer 3 injection mode is as follows: The core device performing route diversion with the AFC is connected downstream to network devices that are either Layer 3 switches or routers, and these devices support the BGP routing protocol.
Example Configuration for BGP Layer 3 Return Mode
Applicable Products and Versions
This configuration applies to H3C SecPath AFC devices.
Software Version: H3C i-Ware Software, Version 7.1, ESS 6401.
Network Deployment Requirements
To implement traffic cleaning for the protected IP 171.0.3.21 against attacks, two AFC devices are deployed in bypass mode on the core switch. The core switch establishes BGP neighbor relationships with AFC Cluster Device 1 and AFC Cluster Device 2 through interfaces G1/0/18 and G1/0/19 respectively, which connect to the GE1/0 interfaces of the AFC devices. This setup forms BGP equal-cost multi-path (ECMP) routing to enable traffic diversion and cleaning.
The cleaned traffic is directly returned to the downstream Layer 3 switch through the GE1/1 interfaces of the AFC devices. The downstream switch then forwards the traffic based on its local routing table.
Figure 0-1 AFC Network Diagram for Bypass Cluster Deployment with Layer 3 Return Mode Configuration
The specific implementation steps are as follows:
· Host Route Advertisement: The AFC cluster devices establish BGP neighbor relationships with the core switch through the GE1/0 interfaces, and advertise a /32 route for the protected IP to the core switch.
· Host Traffic Cleaning: The core switch diverts the protected host's traffic to the AFC device, which then filters out abnormal traffic from the host traffic using cleaning policies.
· Traffic Reinjection: The AFC device injects the cleaned traffic back into the downstream network via the GE1/1 interface, and the downstream devices forward the traffic according to their local routing tables (a quick check of that forwarding path is sketched after this list).
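Because reinjection relies only on ordinary routing on the downstream switch, you can confirm the forwarding path toward the protected host with a routing-table lookup. A command sketch, assuming the downstream switch is named R3 as in the procedures below (output omitted, as it depends on the platform):
# The protected host 171.0.3.21 should resolve to the directly connected VLAN 1713 subnet on R3
[R3]display ip routing-table 171.0.3.21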
Table 0-1 VLAN Allocation List
| VLAN ID | Function Description | IP Address |
|---|---|---|
| 1710 | The core switch establishes a BGP neighbor relationship with AFC Cluster Device 1. | 171.0.0.1/24 |
| 1714 | The core switch establishes a BGP neighbor relationship with AFC Cluster Device 2. | 171.0.4.1/24 |
| 1711 | The core switch's Layer 3 VLAN interface connected to the downstream network; the downstream switch's Layer 3 VLAN interface connected to the core switch | Upstream device: 171.0.1.1/24; downstream device: 171.0.1.2/24 |
| 1712 | The downstream switch establishes a Layer 3 connection with the channel output port of AFC Cluster Device 1. | 171.0.2.1/24 |
| 1715 | The downstream switch establishes a Layer 3 connection with the channel output port of AFC Cluster Device 2. | 171.0.5.1/24 |
| 1713 | The VLAN where the protected host resides; the gateway address of the protected host | 171.0.3.1/24 |
Table 0-2 AFC Interface IP Assignment Table
| Interface | Function Description | IP Address |
|---|---|---|
| GE1/0 | The core switch R2 establishes BGP neighbor relationships with the AFC cluster devices. | Cluster Device 1: 171.0.0.2/24; Cluster Device 2: 171.0.4.2/24 |
| GE1/1 | The downstream switch R3 establishes a routing (reinjection) channel with the AFC cluster devices. | Cluster Device 1: 171.0.2.2/24; Cluster Device 2: 171.0.5.2/24 |
| GE1/2 | The synchronization port interconnecting the two AFC devices | — |
| GE0/0 | AFC management port | Cluster Device 1: 192.168.0.1/24; Cluster Device 2: 192.168.0.2/24 |
Configuration Approach
To implement the BGP Layer 3 return mode configuration for AFC bypass cluster deployment, you can follow the configuration approach outlined below:
Configure basic network settings on the core switch R2
Configure the interconnection between GigabitEthernet1/0/17 on core switch R2 and GigabitEthernet1/0/17 on downstream switch R3.
Configure BGP neighbor relationships on the core switch R2
Enable BGP processes on both the AFC cluster devices and the core switch R2, establish mutual neighbor relationships, and configure equal-cost multipath routing on the core switch's BGP process to achieve load balancing of redirected traffic across the AFC cluster devices.
Configure basic network parameters on the downstream switch R3.
Configure the interconnection between GigabitEthernet1/0/17 on downstream switch R3 and GigabitEthernet1/0/17 on core switch R2.
Configure service ports on the AFC devices.
Set IP addresses and port types for the service interfaces so that they can communicate with core switch R2 and downstream switch R3. Configure the service ports as a Traffic Diversion Interface and a Traffic Re-injection Interface, ensuring that they are different physical ports.
Configure BGP routing on the AFC devices:
Configure BGP neighbor relationships on the AFC devices to establish mutual adjacency with the core switch.
Configure AFC device cluster management
Access the web interface of the second device and set it as a node. Then log in to the web interface of the first device and add the second device to form the managed cluster.
Configure AFC host route steering and traffic cleaning
The AFC advertises host routes for the protected hosts to the core device.
The core device steers traffic to the AFC cluster in load-sharing mode based on BGP equal-cost routes.
The AFC cleans the host traffic according to its defense policies.
The cleaned traffic is reinjected back into the downstream network.
Configuration Procedures
Configure basic network settings on the core switch R2.
Create VLANs 1710, 1711, and 1714 on the core switch R2 with the following purposes:
VLAN 1710:
Purpose: Direct connection and route steering between R2's Layer 3 switch port GE1/0/18 and the AFC channel input port
VLAN 1711:
Purpose: Interconnection and routing with the downstream devices (the interconnecting port configuration is sketched after the code below)
VLAN 1714:
Purpose: Direct connection and route steering between R2's Layer 3 switch port GE1/0/19 and the AFC channel input port
# Create VLAN
[R2]vlan 1710
[R2-vlan1710]quit
[R2]vlan 1711
[R2-vlan1711]quit
[R2]vlan 1714
[R2-vlan1714]quit
# Configure VLAN IP
[R2]interface Vlan-interface1710
[R2-Vlan-interface1710]ip address 171.0.0.1 255.255.255.0
[R2-Vlan-interface1710]quit
[R2]interface Vlan-interface1711
[R2-Vlan-interface1711]ip address 171.0.1.1 255.255.255.0
[R2-Vlan-interface1711]quit
[R2]interface Vlan-interface1714
[R2-Vlan-interface1714]ip address 171.0.4.1 255.255.255.0
[R2-Vlan-interface1714]quit
# Check the interface configuration for interconnection with AFC
[R2]int GigabitEthernet 1/0/18
[R2-GigabitEthernet1/0/18]dis this
#
interface GigabitEthernet1/0/18
port link-mode bridge
port access vlan 1710
[R2]int GigabitEthernet 1/0/19
[R2-GigabitEthernet1/0/19]dis this
#
interface GigabitEthernet1/0/19
port link-mode bridge
port access vlan 1714
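The configuration approach also calls for interconnecting GigabitEthernet1/0/17 on R2 with GigabitEthernet1/0/17 on R3. The following is a minimal sketch of that step, assuming the port is an access port in VLAN 1711 per the VLAN plan; the static route toward the protected subnet is likewise an assumption about how R2 reaches the downstream network when no /32 diversion route is present:
# Assign the downstream-facing port to VLAN 1711 (interconnection with R3)
[R2]interface GigabitEthernet 1/0/17
[R2-GigabitEthernet1/0/17]port access vlan 1711
[R2-GigabitEthernet1/0/17]quit
# Assumed static route: without diversion, R2 reaches the protected subnet 171.0.3.0/24 via R3 (171.0.1.2)
[R2]ip route-static 171.0.3.0 24 171.0.1.2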
Configure BGP neighbor relationships on the core switch R2.
# Configure BGP process (establish separate BGP sessions for two AFC devices, with configuration example provided only for the first BGP session):
[R2]bgp 65535
# Enter BGP view with local AS number 65535
[R2-bgp]router-id 171.0.0.1
# Configure the router ID for the BGP process
[R2-bgp]undo synchronization
[R2-bgp]peer 171.0.0.2 as-number 65534
# Specify the BGP peer (AFC Cluster Device 1) with AS number 65534
[R2-bgp]peer 171.0.0.2 description afc_01
# Configure the peer description as "afc_01"
[R2-bgp]address-family ipv4 unicast
[R2-bgp-ipv4]peer 171.0.0.2 enable
# Enable IPv4 unicast route exchange with the peer
[R2-bgp-ipv4]peer 171.0.0.2 preferred-value 1
# Assign a preferred value to routes received from the peer
[R2-bgp-ipv4]peer 171.0.0.2 keep-all-routes
# Keep all original routes received from the peer, even routes that fail the inbound policy
If configuring BGP IPv6 protocol, enter the BGP IPv6 unicast address family view.
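The procedure above shows only the first BGP session. The following sketch covers the remaining core-switch configuration, assuming the second AFC peer (171.0.4.2, reached over VLAN 1714) is configured the same way and that BGP load balancing is enabled so diverted traffic is shared across both cluster members (the description afc_02 is illustrative):
[R2]bgp 65535
[R2-bgp]peer 171.0.4.2 as-number 65534
[R2-bgp]peer 171.0.4.2 description afc_02
[R2-bgp]address-family ipv4 unicast
[R2-bgp-ipv4]peer 171.0.4.2 enable
[R2-bgp-ipv4]peer 171.0.4.2 preferred-value 1
[R2-bgp-ipv4]peer 171.0.4.2 keep-all-routes
# Allow up to two equal-cost BGP routes so the /32 advertised by both AFC devices is load-shared
[R2-bgp-ipv4]balance 2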
Configure basic network settings on the downstream switch R3.
Create VLANs 1711, 1712, 1713, and 1715 on the downstream switch R3 with the following purposes:
VLAN 1711:
Purpose: Direct routing connection between core switch R2 and downstream network
VLAN 1712:
Purpose: Direct routing communication between R3's GE1/0/18 port and AFC channel output
VLAN 1713:
Purpose: Subnet for the downstream network
VLAN 1715:
Purpose: Direct routing communication between R3's GE1/0/19 port and AFC channel output
# Create VLAN
[R3]vlan 1711
[R3-vlan1711]quit
[R3]vlan 1712
[R3-vlan1712]quit
[R3]vlan 1713
[R3-vlan1713]quit
[R3]vlan 1715
[R3-vlan1715]quit
# Configure VLAN IP
[R3]int Vlan-interface 1711
[R3-Vlan-interface1711]ip address 171.0.1.2 24
[R3-Vlan-interface1711]quit
[R3]int Vlan-interface 1712
[R3-Vlan-interface1712]ip address 171.0.2.1 24
[R3-Vlan-interface1712]quit
[R3]int Vlan-interface 1713
[R3-Vlan-interface1713]ip address 171.0.3.1 24
[R3-Vlan-interface1713]quit
[R3]int Vlan-interface 1715
[R3-Vlan-interface1715]ip address 171.0.5.1 24
[R3-Vlan-interface1715]quit
# Check interface configuration
[R3]int GigabitEthernet 1/0/18
[R3-GigabitEthernet1/0/18]dis this
#
interface GigabitEthernet1/0/18
port link-mode bridge
port access vlan 1712
[R3]int GigabitEthernet 1/0/19
[R3-GigabitEthernet1/0/19]dis this
#
interface GigabitEthernet1/0/19
port link-mode bridge
port access vlan 1715
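The downstream switch's interconnection toward R2 mirrors the step sketched for R2. A minimal sketch, assuming GigabitEthernet1/0/17 is an access port in VLAN 1711 and that R3 sends traffic leaving the protected subnet to R2 through a default route (both are assumptions based on the network plan, not explicit steps in the source):
# Assign the uplink port to VLAN 1711 (interconnection with core switch R2)
[R3]interface GigabitEthernet 1/0/17
[R3-GigabitEthernet1/0/17]port access vlan 1711
[R3-GigabitEthernet1/0/17]quit
# Assumed default route: outbound traffic from the protected subnet is forwarded to R2 (171.0.1.1)
[R3]ip route-static 0.0.0.0 0 171.0.1.1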
Configure the Management Port and Service Ports of the AFC equipment
To implement the BGP Layer 3 return injection mode for AFC bypass cluster deployment, follow the configuration steps below. (The configuration example is provided for the first AFC device only; the second AFC device requires the same configuration with its own addresses.)
Important Note: For configuration steps that include an [Apply Config] button, you must click this button to activate the configuration. This will not be reiterated in subsequent steps.
Log in to the AFC system web interface.
Access the login page via web browser: https://192.168.0.1 (Username: admin, Password: admin)
Figure 0-2 AFC System Login Page
Configure the AFC management IP address.
Navigate to [System] → [Device] → [Device Manage], click the [Setup] button on the right side of the target device, select [Port Settings] from the left navigation menu, then click the [Edit] button to configure the management interface's IP address, subnet mask, and gateway.
Configure the AFC service IP address and port type.
Navigate to [System] → [Device] → [Device Management], click the [Setup] button on the right side of the target device
Select [Port Settings]from the left navigation menu
Click [Edit] to configure:
Set IP address, subnet mask, and gateway for GE1/0 and GE1/1
Configure port binding settings for these interfaces
Set GE1/2 as [Synchronization Interface] for inter-cluster data synchronization
Set the IP of GE1/0 to 171.0.0.2, the port type to Traffic Diversion Interface, and the data port to GE1/1. The IPv4 next hop is the address of the peer switch interface in the inbound direction, 171.0.0.1.
Figure 0-3 Configuration of GE1/0
Set the IP of GE1/1 to 171.0.2.2, the port type to Traffic Re-injection Interface & Main Link, and the data port to GE1/0. The IPv4 next hop is the address of the peer switch interface in the outbound direction, 171.0.2.1.
Figure 0-4 Configuration of GE1/1
Configure GE1/2 as a Synchronization Interface. A synchronization interface cannot have a gateway configured.
Figure 0-5 Configuration of GE1/2
BGP Routing Configuration for AFC Equipment
After completing the address and port type configurations, click the [Routing Configuration] menu at the bottom, select [BGP Config], check Enable BGP, and click [Apply Config]. Then follow the steps below to complete the configuration.
Local BGP Configuration:
Navigate to [System] → [Device Management], click the [Setup] button in the row corresponding to device 127.0.0.1, then access [Routing Configuration] → [BGP Config] to perform the following operations:
Check [Start BGP]
· Local AS: 65534 // AFC device AS number
· Local Port: 179 // Default port 179
Click [Save] to apply the configuration.
AFC Device Local BGP Configuration
Figure 0-6 Enabling BGP
Peer BGP Configuration
Click the [Add] button to configure BGP peer information:
· Peer AS: 65535 // Enter the core switch's AS number when BGP is already running on it
· Peer Port: 179 // Default port 179
· LocalPref/MED: 100 // Default value 100
· Peer IP: 171.0.0.1 (IPv4 next-hop address of GE1/0 interface)
Click [Save] to complete the peer address configuration.
Figure 0-7 BGP Peer Configuration for AFC Equipment
Apply BGP Configuration
Click [Apply Config] to activate the BGP configuration.
AFC Equipment Cluster Management Configuration
To facilitate unified management of multiple AFC devices in a cluster, this document designates the first device as the web primary management node and the second device as a cluster node. The web interface of the primary device is used to add the management IP address of the second device for centralized management.
Switch AFC Cluster Device 2 to Node Mode
First, log in to AFC Cluster Device 2 via the web. After logging in, change the URL to https://192.168.0.2/role and open it. Click [Node] to switch the role, using the same account and password as the current web login. See the figure below for reference.
Figure 0-8 Node Switching
Add Node to AFC Cluster Device 1
Log in to Cluster Device 1 at https://192.168.0.1, then navigate to [System] → [Device] → [Add Device]. Enter the management address of Node 2 (192.168.0.2) to add it for unified management. See the figure below for reference.
Figure 0-9 Add Node
After adding the device, wait for 60 seconds for the device activation to complete. See the figure below for reference.
Figure 0-10 Device Activation
AFC Equipment Route Steering and Traffic Cleaning
Log in to the AFC device, navigate to [Steer Config] → [Traffic Steering Status], click [Manual Steering], and perform the diversion operation on the user's internal test address. In this example, the diversion address is 171.0.3.21; select the diversion action and click [Ensure] to complete the operation.
Figure 0-11 Diverting the user service address 171.0.3.21
After the traffic is directed into the AFC device, it can automatically employ default policies to mitigate and defend against DDoS attacks.
Verify the Configuration
Verify connectivity between the core switch R2 and the AFC device's cleaning service port
Test connectivity between the core switch R2 and the AFC device via ping
[R2]ping -a 171.0.0.1 171.0.0.2
PING 171.0.0.2: 56 data bytes, press CTRL_C to break
Reply from 171.0.0.2: bytes=56 Sequence=1 ttl=64 time=3 ms
Reply from 171.0.0.2: bytes=56 Sequence=2 ttl=64 time=3 ms
Reply from 171.0.0.2: bytes=56 Sequence=3 ttl=64 time=3 ms
Reply from 171.0.0.2: bytes=56 Sequence=4 ttl=64 time=3 ms
Reply from 171.0.0.2: bytes=56 Sequence=5 ttl=64 time=3 ms
--- 171.0.0.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 3/3/3 ms
Verify whether the BGP neighbor relationship is established between the core device and the AFC device
Log in to the core device and execute the "display bgp peer" command to check the BGP establishment status.
[Sysname] display bgp peer
BGP local router ID : 171.0.0.1
Local AS number : 65535
Total number of peers : 1 Peers in established state : 1
Peer AS MsgRcvd MsgSent OutQ PrefRcv Up/Down State
171.0.0.2 65534 5 3 0 0 00:01:59 Established
Verify whether the route steering between the core switch R2 and the AFC device is successful. Successful steering results in a /32 route for this host in the routing table.
Check the routing table of the core switch R2.
[R2]display bgp routing-table
Total Number of Routes: 1
BGP Local router ID is 171.0.0.1
Status codes: * - valid, ^ - VPNv4 best, > - best, d - damped,
h - history, i - internal, s - suppressed, S - Stale
Origin : i - IGP, e - EGP, ? - incomplete
Network NextHop MED LocPrf PrefVal Path/Ogn
* > 171.0.3.21/32 171.0.0.2 0 1 65534i
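When both AFC cluster devices have advertised the /32 and BGP load balancing is enabled on R2, both next hops (171.0.0.2 and 171.0.4.2) should be installed as equal-cost routes. A quick check (command sketch only; the exact output depends on the switch model and software version):
# Both AFC next hops should appear for the protected host route
[R2]display ip routing-table 171.0.3.21 32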
Verify whether communication between the client and the protected (diverted) server is normal
Test whether the client can communicate with the service route via ping
[root@AFCTest_Client ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:9D:1B:7A
inet addr:184.0.0.75 Bcast:184.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe9d:1b7a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:257120 errors:0 dropped:0 overruns:0 frame:0
TX packets:47273087 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:28882056 (27.5 MiB) TX bytes:3460908912 (3.2 GiB)
[root@AFCTest_Client ~]# ping -c 5 171.0.3.21
PING 171.0.3.21 (171.0.3.21) 56(84) bytes of data.
64 bytes from 171.0.3.21: icmp_seq=1 ttl=124 time=0.799 ms
64 bytes from 171.0.3.21: icmp_seq=2 ttl=124 time=0.736 ms
64 bytes from 171.0.3.21: icmp_seq=3 ttl=124 time=0.862 ms
64 bytes from 171.0.3.21: icmp_seq=4 ttl=124 time=1.47 ms
64 bytes from 171.0.3.21: icmp_seq=5 ttl=124 time=1.02 ms
--- 171.0.3.21 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.736/0.977/1.470/0.266 ms