H3C SecPath AFC2000-EX0-G Series Abnormal Traffic Cleaning System Configuration Examples-5W100

11-BGP-Based Layer 3 Injection Configuration Example for Bypass Single-Device Multi-Channel Deployment

Feature Introduction

The H3C SecPath is typically deployed in bypass mode alongside core network devices. While ensuring uninterrupted normal business operations, it filters out DDoS attack traffic emerging from the underlying network, thereby safeguarding both the underlying network and high-priority customer network services.

The H3C SecPath consists of two main components: the H3C SecPath AFC (AFC) and the H3C SecPath AFD (AFD), which is a specialized abnormal traffic cleaning device.

The H3C SecPath AFD performs real-time attack detection and abnormal traffic analysis on user traffic replicated via port mirroring or optical splitting.

The H3C SecPath AFC diverts attack traffic through route advertisement (OSPF or BGP), filters attack packets, and reinjects the cleaned traffic back to users. Routes can be advertised in two ways: statically, by manually advertising routes, or dynamically, by advertising host-specific routes for attacked hosts through linkage with the H3C SecPath AFD.

The H3C SecPath AFD and H3C SecPath AFC can each be deployed independently, and both provide users with detailed traffic log analysis reports, attack incident handling reports, and more.

  Feature Usage

This document is not strictly version-specific to any particular software or hardware release. If discrepancies arise between the document and the actual product behavior during use, the device's actual status shall prevail.

All configurations presented in this document were performed and verified in a laboratory environment, with all device parameters initialized to factory default settings prior to configuration. If you have already configured the device, to ensure configuration effectiveness, please verify that your existing configuration does not conflict with the examples provided herein.

This document assumes that you are already familiar with VLAN, BGP, and link aggregation features.

Configuration Guide

The H3C SecPath configuration includes basic setups and service-specific configurations for both AFD and AFC devices, all performed via the web interface. Switch basic configurations are implemented through the command line. This configuration example demonstrates a bypass deployment of the AFC device with BGP-based traffic diversion and cleaning.

 Traffic Cleaning Service Configuration Guide

·     The AFC cluster devices establish BGP neighbor relationships with the core device. The core device enables BGP equal-cost multi-path routing, and the AFC advertises 32-bit (/32) host routes for the protected IPs to the core device, diverting the protected traffic to the AFC.

·     Based on the BGP load-balancing method of the core device, the AFC cleans the traffic diverted from the core device, reinjects the cleaned legitimate user traffic back into the underlying network, and ultimately forwards it to the intended destination devices.

·     When reinjecting cleaned traffic to the downstream network via Layer 3 routing, there are two methods: a single-arm reinjection interface (diversion and reinjection share one interface), or separate traffic diversion and traffic reinjection interfaces.
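The diversion relies on longest-prefix-match forwarding: the /32 host route advertised by the AFC is more specific than the broader route toward the protected network, so only the attacked host's traffic is pulled to the AFC. A minimal sketch in plain Python (illustrative only, not device code; the addresses are taken from the example later in this document):

```python
# Why a /32 diversion route attracts only the protected host's traffic:
# routers forward on the longest matching prefix, so the /32 advertised by
# the AFC wins over the broader /24 toward the user network.
import ipaddress

def best_route(dst, routes):
    """Return the (prefix, next_hop) pair with the longest prefix containing dst."""
    dst = ipaddress.ip_address(dst)
    matches = [(p, nh) for p, nh in routes if dst in p]
    return max(matches, key=lambda r: r[0].prefixlen)

routes = [
    (ipaddress.ip_network("171.0.3.0/24"), "171.0.1.2"),   # normal path toward R3
    (ipaddress.ip_network("171.0.3.21/32"), "171.0.0.2"),  # BGP route from the AFC
]

print(best_route("171.0.3.21", routes)[1])  # protected host: diverted to 171.0.0.2
print(best_route("171.0.3.99", routes)[1])  # other hosts: unaffected, 171.0.1.2
```

Hosts outside the /32 keep following the normal path, which is why normal business traffic is not interrupted during cleaning.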

 Precautions

·     Configuration commands vary across switches/routers from different vendors and models. Please refer to the device operation manual for specific configuration procedures.

Typical Configuration Example for AFC Bypass Single-Device Multi-Channel Deployment with BGP Layer 3 Injection

 Introduction

This chapter describes the configuration of AFC bypass single-device multi-channel deployment using port aggregation. A Layer 3 address is configured on the inbound aggregation port to establish a BGP neighbor relationship with the core switch R2's port aggregation group for traffic diversion. The cleaning mode directly reinjects the cleaned traffic through the outbound aggregation port back to the downstream Layer 3 device's aggregation group, which then forwards the traffic based on its local routing table.

 Usage Restrictions

The application scenario for the Layer 3 injection mode is as follows: The core device performing route diversion with the AFC is connected downstream to network devices that are either Layer 3 switches or routers, and these devices support the BGP routing protocol.

Configuration Example for BGP Single-Device Multi-Channel Layer 3 Injection Mode

Applicable Products and Versions

This configuration applies to H3C SecPath AFC devices.

Software Version: H3C i-Ware Software, Version 7.1, ESS 6401.

Network Deployment Requirements

To enable traffic cleaning for attack traffic targeting the protected IP 171.0.3.21, one AFC device is deployed in bypass mode on the core switch. The core switch R2 aggregates interfaces G1/0/18 and G1/0/19 into Port Aggregation Group 8, while the AFC aggregates its input ports GE1/0 and GE1/1. A BGP neighbor relationship is established between the core switch R2's port aggregation group and the AFC's input port aggregation group to divert traffic for cleaning.

The AFC aggregates its output ports GE1/2 and GE1/3, establishing a direct routing relationship with the port aggregation group formed by the downstream switch R3's G1/0/18 and G1/0/19. The AFC reinjects cleaned traffic through its output port aggregation group directly back to the downstream Layer 3 switch's aggregation group. The downstream switch R3 then forwards the traffic based on its local routing table.

The network topology is illustrated in Figure 0-1.

Figure 0-1 Layer 3 Reinjection Mode Configuration Topology for AFC Bypass Single-Device Multi-Channel Deployment

 

The specific implementation steps are as follows:

·     Enable aggregation on all AFC channel input ports and establish a BGP neighbor relationship with the core switch R2's port aggregation group. The AFC advertises 32-bit routes for the protected IP to the core switch R2.

·     The core switch R2 diverts the protected host traffic to the AFC, which then removes abnormal traffic from the host traffic flow through predefined cleaning policies.

·     Enable aggregation on all AFC channel output ports. The AFC reinjects the cleaned traffic back to the downstream network's port aggregation group through its output aggregation port. The downstream devices then forward the traffic based on their local routing tables.
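The aggregation groups above carry the diverted and reinjected traffic across two member links each. Real switches distribute flows by hashing packet header fields in hardware so that a given flow always uses the same member link (preserving packet order); the following plain-Python model only illustrates that idea:

```python
# Illustrative model of per-flow link selection in an aggregation group:
# hash the flow's header fields and pick a member link by modulo. Real
# devices do this in hardware with vendor-specific hash inputs.
import zlib

def member_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Deterministically map one flow to one member link of the group."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# The same 4-tuple always hashes to the same link, so one flow never
# gets reordered across the two members of Aggregation Group 8.
a = member_link("184.0.0.75", "171.0.3.21", 40000, 80, 2)
b = member_link("184.0.0.75", "171.0.3.21", 40000, 80, 2)
assert a == b
print(a)
```

Because selection is per flow, aggregate load balances across G1/0/18 and G1/0/19 only when there are many distinct flows.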

Table 0-1 VLAN Allocation List

| VLAN ID | Function Description | IP Address |
| 1710 | The core switch R2's port aggregation group establishes a BGP neighbor relationship with the AFC's input port aggregation group. | 171.0.0.1/24 |
| 1711 | Layer 3 VLAN interfaces interconnecting core switch R2 and downstream switch R3. | R2: 171.0.1.1/24, R3: 171.0.1.2/24 |
| 1712 | The downstream switch R3's port aggregation group establishes a Layer 3 connection with the AFC's output port aggregation group. | 171.0.2.1/24 |
| 1713 | The VLAN where the protected host resides; the interface address is the protected host's gateway. | 171.0.3.1/24 |

Table 0-2 AFC Interface IP Assignment Table

| Interface | Function Description | IP Address |
| BAGG1 | Aggregates all inbound ports across AFC channels into a unified aggregation interface; the core switch R2 establishes a BGP neighbor relationship with this inbound aggregation group. | 171.0.0.2/24 |
| BAGG2 | Aggregates all outbound ports across AFC channels; forms the routing channel between the downstream switch R3's aggregation group and the AFC's outbound aggregation port. | 171.0.2.2/24 |
| GE0/0 | AFC management port. | 192.168.0.1/24 |

 Configuration Approach

To implement BGP Layer 3 reinjection mode for AFC bypass single-device multi-channel deployment, follow this structured configuration approach:

Configure basic network on core switch R2

Configure the G1/0/17 interface on core switch R2 to interconnect with the G1/0/17 interface on downstream switch R3.

Configure port aggregation on core switch R2

Create the port aggregation group on core switch R2, add the member interfaces, and assign the aggregation group to its Layer 3 VLAN.

Configure BGP neighbor on Core Switch R2

Enable the BGP process on both the AFC and core switch R2 and establish a neighbor relationship between them.

Configure basic network on downstream switch R3

Configure the G1/0/17 interface on downstream switch R3 to interconnect with the G1/0/17 interface on core switch R2.

Configure port aggregation on AFC device

Aggregate the input interfaces of the AFC's two channels into a single aggregation port connected to the upstream switch's aggregation group, and aggregate the output interfaces of the two channels into another aggregation port connected to the downstream switch R3's aggregation group.

Configure service ports on AFC device

Configure IP addresses and port types for the AFC device's service ports to enable communication with core switch R2, and set the service port types to traffic diversion interface and traffic reinjection interface.

Configure BGP routing on AFC device

Configure BGP adjacency between AFC device and core switch to establish mutual neighbor relationship.

Configure AFC host route diversion and traffic cleaning

The AFC device diverts user business traffic, cleans the traffic based on defense policies, and sends the cleaned traffic back to the core device.

Configuration Procedures

Configure basic network on core switch R2

Create VLAN 1710 and VLAN 1711, where VLAN 1710 corresponds to the 171.0.0.0/24 subnet for direct communication and route diversion between R2's Layer 3 switch port aggregation group 8 and AFC channel input port aggregation group; VLAN 1711 corresponds to the 171.0.1.0/24 subnet for routing with downstream devices.
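Before typing the configuration in, the addressing plan from Table 0-1 can be sanity-checked offline. The following is illustrative Python (not device configuration): it verifies that the four VLAN subnets do not overlap and that each gateway address sits inside its VLAN's subnet.

```python
# Offline sanity check of the addressing plan in Table 0-1.
import ipaddress

vlans = {
    1710: ("171.0.0.1", "171.0.0.0/24"),  # R2 <-> AFC BAGG1 (BGP peering)
    1711: ("171.0.1.1", "171.0.1.0/24"),  # R2 <-> R3 interconnect
    1712: ("171.0.2.1", "171.0.2.0/24"),  # R3 <-> AFC BAGG2 (reinjection)
    1713: ("171.0.3.1", "171.0.3.0/24"),  # protected host network
}

nets = {vid: ipaddress.ip_network(net) for vid, (_, net) in vlans.items()}

# Each gateway must belong to its own VLAN subnet.
for vid, (gw, _) in vlans.items():
    assert ipaddress.ip_address(gw) in nets[vid], f"VLAN {vid}: gateway outside subnet"

# No two VLAN subnets may overlap.
ids = list(nets)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        assert not nets[a].overlaps(nets[b]), f"VLAN {a} and {b} overlap"

print("addressing plan is consistent")
```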

# Create VLAN

[R2]vlan 1710

[R2-vlan1710]quit

[R2]vlan 1711

[R2-vlan1711]quit

# Configure VLAN IP

[R2]interface Vlan-interface1710

[R2-Vlan-interface1710]ip address 171.0.0.1 255.255.255.0

[R2-Vlan-interface1710]quit

[R2]interface Vlan-interface1711

[R2-Vlan-interface1711]ip address 171.0.1.1 255.255.255.0

Configure port aggregation on core switch R2

# Create port aggregation group and add interfaces to the aggregation group

[R2]int Bridge-Aggregation 8

# Add interfaces G1/0/18 and G1/0/19 to the port aggregation group

[R2]int GigabitEthernet 1/0/18

[R2-GigabitEthernet1/0/18]port link-aggregation group 8

[R2]int GigabitEthernet 1/0/19

[R2-GigabitEthernet1/0/19]port link-aggregation group 8

# Configure VLAN information for the port aggregation group.

[R2]int Bridge-Aggregation 8

[R2-Bridge-Aggregation8]port access vlan 1710

# Check the configuration of interfaces in the port aggregation group.

[R2]int GigabitEthernet 1/0/18

[R2-GigabitEthernet1/0/18]dis this

#

interface GigabitEthernet1/0/18

 port link-mode bridge

 port access vlan 1710

 port link-aggregation group 8

[R2]int GigabitEthernet 1/0/19

[R2-GigabitEthernet1/0/19]dis this

#

interface GigabitEthernet1/0/19

 port link-mode bridge

 port access vlan 1710

 port link-aggregation group 8

 

Configure BGP neighbor on core switch R2

# Configure BGP process on core switch R2

# Enable BGP with local AS number 65535.

[R2]bgp 65535

# Configure the router ID.

[R2-bgp]router-id 171.0.0.1

# Disable synchronization between BGP and the IGP.

[R2-bgp]undo synchronization

# Specify the AFC peer 171.0.0.2 with AS number 65534.

[R2-bgp]peer 171.0.0.2 as-number 65534

# Set the peer description to "afc2100_01".

[R2-bgp]peer 171.0.0.2 description afc2100_01

# Set the preferred value for routes received from the peer (a larger value is preferred).

[R2-bgp]peer 171.0.0.2 preferred-value 1

# Keep all raw routing information received from the peer, even routes that fail the configured inbound policy.

[R2-bgp]peer 171.0.0.2 keep-all-routes

# Activate the IPv4 unicast address family and allow the local router to exchange IPv4 unicast routing information with the peer.

[R2-bgp]address-family ipv4 unicast

[R2-bgp-ipv4]peer 171.0.0.2 enable
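The effect of `preferred-value` can be pictured with a small selection sketch in plain Python (illustrative, not device code; the second peer below is hypothetical, added only to show the comparison, and the "larger value wins" rule is the assumption this sketch encodes):

```python
# What `peer ... preferred-value` influences: when the same prefix is learned
# from several peers, the route from the peer with the larger preferred value
# is selected.
def select_route(candidates):
    """Pick the candidate route with the highest preferred value."""
    return max(candidates, key=lambda c: c["preferred_value"])

candidates = [
    {"peer": "171.0.0.2", "preferred_value": 1},   # the AFC peer configured above
    {"peer": "171.0.0.99", "preferred_value": 0},  # hypothetical additional peer
]

print(select_route(candidates)["peer"])  # -> 171.0.0.2
```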

Note: If BGP IPv6 is configured, you must enter the BGP IPv6 unicast view.

 

Configure basic network on downstream switch R3

Create VLANs 1711, 1712, and 1713 with their corresponding IP subnets: VLAN 1711 (171.0.1.0/24) interconnects with core switch R2, VLAN 1712 (171.0.2.0/24) provides the Layer 3 connection to the AFC's output aggregation group, and VLAN 1713 (171.0.3.0/24) is the protected host network.

# Create VLAN

[R3]vlan 1711

[R3-vlan1711]quit

[R3]vlan 1712

[R3-vlan1712]quit

[R3]vlan 1713

[R3-vlan1713]quit

# Configure VLAN IP

[R3]int Vlan-interface 1711

[R3-Vlan-interface1711]ip address 171.0.1.2 24

[R3-Vlan-interface1711]quit

[R3]int Vlan-interface 1712

[R3-Vlan-interface1712]ip address 171.0.2.1 24

[R3-Vlan-interface1712]quit

[R3]int Vlan-interface 1713

[R3-Vlan-interface1713]ip address 171.0.3.1 24

[R3-Vlan-interface1713]quit

# Create a port aggregation group and add the interface to the aggregation group

[R3]int Bridge-Aggregation 8

# Add interfaces G1/0/18 and G1/0/19 to the aggregation group

[R3]int GigabitEthernet 1/0/18

[R3-GigabitEthernet1/0/18]port link-aggregation group 8

[R3]int GigabitEthernet 1/0/19

[R3-GigabitEthernet1/0/19]port link-aggregation group 8

# Configure VLAN settings for port channel

[R3]int Bridge-Aggregation 8

[R3-Bridge-Aggregation8]port access vlan 1712

# Check interface configuration

[R3]int GigabitEthernet 1/0/18

[R3-GigabitEthernet1/0/18]dis this

#

interface GigabitEthernet1/0/18

 port link-mode bridge

 port access vlan 1712

 port link-aggregation group 8

[R3]int GigabitEthernet 1/0/19

[R3-GigabitEthernet1/0/19]dis this

#

interface GigabitEthernet1/0/19

 port link-mode bridge

 port access vlan 1712

 port link-aggregation group 8

 

Configure port channel on AFC device

Log in to the AFC system page

Access the login page via browser at https://192.168.0.1, username: admin, password: admin.

Figure 0-2 Log in to the AFC system page

 

Configure AFC port aggregation

Navigate to [System] [Device] [Device Management] and click the [Setup] button on the right side of the target device. In the left navigation pane, select [Link Aggregation], then click the [Add] button to bind XGE2/0 and XGE2/1 as aggregation port BAGG1, and XGE3/0 and XGE3/1 as aggregation port BAGG2.

Figure 0-3 Port Aggregation Configuration

 

Configure Business Ports on AFC Device

To achieve AFC bypass single-device multi-channel deployment in BGP Layer 3 reinjection mode, follow the steps below:

Important Note: For configuration steps that include an Apply Config button, you must click this button to activate the configuration. This will not be reiterated in subsequent steps.

Configure the AFC address and port type

Enter System-Device-Device Manage, click the Setup button on the right side of the device, select Port Settings in the left navigation bar, and click the Edit button to modify the IP, mask, and port binding information of BAGG1 and BAGG2.

The IP address of BAGG1 is 171.0.0.2 and the port type is Traffic Diversion Interface, with BAGG2 as the data port. The upstream core switch R2's port aggregation group address is 171.0.0.1, so the IPv4 next hop is set to 171.0.0.1.

Figure 0-4 BAGG1 Configuration

 

The IP address of BAGG2 is 171.0.2.2 and the port type is Traffic Re-injection Interface & Primary Link, with BAGG1 as the data port. The downstream switch R3's port aggregation group address is 171.0.2.1, so the IPv4 next hop is set to 171.0.2.1.

Figure 0-5 BAGG2 Configuration

 

Configure BGP Routing for AFC Device

After completing the address and port type configuration, click the Route Configure menu at the bottom, select BGP Config, check to enable BGP, then click Apply Config and follow the steps below for configuration.

Local BGP Configuration

Navigate to [System] [Device] [Device Management], click the Setup button in the row corresponding to device 127.0.0.1, then open [Route Configure] [BGP Config] and perform the following operations:

Check the Start BGP option.

·     Local AS: 65534 // AS number of the AFC device

·     Local Port: 179 // Default port 179

Click the Save button. For configuration details, refer to Figure 0-6.


 

Figure 0-6 Launch BGP

 

BGP Peer Configuration

·     Click the Add button to configure BGP peer information:

·     Peer AS: 65535 // The AS number already running on the core switch

·     Peer Port: 179 // Default port 179

·     LocalPref/MED: 100 // Default value 100

·     Peer IP: 171.0.0.1 // The core switch R2 aggregation group address, i.e., the IPv4 next hop configured for the inbound aggregation port

·     Click Save to complete the neighbor address addition.

·     For configuration details, refer to Figure 0-7.

 

Figure 0-7 AFC Peer BGP Configuration

 

Apply BGP Configuration

Click the Apply Config button to activate the BGP configuration.

AFC Device Route Steering and Traffic Cleaning

Log in to the AFC device, enter Steer Config-Traffic Steering Status, and click Manual Steering to perform a traffic diversion operation on a test address within the user's network. In this example, the diversion address is 171.0.3.21; select the diversion operation "Tow" and click Save to complete the operation. Refer to Figure 0-8.

Figure 0-8 Tow the user service address 171.0.3.21

 

After diverting traffic to the AFC device, it can automatically employ default policies to mitigate and defend against DDoS attacks.
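The AFC's actual defense policies are proprietary, but the general principle behind traffic cleaning can be pictured with a simple rate limiter (illustrative Python only, not AFC code): conforming packets pass, and the excess typical of a flood is dropped.

```python
# Token-bucket rate limiter: a rough model of one cleaning-policy building
# block. Tokens refill at `rate_pps`; each forwarded packet spends one token,
# so a flood above the configured rate is mostly dropped.
class TokenBucket:
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps        # tokens added per second
        self.capacity = burst       # maximum burst size
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        """Return True if a packet arriving at time `now` may pass."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_pps=100, burst=10)
# Simulate a 1000-pps flood for one second: roughly burst + rate packets pass.
passed = sum(bucket.allow(now=i / 1000) for i in range(1000))
print(passed)
```

Real cleaning combines many such checks (source validation, protocol conformance, per-destination thresholds) rather than a single global limit.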

Configuration Verification

Verify connectivity between the core switch R2 and the AFC device's cleaning service port

Test whether the core switch R2 can reach the AFC device's routing interface via ping.

[R2]ping -a 171.0.0.1 171.0.0.2

  PING 171.0.0.2: 56  data bytes, press CTRL_C to break

    Reply from 171.0.0.2: bytes=56 Sequence=1 ttl=64 time=3 ms

    Reply from 171.0.0.2: bytes=56 Sequence=2 ttl=64 time=3 ms

    Reply from 171.0.0.2: bytes=56 Sequence=3 ttl=64 time=3 ms

    Reply from 171.0.0.2: bytes=56 Sequence=4 ttl=64 time=3 ms

    Reply from 171.0.0.2: bytes=56 Sequence=5 ttl=64 time=3 ms

  --- 171.0.0.2 ping statistics ---

    5 packet(s) transmitted

    5 packet(s) received

    0.00% packet loss

round-trip min/avg/max = 3/3/3 ms

Verify whether the BGP neighbor relationship is established between the core device and the AFC device. Log in to the core device and execute the display bgp peer command to check the BGP session status.

 [Sysname] display bgp peer

 BGP local router ID : 171.0.0.1

 Local AS number : 65535

 Total number of peers : 1        Peers in established state : 1

  Peer                    AS  MsgRcvd  MsgSent OutQ PrefRcv Up/Down  State

     171.0.0.2          65534        5        3    0       0 00:01:59 Established

Verify whether traffic steering from the core switch R2 to the AFC device is successful. A successfully established steering results in a /32 route entry for the target host in the routing table.

Check the routing table of the core switch R2.

[R2]display bgp routing-table

 Total Number of Routes: 1

 BGP Local router ID is 171.0.0.1

 Status codes: * - valid, ^ - VPNv4 best, > - best, d - damped,

               h - history,  i - internal, s - suppressed, S - Stale

               Origin : i - IGP, e - EGP, ? - incomplete

     Network            NextHop         MED        LocPrf     PrefVal Path/Ogn

* >  171.0.3.21/32      171.0.0.2       0                     1       65534i
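In a larger deployment, this check can be scripted against saved command output. A hypothetical helper (the output format below is taken from the capture above; collecting the output itself is left to whatever automation you already use):

```python
# Confirm a /32 diversion route for the protected host appears in saved
# `display bgp routing-table` output, pointing at the AFC's diversion address.
import re

output = """
* >  171.0.3.21/32      171.0.0.2       0                     1       65534i
"""

def has_diversion_route(text, host, afc_ip):
    """True if a valid best /32 route for `host` via `afc_ip` is present."""
    pattern = rf"\*\s*>\s*{re.escape(host)}/32\s+{re.escape(afc_ip)}"
    return re.search(pattern, text) is not None

print(has_diversion_route(output, "171.0.3.21", "171.0.0.2"))  # True
```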

Verify the communication between the client and the diversion server

Test whether the client can reach the service route via ping

[root@AFCTest_Client ~]# ifconfig eth0

eth0      Link encap:Ethernet  HWaddr 00:0C:29:9D:1B:7A 

          inet addr:184.0.0.75  Bcast:184.0.0.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fe9d:1b7a/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:257120 errors:0 dropped:0 overruns:0 frame:0

          TX packets:47273087 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:28882056 (27.5 MiB)  TX bytes:3460908912 (3.2 GiB)

[root@AFCTest_Client ~]# ping -c 5 171.0.3.21

PING 171.0.3.21 (171.0.3.21) 56(84) bytes of data.

64 bytes from 171.0.3.21: icmp_seq=1 ttl=124 time=0.799 ms

64 bytes from 171.0.3.21: icmp_seq=2 ttl=124 time=0.736 ms

64 bytes from 171.0.3.21: icmp_seq=3 ttl=124 time=0.862 ms

64 bytes from 171.0.3.21: icmp_seq=4 ttl=124 time=1.47 ms

64 bytes from 171.0.3.21: icmp_seq=5 ttl=124 time=1.02 ms

--- 171.0.3.21 ping statistics ---

5 packets transmitted, 5 received, 0% packet loss, time 4006ms

rtt min/avg/max/mdev = 0.736/0.977/1.470/0.266 ms

 
