Configuring inbound link load balancing
About inbound link load balancing
Inbound link load balancing distributes traffic from the external network to the internal network across multiple links.
Application scenario
As shown in Figure 1, an enterprise provides services for extranet users through link 1 of ISP 1, link 2 of ISP 2, and link 3 of ISP 3. Inbound link load balancing evenly distributes traffic from extranet users to the internal server on multiple links. This feature can prevent link congestion and implement link switchover upon a link failure.
Workflow
Inbound link load balancing is implemented based on DNS resolution. The LB device acts as the authoritative DNS server to process DNS requests from extranet users and select the best link for extranet users. Figure 2 shows the inbound link load balancing workflow.
Figure 2 Inbound link load balancing workflow
Inbound link load balancing uses the following procedure:
1. The client sends a DNS request to the local DNS server.
2. The local DNS server forwards the DNS request to the device.
3. The device selects a virtual server on the optimal link by using scheduling algorithms, bandwidth limit, and health monitoring method.
4. The device sends the virtual IP/virtual server IP to the local DNS server in a DNS response.
5. The local DNS server sends the virtual IP/virtual server IP to the client.
6. The client initiates a connection request to the virtual IP/virtual server IP. (The request is forwarded to the device.)
7. The device initiates a connection request to the internal server.
8. The internal server responds to the device.
9. The device responds to the client.
Service processing on the device
The device implements inbound link load balancing through intelligently resolving DNS requests.
Figure 3 Inbound link load balancing on the LB device
Abbreviation | Full name |
---|---|
VS pool | Virtual server pool |
VIP | Virtual server IP or virtual IP |
As shown in Figure 3, the device contains the following elements:
· DNS listener—Listens to DNS requests. When the destination IP address of a DNS request matches that of the DNS listener, the DNS request is processed by inbound link load balancing.
· DNS mapping—Associates a domain name with a virtual server pool. The LB device looks up the DNS mappings for the virtual server pool associated with a domain name.
· Link—Physical link provided by an ISP.
· Virtual server pool—Associates virtual servers with links. The availability of a virtual server and its associated link determines whether the virtual server participates in scheduling.
· Virtual IP/Virtual server IP—A virtual entity that processes user services. When user traffic must be load balanced across multiple internal servers, a virtual server can be selected as the virtual carrier for user-facing services. In this case, the virtual server IP acts both as the DNS resolution result in inbound link load balancing and as the entry point for server load balancing.
If the destination IP address of a DNS request matches that of a DNS listener, the device processes the DNS request as follows:
1. The device looks up the DNS mappings for the virtual server pool associated with the domain name in the DNS request.
2. The device selects the virtual IP/virtual server IP associated with the best link according to the scheduling algorithm configured for the virtual server pool.
3. The device sends the virtual IP/virtual server IP in a DNS response to the extranet user.
The extranet user uses the virtual IP/virtual server IP as the destination IP address and accesses the internal server through the link associated with the virtual server.
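The DNS-side processing steps above can be sketched as follows. This is a minimal illustrative model, not the device's implementation; the data structures and names are assumptions, and round robin stands in for whichever scheduling algorithm is configured.

```python
# Illustrative model of inbound link LB DNS processing:
# DNS mapping lookup -> scheduling -> virtual server IP in the response.

# DNS mappings: domain name -> virtual server pool name
dns_map = {"www.example.com": "vsp"}

# Virtual server pools: pool name -> list of (virtual server IP, link is up?)
vs_pools = {"vsp": [("10.1.1.3", True), ("20.1.1.3", True)]}

_rr_state = {}  # per-pool round-robin cursor

def resolve(domain):
    """Return the virtual server IP for a DNS request, or None."""
    pool = dns_map.get(domain)              # step 1: DNS mapping lookup
    if pool is None:
        return None
    # Only virtual servers on available links participate in scheduling.
    candidates = [ip for ip, up in vs_pools[pool] if up]
    if not candidates:
        return None
    i = _rr_state.get(pool, 0)              # step 2: round-robin scheduling
    _rr_state[pool] = i + 1
    return candidates[i % len(candidates)]  # step 3: IP sent in DNS response

print(resolve("www.example.com"))  # 10.1.1.3 on the first request
```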
Restrictions and guidelines: Inbound link load balancing configuration
When you configure inbound link load balancing, you must specify the device as the authoritative DNS server to process DNS requests for the specified domain. Typically, you can perform that configuration on the domain registrar's website.
Inbound link load balancing tasks at a glance
To configure inbound link load balancing, perform the following tasks:
1. Configuring a DNS listener
2. Configuring a DNS mapping
3. Configuring an LB link
4. Configuring a virtual server pool
5. (Optional.) Configuring a DNS zone
¡ Configuring a DNS forward zone
¡ Configuring a DNS reverse zone
6. (Optional.) Configuring ISP information
7. (Optional.) Configuring a region
8. (Optional.) Configuring a topology
9. (Optional.) Enabling load balancing link busy state logging
10. (Optional.) Performing a load balancing test
11. (Optional.) Setting the DNS request parse failures to be recorded
Configuring a DNS listener
About this task
Configure a DNS listener to specify an IP address and a port number for the device to provide DNS services externally.
When a DNS listener fails to find the requested resource record, it can take one of the following actions:
· Respond to the request through a DNS proxy.
· Do not respond to the DNS request.
· Return a DNS reject packet.
Procedure
1. Enter system view.
system-view
2. Create a DNS listener and enter DNS listener view.
loadbalance dns-listener dns-listener-name
3. Specify an IP address and a port number for the DNS listener.
IPv4:
ip address ipv4-address [ port port-number ]
IPv6:
ipv6 address ipv6-address [ port port-number ]
By default, a DNS listener does not have an IP address or port number.
4. (Optional.) Specify a VPN instance for the DNS listener.
vpn-instance vpn-instance-name
By default, a DNS listener belongs to the public network.
5. Enable the DNS listening feature.
service enable
By default, the DNS listening feature is disabled.
6. (Optional.) Specify the processing method for DNS mapping search failure.
fallback { dns-proxy | no-response | reject }
By default, the processing method is reject.
Configuring a DNS mapping
About this task
By configuring a DNS mapping, you can associate a domain name with a virtual server pool.
Restrictions and guidelines
You can specify multiple domain names for a DNS mapping.
Procedure
1. Enter system view.
system-view
2. Create a DNS mapping and enter DNS mapping view.
loadbalance dns-map dns-map-name
3. Specify a domain name for the DNS mapping.
domain-name domain-name
By default, a DNS mapping does not contain domain names.
4. Specify a virtual server pool for the DNS mapping.
virtual-server-pool pool-name
By default, no virtual server pool is specified for a DNS mapping.
5. (Optional.) Set the TTL for DNS records.
ttl ttl-value
By default, the TTL for DNS records is 3600 seconds.
6. Enable the DNS mapping feature.
service enable
By default, the DNS mapping feature is disabled.
Configuring an LB link
About this task
Link availability is one of the factors that determines whether a virtual IP/virtual server can participate in scheduling. Link availability depends on health monitoring, maximum expected bandwidth, and bandwidth ratio.
The outbound next hop is the IP address of the peer device on the link. You can perform health monitoring and bandwidth limiting on a link specified by the outbound next hop.
You can enable health monitoring to detect link quality and status and to ensure link availability. The health monitoring configuration uses NQA templates. For more information about NQA template configuration, see NQA configuration in Network Management and Monitoring Configuration Guide.
When the traffic exceeds the maximum expected bandwidth multiplied by the bandwidth ratio of a link, new traffic is not distributed to the link. When the traffic drops below the maximum expected bandwidth multiplied by the bandwidth recovery ratio of the link, the link participates in scheduling again.
In addition to being used for link protection, the maximum expected bandwidth is used for remaining bandwidth calculation in the bandwidth algorithm and maximum bandwidth algorithm.
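The busy and recovery thresholds described above form a hysteresis rule, which can be sketched as follows. The values are illustrative; the function names and return shape are assumptions, not device behavior.

```python
# Sketch of link protection hysteresis: a link leaves scheduling when traffic
# exceeds max bandwidth * busy rate, and rejoins when traffic drops below
# max bandwidth * recovery rate.

MAX_BANDWIDTH = 100_000   # maximum expected bandwidth, in kbps
BUSY_RATE = 70            # bandwidth ratio, percent
RECOVERY_RATE = 60        # bandwidth recovery ratio, percent

def link_schedulable(current_kbps, was_busy):
    """Return (schedulable, busy) after applying the hysteresis rule."""
    busy_threshold = MAX_BANDWIDTH * BUSY_RATE / 100          # 70,000 kbps
    recovery_threshold = MAX_BANDWIDTH * RECOVERY_RATE / 100  # 60,000 kbps
    if not was_busy and current_kbps > busy_threshold:
        return False, True    # stop distributing new traffic to this link
    if was_busy and current_kbps < recovery_threshold:
        return True, False    # link participates in scheduling again
    return (not was_busy), was_busy

print(link_schedulable(80_000, False))  # (False, True): link enters busy state
print(link_schedulable(65_000, True))   # (False, True): still above recovery
print(link_schedulable(50_000, True))   # (True, False): link recovers
```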
Procedure
1. Enter system view.
system-view
2. Create an LB link and enter LB link view.
loadbalance link link-name
3. Specify the outbound next hop for the LB link.
router ip ipv4-address
By default, the outbound next hop is not specified for the LB link.
4. (Optional.) Configure health monitoring settings for the LB link.
a. Specify a health monitoring method for the LB link.
probe template-name
By default, no health monitoring method is specified for an LB link.
b. Specify the health monitoring success criteria for the LB link.
success-criteria { all | at-least min-number }
By default, the health monitoring succeeds only when all the specified health monitoring methods succeed.
5. (Optional.) Set the bandwidth ratio.
bandwidth [ inbound | outbound ] busy-rate busy-rate-number [ recovery recovery-rate-number ]
By default, the total bandwidth ratio is 70.
6. (Optional.) Set the maximum expected bandwidth.
max-bandwidth [ inbound | outbound ] bandwidth-value kbps
By default, the maximum expected bandwidth, maximum inbound expected bandwidth, and maximum outbound expected bandwidth are 0. The bandwidths are not limited.
Configuring a virtual server pool
About this task
Perform this task to facilitate management of virtual IPs or virtual servers with similar functions.
You can specify one preferred scheduling algorithm, one alternative scheduling algorithm, and one backup scheduling algorithm for a virtual server pool. If no virtual server can be selected by using the preferred scheduling algorithm, the alternative scheduling algorithm is used. If no virtual server can be selected by using the alternative scheduling algorithm, the backup scheduling algorithm is used.
The link protection feature enables a virtual server pool to select a virtual IP or virtual server based on the configured scheduling method. If the bandwidth ratio of a link associated with the selected virtual IP or virtual server is exceeded, the virtual IP or virtual server is not selected. For more information about configuring the bandwidth ratio, see "Configuring an LB link."
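The preferred/alternative/backup cascade described above can be sketched as a chain of predicates. This is a hypothetical model; the predictor functions are placeholders for the device's scheduling algorithms.

```python
# Sketch of the scheduling-algorithm cascade: try the preferred algorithm,
# then the alternative, then the backup, until one selects a server.

def cascade(predictors, candidates):
    """Try each scheduling algorithm in order until one returns a server."""
    for predictor in predictors:
        choice = predictor(candidates)
        if choice is not None:
            return choice
    return None

def topology_predictor(candidates):
    return None  # simulate: no topology record matched the request

def round_robin_predictor(candidates):
    return candidates[0] if candidates else None

servers = ["vs1", "vs2"]
# Preferred (topology) fails, so the alternative (round robin) is used.
print(cascade([topology_predictor, round_robin_predictor], servers))  # vs1
```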
Procedure
1. Enter system view.
system-view
2. Create a virtual server pool and enter virtual server pool view.
loadbalance virtual-server-pool name
3. Add a virtual IP address or virtual server.
¡ Add a virtual IP address.
IPv4:
virtual-ip ipv4-address link link-name [ weight weight-value ]
IPv6:
virtual-ipv6 ipv6-address link link-name [ weight weight-value ]
By default, no virtual IP addresses are added to a virtual server pool.
¡ Add a virtual server.
virtual-server virtual-server-name link link-name [ weight weight-value ]
By default, no virtual servers are added to a virtual server pool.
4. Specify a scheduling algorithm for the virtual server pool.
predictor { alternate | fallback | preferred } { least-connection | proximity | random | round-robin | topology | { bandwidth | max-bandwidth } [ inbound | outbound ] | hash address { source | source-ip-port | destination } [ mask mask-length | prefix prefix-length ] }
By default, the preferred scheduling algorithm for the virtual server pool is round robin. No alternative or backup scheduling algorithm is specified.
5. (Optional.) Enable the link protection feature.
bandwidth busy-protection enable
By default, the link protection feature is disabled.
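The hash address predictor in step 4 selects a server from the masked source address, so all clients in the same subnet reach the same virtual server. The following IPv4-only sketch illustrates the idea; the hashing scheme is an assumption, not the device's algorithm.

```python
# Sketch of the "hash address source" predictor with a mask length:
# clients in the same masked subnet map to the same virtual server.
import ipaddress

def hash_source(client_ip, mask_length, servers):
    """Pick a virtual server by hashing the masked IPv4 source address."""
    net = ipaddress.ip_network(f"{client_ip}/{mask_length}", strict=False)
    key = int(net.network_address) >> (32 - mask_length)  # drop host bits
    return servers[key % len(servers)]

# Any client in 10.1.1.0/24 hashes to the same server.
print(hash_source("10.1.1.5", 24, ["vs1", "vs2"]))
```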
Configuring a DNS forward zone
About this task
During DNS resolution, an LB device looks up the resource records configured in a DNS forward zone for the host name corresponding to the target domain name. DNS resource records are used by an LB device to resolve DNS requests and have the following types:
· Start of authority (SOA)—The SOA resource record contains basic information about a DNS zone. This record defines the starting point of the zone and acts as the authoritative declaration for the zone's data.
· Canonical name (CNAME)—Maps multiple aliases to one host name (server). For example, an enterprise intranet has a server with host name host.example.com. The server provides both Web service and mail service. You can configure two aliases (www.example.com and mail.example.com) in a CNAME resource record for this server. When a user requests Web service, the user accesses www.example.com. When a user requests mail service, the user accesses mail.example.com. Actually, the user accesses host.example.com in both cases.
· Mail exchanger (MX)—Specifies the mail server for a DNS forward zone.
· Name server (NS)—Specifies the authoritative DNS server for a DNS forward zone.
· Service (SRV)—Specifies the services for a DNS forward zone and the servers that provide these services.
· Text (TXT)—Specifies a description for a DNS forward zone.
As shown in the following figure, the LB device is configured with a DNS forward zone. After receiving a DNS request, the LB device first looks up the resource records in the DNS forward zone for the host name corresponding to the target domain name. Then the LB device looks up the DNS mappings for the virtual IP/virtual server IP address associated with the host name.
Figure 4 DNS forward zone workflow on the LB device
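The CNAME lookup step above can be modeled as following alias records until a canonical name is reached. The record data below reuses the example names from this section; the lookup function itself is an illustrative sketch.

```python
# Illustrative model of CNAME resolution: the alias in the DNS request is
# followed to the canonical host name before the DNS mappings are searched.

cname_records = {
    "www.example.com": "host.example.com",
    "mail.example.com": "host.example.com",
}

def canonical_name(name, max_depth=8):
    """Follow CNAME records until a canonical (non-alias) name is reached."""
    while name in cname_records and max_depth > 0:
        name = cname_records[name]
        max_depth -= 1  # guard against CNAME loops
    return name

print(canonical_name("www.example.com"))   # host.example.com
print(canonical_name("host.example.com"))  # host.example.com (already canonical)
```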
Procedure
1. Create a DNS forward zone and enter DNS forward zone view.
loadbalance zone domain-name
2. Configure an SOA resource record.
a. Create an SOA resource record and enter SOA view.
soa
b. Configure the host name for the primary DNS server.
primary-nameserver host-name
By default, no host name is configured for the primary DNS server.
c. Specify the email address of the administrator.
responsible-mail mail-address
By default, the email address of the administrator is not specified.
d. Configure the serial number for the DNS forward zone.
serial number
By default, the serial number for a DNS forward zone is 1.
e. Set the refresh interval.
refresh refresh-interval
By default, the refresh interval is 3600 seconds.
f. Set the retry interval.
retry retry-interval
By default, the retry interval is 600 seconds.
g. Set the expiration time.
expire expire-time
By default, the expiration time is 86400 seconds.
3. Configure a resource record of the specified type.
record { cname alias alias-name canonical canonical-name | mx [ host hostname ] exchanger exchanger-name preference preference | ns [ sub subname ] authority ns-name | srv [ service service-name ] host-offering-service hostname priority priority weight weight port port-number | txt [ sub subname ] describe-txt description } [ ttl ttl-value ]
The device supports configuring the CNAME, MX, NS, SRV, and TXT resource records.
4. Set the global TTL for resource records.
ttl ttl-value
By default, the global TTL for resource records is 3600 seconds.
Configuring a DNS reverse zone
About this task
The LB device performs reverse DNS resolution according to the DNS reverse zone configuration. Reverse DNS resolution searches for a domain name according to an IP address. The pointer record (PTR) resource records configured in a DNS reverse zone record mappings between domain names and IP addresses.
Reverse DNS resolution is used to address spam attacks by verifying the validity of the email sender. When a mail server receives an email from an external user, it sends a reverse DNS resolution request to the LB device. The LB device resolves the source IP address of the sender into a domain name according to PTR resource records and sends the domain name to the mail server. The mail server compares the received domain name with the actual domain name of the sender. If the two domain names match, the mail server accepts the email. If not, the mail server considers the email as a spam email and discards it.
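The mail server's verification logic described above can be sketched as follows. The PTR data and function names are hypothetical; a real mail server would query the LB device for the PTR record rather than a local table.

```python
# Sketch of the anti-spam check: accept an email only if the sender IP's
# reverse (PTR) resolution matches the domain the sender claims.

ptr_records = {"198.51.100.7": "mail.example.com"}  # PTR: IP -> domain name

def accept_mail(sender_ip, claimed_domain):
    """Accept the email only if reverse resolution matches the claimed domain."""
    resolved = ptr_records.get(sender_ip)
    return resolved is not None and resolved == claimed_domain

print(accept_mail("198.51.100.7", "mail.example.com"))  # True: accepted
print(accept_mail("198.51.100.7", "spam.example.net"))  # False: discarded
```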
Procedure
1. Enter system view.
system-view
2. Create a DNS reverse zone and enter DNS reverse zone view.
loadbalance reverse-zone { ip ipv4-address mask-length | ipv6 ipv6-address prefix-length }
3. Configure a PTR resource record.
record ptr { ip ipv4-address | ipv6 ipv6-address } domain-name [ ttl ttl-value ]
By default, a DNS reverse zone does not contain PTR resource records.
Configuring ISP information
About configuring ISP information
Perform this task to configure IP address information for an ISP. The IP address information can be used by an ISP match rule. When the destination IP address of packets matches the ISP match rule of an LB class, the LB device takes the action associated with the class. The device supports the following methods to configure IP address information:
· Manual configuration—The administrator manually specifies IP address information.
· ISP auto update—With ISP auto update enabled, the device regularly queries IP address information from the whois server according to the whois maintainer object of the ISP.
· ISP file import—The administrator manually imports an ISP file in .tp format. The ISP file can be obtained from the official website.
Restrictions and guidelines
You can configure ISP information manually, by importing an ISP file, by auto update, or by a combination of these methods.
Configuring ISP information manually
1. Enter system view.
system-view
2. Create an ISP and enter ISP view.
loadbalance isp name isp-name
3. Specify the IP address for the ISP.
IPv4:
ip address ipv4-address { mask-length | mask }
IPv6:
ipv6 address ipv6-address prefix-length
By default, an ISP does not contain IPv4 or IPv6 addresses.
An ISP does not allow overlapping network segments.
4. (Optional.) Configure a description for the ISP.
description text
By default, no description is configured for the ISP.
Configuring ISP auto update
Prerequisites
Before you configure this feature, you must complete the WHOIS server settings.
Procedure
1. Enter system view.
system-view
2. Create an ISP and enter ISP view.
loadbalance isp name isp-name
3. Specify a whois maintainer object for the ISP.
whois-mntner mntner-name
By default, no whois maintainer object is specified.
You can specify a maximum of 10 whois maintainer objects for an ISP.
4. Return to system view.
quit
5. Enable ISP auto update.
loadbalance isp auto-update enable
By default, ISP auto update is disabled.
6. Configure the ISP auto update frequency.
loadbalance isp auto-update frequency { per-day | per-week | per-month }
By default, the ISP auto update is performed once per week.
7. Specify the whois server to be queried for ISP auto update.
loadbalance isp auto-update whois-server { domain domain-name | ip ip-address }
By default, no whois server is specified for ISP auto update.
Importing an ISP file
Prerequisites
Before you configure this feature, you must prepare the ISP file.
Restrictions and guidelines
If the imported file does not exist, has an invalid file name, or fails decryption, the system retains the existing imported content without modification.
If the import operation is aborted because an IP address in the imported file fails parsing, the system clears all previously imported data and retains only the content successfully imported during the current operation.
Procedure
1. Enter system view.
system-view
2. Import an ISP file.
loadbalance isp file isp-file-name
Configuring a region
About this task
A region contains network segments corresponding to different ISPs.
Procedure
1. Enter system view.
system-view
2. Create a region and enter region view.
loadbalance region region-name
3. Add an ISP to the region.
isp isp-name
By default, a region does not contain any ISPs.
Configuring a topology
About this task
A topology associates the region where the local DNS server resides with the IP address of a virtual server.
When the static proximity algorithm (topology) is specified for the virtual server pool, you must configure a topology. For more information about specifying a scheduling algorithm for a virtual server pool, see "Configuring a virtual server pool."
When a DNS request matches multiple topology records, the topology record with the highest priority is selected.
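The record selection rule above can be sketched as follows. The record layout and the assumption that a larger priority value wins are illustrative, not device specifics.

```python
# Sketch of topology record matching: among records whose network contains
# the local DNS server address, the highest-priority record is selected.
import ipaddress

# Hypothetical (region, network, priority) topology records.
topologies = [
    ("isp1-region", ipaddress.ip_network("10.0.0.0/8"), 100),
    ("isp1-region", ipaddress.ip_network("10.1.0.0/16"), 200),
]

def match_topology(dns_server_ip):
    """Return the matching topology record with the highest priority."""
    addr = ipaddress.ip_address(dns_server_ip)
    matches = [t for t in topologies if addr in t[1]]
    return max(matches, key=lambda t: t[2]) if matches else None

print(match_topology("10.1.2.3"))  # the /16 record (priority 200) wins
```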
Procedure
1. Enter system view.
system-view
2. Configure a topology.
topology region region-name { ip ipv4-address { mask-length | mask } | ipv6 ipv6-address prefix-length } [ priority priority ]
Enabling load balancing link busy state logging
About this task
Perform this task to record busy states for all links.
Procedure
1. Enter system view.
system-view
2. Enable load balancing link busy state logging.
loadbalance log enable bandwidth-busy
By default, load balancing link busy state logging is disabled.
Performing a load balancing test
About performing a load balancing test
Perform this task in any view to test the load balancing result.
Performing an IPv4 load balancing test
To perform an IPv4 load balancing test, execute the following command in any view:
In standalone mode:
loadbalance local-dns-server schedule-test ip [ vpn-instance vpn-instance-name ] destination destination-address [ destination-port destination-port ] source source-address source-port source-port type { { a | aaaa | cname | mx | ns | soa | srv | txt } domain domain-name | ptr ip address { ipv4-address | ipv6-address } } [ slot slot-number ]
In IRF mode:
loadbalance local-dns-server schedule-test ip [ vpn-instance vpn-instance-name ] destination destination-address [ destination-port destination-port ] source source-address source-port source-port type { { a | aaaa | cname | mx | ns | soa | srv | txt } domain domain-name | ptr ip address { ipv4-address | ipv6-address } } [ chassis chassis-number slot slot-number ]
Performing an IPv6 load balancing test
To perform an IPv6 load balancing test, execute the following command in any view:
In standalone mode:
loadbalance local-dns-server schedule-test ipv6 [ vpn-instance vpn-instance-name ] destination destination-address [ destination-port destination-port ] source source-address source-port source-port type { { a | aaaa | cname | mx | ns | soa | srv | txt } domain domain-name | ptr ip address ipv4-address } [ slot slot-number ]
In IRF mode:
loadbalance local-dns-server schedule-test ipv6 [ vpn-instance vpn-instance-name ] destination destination-address [ destination-port destination-port ] source source-address source-port source-port type { { a | aaaa | cname | mx | ns | soa | srv | txt } domain domain-name | ptr ip address { ipv4-address | ipv6-address } } [ chassis chassis-number slot slot-number ]
Setting the DNS request parse failures to be recorded
1. Enter system view.
system-view
2. Configure the types of DNS request parse failures that can be recorded.
loadbalance local-dns-server parse-fail-record type { a | aaaa | all-disable | all-enable | cname | mx | ns | ptr | soa | srv | txt }
By default, all types of DNS request parse failures are recorded.
3. Set the maximum number of DNS request parse failures that can be recorded.
loadbalance local-dns-server parse-fail-record max-number max-number
The default setting is 10000.
Displaying and maintaining inbound link load balancing
Execute display commands in any view and reset commands in user view.
· Display DNS listener information.
display loadbalance dns-listener [ name listener-name ]
· Display DNS listener statistics.
In standalone mode:
display loadbalance dns-listener statistics [ name dns-listener-name ] [ slot slot-number ]
In IRF mode:
display loadbalance dns-listener statistics [ name dns-listener-name ] [ chassis chassis-number slot slot-number ]
· Display DNS mapping information.
display loadbalance dns-map [ name dns-map-name ]
· Display DNS mapping statistics.
In standalone mode:
display loadbalance dns-map statistics [ name dns-map-name ] [ slot slot-number ]
In IRF mode:
display loadbalance dns-map statistics [ name dns-map-name ] [ chassis chassis-number slot slot-number ]
· Display virtual server pool information.
display loadbalance virtual-server-pool [ brief | name pool-name ]
· Display LB link information.
display loadbalance link [ brief | name link-name ]
· Display DNS forward zone information.
display loadbalance zone [ name domain-name ]
· Display DNS reverse zone information.
display loadbalance reverse-zone { ip [ ipv4-address mask-length ] | ipv6 [ ipv6-address prefix-length ] }
· Display ISP information.
display loadbalance isp [ ip ipv4-address | ipv6 ipv6-address | name isp-name ]
· Display DNS request parse failures.
In standalone mode:
display loadbalance local-dns-server parse-fail-record [ type { { a | aaaa | cname | mx | ns | soa | srv | txt } [ domain domain-name ] | ptr [ ip address { ipv4-address | ipv6-address } ] } ] [ vpn-instance vpn-instance-name ] [ slot slot-number ]
In IRF mode:
display loadbalance local-dns-server parse-fail-record [ type { { a | aaaa | cname | mx | ns | soa | srv | txt } [ domain domain-name ] | ptr [ ip address { ipv4-address | ipv6-address } ] } ] [ vpn-instance vpn-instance-name ] [ chassis chassis-number slot slot-number ]
· Clear DNS listener statistics.
reset loadbalance dns-listener statistics [ dns-listener-name ]
· Clear DNS mapping statistics.
reset loadbalance dns-map statistics [ dns-map-name ]
· Clear DNS request parse failures.
reset loadbalance local-dns-server parse-fail-record
Inbound link load balancing configuration examples
Example: Configuring inbound link load balancing
Network configuration
As shown in Figure 5, ISP 1 and ISP 2 provide two links, Link 1 and Link 2, with the same hop count, bandwidth, and cost. The internal server uses domain name l.example.com to provide services. The actual host name of the internal server is www.example.com.
Configure inbound link load balancing for the device to select an available link for traffic from the client host to the internal server when a link fails.
Procedure
1. Assign IP addresses to interfaces:
# Assign an IP address to interface GigabitEthernet 1/0/1.
<Device> system-view
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] ip address 10.1.1.1 255.255.255.0
[Device-GigabitEthernet1/0/1] quit
# Assign IP addresses to other interfaces in the same way. (Details not shown.)
2. Add interfaces to security zones.
[Device] security-zone name untrust
[Device-security-zone-Untrust] import interface gigabitethernet 1/0/1
[Device-security-zone-Untrust] import interface gigabitethernet 1/0/2
[Device-security-zone-Untrust] quit
[Device] security-zone name trust
[Device-security-zone-Trust] import interface gigabitethernet 1/0/3
[Device-security-zone-Trust] quit
3. Configure a security policy:
Configure rules to permit traffic from the Untrust security zone to the Trust security zone and traffic between the Untrust and Local security zones, so the users can access the server:
# Configure a rule named lbrule1 to allow the users to access the server.
[Device] security-policy ip
[Device-security-policy-ip] rule name lbrule1
[Device-security-policy-ip-1-lbrule1] source-zone untrust
[Device-security-policy-ip-1-lbrule1] destination-zone trust
[Device-security-policy-ip-1-lbrule1] destination-ip-subnet 192.168.1.0 255.255.255.0
[Device-security-policy-ip-1-lbrule1] action pass
[Device-security-policy-ip-1-lbrule1] quit
# Configure a rule named lblocalin to allow the users to access the DNS listener.
[Device-security-policy-ip] rule name lblocalin
[Device-security-policy-ip-2-lblocalin] source-zone untrust
[Device-security-policy-ip-2-lblocalin] destination-zone local
[Device-security-policy-ip-2-lblocalin] destination-ip-subnet 10.1.1.1 255.255.255.255
[Device-security-policy-ip-2-lblocalin] destination-ip-subnet 20.1.1.1 255.255.255.255
[Device-security-policy-ip-2-lblocalin] action pass
[Device-security-policy-ip-2-lblocalin] quit
# Configure a rule named lblocalout to allow the device to send probe packets to the next hop.
[Device-security-policy-ip] rule name lblocalout
[Device-security-policy-ip-3-lblocalout] source-zone local
[Device-security-policy-ip-3-lblocalout] destination-zone untrust
[Device-security-policy-ip-3-lblocalout] destination-ip-subnet 10.1.1.0 255.255.255.0
[Device-security-policy-ip-3-lblocalout] destination-ip-subnet 20.1.1.0 255.255.255.0
[Device-security-policy-ip-3-lblocalout] action pass
[Device-security-policy-ip-3-lblocalout] quit
[Device-security-policy-ip] quit
4. Configure LB links:
# Create the ICMP-type NQA template t1.
[Device] nqa template icmp t1
[Device-nqatplt-icmp-t1] quit
# Create the LB link link1, and specify the outbound next hop as 10.1.1.2 and health monitoring method as t1 for the LB link.
[Device] loadbalance link link1
[Device-lb-link-link1] router ip 10.1.1.2
[Device-lb-link-link1] probe t1
[Device-lb-link-link1] quit
# Create the LB link link2, and specify the outbound next hop as 20.1.1.2 and health monitoring method as t1 for the LB link.
[Device] loadbalance link link2
[Device-lb-link-link2] router ip 20.1.1.2
[Device-lb-link-link2] probe t1
[Device-lb-link-link2] quit
5. Create the server farm sf.
[Device] server-farm sf
[Device-sfarm-sf] quit
6. Create the real server rs with the IPv4 address 192.168.1.10, and add it to the server farm sf.
[Device] real-server rs
[Device-rserver-rs] ip address 192.168.1.10
[Device-rserver-rs] server-farm sf
[Device-rserver-rs] quit
7. Configure virtual servers:
# Create the HTTP virtual server vs1 with the VSIP 10.1.1.3 and port number 80, specify its default master server farm sf, and enable the virtual server.
[Device] virtual-server vs1 type http
[Device-vs-http-vs1] virtual ip address 10.1.1.3
[Device-vs-http-vs1] port 80
[Device-vs-http-vs1] default server-farm sf
[Device-vs-http-vs1] service enable
[Device-vs-http-vs1] quit
# Create the HTTP virtual server vs2 with the VSIP 20.1.1.3 and port number 80, specify its default master server farm sf, and enable the virtual server.
[Device] virtual-server vs2 type http
[Device-vs-http-vs2] virtual ip address 20.1.1.3
[Device-vs-http-vs2] port 80
[Device-vs-http-vs2] default server-farm sf
[Device-vs-http-vs2] service enable
[Device-vs-http-vs2] quit
8. Create the virtual server pool vsp, and add the virtual servers vs1 and vs2 associated with the LB links link1 and link2 to the virtual server pool.
[Device] loadbalance virtual-server-pool vsp
[Device-lb-vspool-vsp] virtual-server vs1 link link1
[Device-lb-vspool-vsp] virtual-server vs2 link link2
9. Configure DNS listeners:
# Create the DNS listener dl1 with the IP address 10.1.1.1, and enable the DNS listener feature.
[Device] loadbalance dns-listener dl1
[Device-lb-dl-dl1] ip address 10.1.1.1
[Device-lb-dl-dl1] service enable
[Device-lb-dl-dl1] quit
# Create the DNS listener dl2 with the IP address 20.1.1.1, and enable the DNS listener feature.
[Device] loadbalance dns-listener dl2
[Device-lb-dl-dl2] ip address 20.1.1.1
[Device-lb-dl-dl2] service enable
[Device-lb-dl-dl2] quit
10. Create the DNS mapping dm, specify the domain name www.example.com and virtual server pool vsp for the DNS mapping, and enable the DNS mapping feature.
[Device] loadbalance dns-map dm
[Device-lb-dm-dm] domain-name www.example.com
[Device-lb-dm-dm] service enable
[Device-lb-dm-dm] virtual-server-pool vsp
[Device-lb-dm-dm] quit
11. Configure a DNS forward zone:
# Create a DNS forward zone with domain name example.com.
[Device] loadbalance zone example.com
# Configure a CNAME resource record by specifying alias l.example.com for host name www.example.com.
[Device-lb-zone-example.com] record cname alias l.example.com. canonical www.example.com. ttl 600
[Device-lb-zone-example.com] quit
Verifying the configuration
# Display information about all DNS listeners.
[Device] display loadbalance dns-listener
DNS listener name: dl1
Service state: Enabled
IPv4 address: 10.1.1.1
Port: 53
IPv6 address: --
IPv6 Port: 53
Fallback: Reject
VPN instance:
# Display information about all DNS mappings.
[Device] display loadbalance dns-map
DNS mapping name: dm
Service state: Enabled
TTL: 3600
Domain name list: www.example.com
Virtual server pool: vsp
# Display information about all DNS forward zones.
[Device] display loadbalance zone
Zone name: example.com
TTL: 3600s
SOA:
Record list:
Type TTL RDATA
CNAME 600s l.example.com. www.example.com.
# Display brief information about all virtual server pools.
[Device] display loadbalance virtual-server-pool brief
Predictor: RR - Round robin, RD - Random, LC - Least connection,
TOP - Topology, PRO - Proximity
BW - Bandwidth, MBW - Max bandwidth,
IBW - Inbound bandwidth, OBW - Outbound bandwidth,
MIBW - Max inbound bandwidth, MOBW - Max outbound bandwidth,
HASH(SIP) - Hash address source IP,
HASH(DIP) - Hash address destination IP,
HASH(SIP-PORT) - Hash address source IP-port
VSpool Pre Alt Fbk BWP Total Active
vsp RR -- -- Disabled 2 2
# Display detailed information about all virtual server pools.
[Device] display loadbalance virtual-server-pool
Virtual-server pool: vsp
Predictor:
Preferred RR
Alternate --
Fallback --
Bandwidth busy-protection: Disabled
Total virtual servers: 2
Active virtual servers: 2
Virtual server list:
Name State Address Port Weight Link
vs1 Active 10.1.1.3 80 100 link1
vs2 Active 20.1.1.3 80 100 link2
# Display brief information about all real servers.
[Device] display real-server brief
Real server Address Port State VPN instance Server farm
rs 192.168.1.10 0 Active sf
# Display brief information about all LB links.
[Device] display loadbalance link brief
link Router IP State VPN instance Link group
link1 10.1.1.2 Active
link2 20.1.1.2 Active
# Display detailed information about all server farms.
[Device] display server-farm
Server farm: sf
Description:
Predictor: Round robin
Proximity: Enabled
NAT: Enabled
SNAT pool:
Failed action: Keep
Active threshold: Disabled
Slow-online: Disabled
Probe information:
Probe success criteria: All
Probe method:
t1
Selected server: Disabled
Probe information:
Probe success criteria: All
Probe method:
t1
Total real server: 1
Active real server: 1
Real server list:
Name State VPN instance Address Port Weight Priority
rs Active 192.168.1.10 0 100 4
# Display brief information about all virtual servers.
[Device] display virtual-server brief
Virtual server State Type VPN instance Virtual address Port
vs1 Active HTTP 10.1.1.3 80
vs2 Active HTTP 20.1.1.3 80
After you complete the previous configuration, domain name l.example.com can be resolved into 10.1.1.3 or 20.1.1.3. The client host can access the internal server through Link 1 or Link 2.