H3C S5500-SI Series Ethernet Switches Operation Manual-Release 1205-(V1.03)

22-Cluster Operation

Chapter 1  Cluster Management Configuration

1.1  Cluster Management Overview

1.1.1  Introduction to HGMP V2

By employing Huawei group management protocol version 2 (HGMP V2), a network administrator can manage multiple switches using the IP address of a switch operating as the management device. The switches under the management of the management device are member devices. Normally, a cluster member device is not assigned a public IP address, and the network administrator manages and maintains member devices through the management device. The management device, along with the member devices, forms a cluster. Figure 1-1 shows a typical cluster implementation.

Figure 1-1 A typical cluster implementation

A cluster has one (and only one) management device. Note the following when creating a cluster:

l           You need to designate the management device first. The management device of a cluster is the portal of the cluster. That is, any operations performed in external networks and intended for the member devices of a cluster, such as accessing, configuring, managing, and monitoring, are carried out through the management device only.

l           The management device of a cluster recognizes and controls all the member devices in the cluster, regardless of their locations in the network and how they are connected.

l           The management device of a cluster collects and maintains the topology information about all the member devices and candidate devices for you to manage the related devices.

l           A management device manages and monitors the devices in the cluster according to the network topology information obtained from the neighbor devices.

The HGMP V2 protocol suite implements device management. As HGMP V2 is carried in data link layer packets, there is no need to assign IP addresses to the virtual interfaces of all the devices in a network.

HGMP V2 offers the following advantages:

l           The procedures for configuring multiple switches are remarkably simplified. Once the management device is assigned a public IP address, you can configure and manage a specific member device from the management device instead of logging in to it first.

l           Topology discovery and display functions are provided, which assist in network monitoring and debugging.

l           Software upgrading and parameter configuration can be performed on multiple switches simultaneously.

l           The cluster is free of topology and physical distance limitations.

l           IP address resources are saved.

1.1.2  Switch Roles in a Cluster

According to their functions and status in a cluster, switches in the cluster play different roles. You can specify the role a switch plays. A switch also changes its role according to specific rules.

The following three switch roles exist in a cluster: management device, member device, and candidate device.

Table 1-1 Switch roles in a cluster

Role

Configuration

Description

Management device

l      Configured with a public IP address.

l      Receive and process management commands that a user sends through the public network

l      Provide management interfaces for all switches in the cluster

l      Manage member devices through network address translation (NAT)

l      Provide these functions: neighbor discovery, topology information collection, cluster management, and cluster state maintenance.

l      Support proxies

Member device

Normally, a member device is not configured with a public IP address

l      Member in the cluster

l      Neighbor discovery, being managed by the management device, running commands forwarded by proxies, and failure/log reporting

Candidate device

Normally, a candidate device is not configured with a public IP address

A candidate device is a switch that does not belong to any cluster, although it can be added to a cluster

 

1.1.3  Switch Role Changes in a Cluster

Figure 1-2 Role switching rules

l           A candidate device becomes a management device after you designate it as the management device of a cluster (you can do this by building a cluster on the device). Each cluster must have one and only one management device. After you specify the management device of a cluster, the management device discovers and determines candidate devices (by collecting NDP/NTDP information), which you can then add into the cluster through manual configuration.

l           A candidate device becomes a member device after being added to a cluster.

l           A member device becomes a candidate device after being removed from the cluster.

l           The management device becomes a candidate device only after you remove the cluster.

1.1.4  Cluster Principle and Implementation

I. Procedure of building a cluster

l           Network neighbor discovery: It uses NDP to discover the information about the directly connected neighbor devices.

l           Network topology discovery. It uses NTDP to collect the information about the network topology, including device connections and candidate device information in the network. The hop range for topology discovery can be adjusted manually.

l           Member recognition: The management device recognizes each member in the cluster by locating each member and then distributes configuration and management commands to the members.

l           Member management: The following events are managed through the management device: adding/removing a member, the member’s authentication on the management device, and handshake interval.

II. Introduction to NDP

NDP is the protocol for discovering the information about the adjacent nodes. NDP operates on the data link layer, so it supports different network layer protocols.

NDP is used to discover the information about directly connected neighbors, including the device type, software/hardware version, and connecting port of the adjacent devices. It can also provide the information concerning device ID, port simplex/duplex status, product version, Bootrom version and so on.

An NDP-enabled device maintains an NDP information table. Each entry in an NDP table ages with time. You can also clear the current NDP information manually to have adjacent information collected again.

An NDP-enabled device broadcasts NDP packets regularly to all ports in up state. An NDP packet carries the holdtime field, which indicates the period for the receiving devices to keep the NDP data. Receiving devices only store the information carried in the received NDP packets rather than forward them. The corresponding data entry in the NDP table is updated when the received information is different from the existing one. Otherwise, only the holdtime of the corresponding entry is updated.

III. Introduction to NTDP

NTDP is a protocol for network topology information collection. NTDP provides the information about the devices that can be added to clusters and collects the topology information within the specified hops for cluster management.

Based on the NDP information table created by NDP, NTDP transmits and forwards NTDP topology collection request to collect the NDP information and neighboring connection information of each device in a specific network range for the management device or the network administrator to implement needed functions.

Upon detecting a change occurred on a neighbor, a member device informs the management device of the change through handshake packets. The management device then collects the specified topology information through NTDP. Such a mechanism enables topology changes to be tracked in time.

IV. Handshake packets

Handshake packets are mainly used to maintain the states of the members in a cluster.

l           After a cluster is built, a member device initiates the handshake process by sending handshake packets to the management device. The management device also sends handshake packets to the member device. The management device and member devices do not respond to the handshake packets they receive but remain in the Active state.

l           If the management device receives no handshake packet from a member device for three consecutive handshake intervals, it changes the state of the member device to Connect. Likewise, if a member device receives no handshake packet from the management device for three consecutive handshake intervals, its state changes from Active to Connect. You can use the timer command to set the handshake interval for a cluster, which is 10 seconds by default.

l           If a member device in the Connect state receives, within the holdtime, no handshake or management packet that can switch its state back to Active, it changes to the Disconnect state, and the management device considers the member disconnected. (The holdtime can be set by using the holdtime command in cluster view of the management device and is 60 seconds by default.) A member device in the Active or Connect state is considered connected.

In addition, handshake packets are used to notify the management device of topology changes of neighboring devices.

1.2  HGMP V2 Configuration Task Overview

Table 1-2 HGMP V2 configuration task overview

Operation

Description

Related section

Configure the management device

Enable NDP globally and for specific ports

Required

See section 1.3.1  "Enabling NDP Globally and for Specific Ports".

Configure NDP-related parameters

Optional

See section 1.3.2  "Configuring NDP-Related Parameters".

Enable NTDP globally and for specific ports

Required

See section 1.3.3  "Enabling NTDP Globally and for Specific Ports".

Configure NTDP-related parameters

Optional

See section 1.3.4  "Configuring NTDP-Related Parameters".

Enable the cluster function

Required

See section 1.3.5  "Enabling the Cluster Function".

Build a cluster

Required

See section 1.3.6  "Building a Cluster".

Configure cluster member management

Required

See section 1.3.7  "Configuring Cluster Member Management".

Configure cluster topology management

Required

See section 1.3.8  "Configuring Cluster Topology Management".

Configure cluster parameters

Optional

See section 1.3.9  "Configuring Cluster Parameters".

Configure interaction for the cluster

Optional

See section 1.3.10  "Configuring Interaction for the Cluster".

Configure member devices

Enable NDP globally and for specific ports

Required

See section 1.4.1  "Enabling NDP Globally and on Specific Ports".

Enable NTDP globally and for specific ports

Required

See section 1.4.2  "Enabling NTDP Globally and on Specific Ports".

Enable the cluster function

Required

See section 1.4.3  "Enabling the Cluster Function".

Add a device to a cluster

Optional

See section 1.4.4  "Configuring to Add a Candidate Device to the Cluster".

 

  Caution:

Disabling NDP or NTDP on the management device of a cluster affects the operation of the cluster, although doing so does not remove the cluster.

 

1.3  Management Device Configuration

1.3.1  Enabling NDP Globally and for Specific Ports

Table 1-3 Enable NDP globally and for specific ports

Operation

Command

Description

Enter system view

system-view

Enable NDP globally

ndp enable

Required

By default, NDP is enabled globally.

Enable NDP for the Ethernet port

In system view

ndp enable interface interface-list

Either is required.

By default, NDP is enabled on all ports.

In Ethernet port view

interface interface-type interface-number

ndp enable

 

  Caution:

l      NDP works only if it is enabled globally and on the ports.

l      When a port of an aggregation group is connected with a device in a cluster, the NDP feature must be enabled on all the ports of the aggregation group before the feature can work properly.
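For example, to enable NDP both globally and on a port, either of the following two methods works (the device name Sysname and port GigabitEthernet 1/0/1 are placeholders):

```
<Sysname> system-view
# Enable NDP globally.
[Sysname] ndp enable
# Method 1: enable NDP for the port from system view.
[Sysname] ndp enable interface GigabitEthernet 1/0/1
# Method 2: enable NDP in Ethernet port view.
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] ndp enable
```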

 

1.3.2  Configuring NDP-Related Parameters

Table 1-4 Configure NDP-related parameters

Operation

Command

Description

Enter system view

system-view

Configure the holdtime of NDP information

ndp timer aging aging-time

Optional

By default, the aging time of NDP packets is 180 seconds

Configure the interval to send NDP packets

ndp timer hello hello-time

Optional

By default, the interval of sending NDP packets is 60 seconds
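As an illustrative sketch (the values 200 and 70 are taken from the configuration example in section 1.6):

```
<Sysname> system-view
# Set the holdtime (aging time) of NDP information to 200 seconds.
[Sysname] ndp timer aging 200
# Set the interval for sending NDP packets to 70 seconds.
[Sysname] ndp timer hello 70
```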

 

1.3.3  Enabling NTDP Globally and for Specific Ports

Table 1-5 Enable NTDP globally and for specific ports

Operation

Command

Description

Enter system view

system-view

Enable NTDP globally

ntdp enable

Required

Enter Ethernet port view

interface interface-type interface-number

Enable NTDP for the Ethernet port

ntdp enable

Required

 

  Caution:

l      NTDP works only if it is enabled globally and on the ports.

l      For an Ethernet port, NTDP is mutually exclusive with the BPDU tunnel function. For information about BPDU tunnel, refer to QinQ-BPDU Tunnel Operation Manual.
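A minimal sketch of enabling NTDP globally and on a port (the device name and port number are placeholders):

```
<Sysname> system-view
# Enable NTDP globally.
[Sysname] ntdp enable
# Enable NTDP in Ethernet port view.
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] ntdp enable
```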

 

1.3.4  Configuring NTDP-Related Parameters

Table 1-6 Configure NTDP parameters

Operation

Command

Description

Enter system view

system-view

Configure the hop range within which topology information is to be collected

ntdp hop hop-value

Optional

By default, the hop range for topology collection is 3 hops

Configure the interval to collect topology information

ntdp timer interval-time

Optional

By default, the interval of topology collection is 1 minute.

Configure the hop delay time to forward topology-collection request packets

ntdp timer hop-delay time

Optional

By default, the hop delay time is 200 ms

Configure the port delay time to forward topology collection request packets for the device whose topology information is collected

ntdp timer port-delay time

Optional

By default, the port delay time is 20 ms

Quit to user view.

quit

Start topology information collection

ntdp explore

Optional

 

The ntdp enable command is not compatible with the bpdu-tunnel enable command. You cannot configure these two commands at the same time. For information about BPDU Tunnel, refer to QinQ-BPDU Tunnel Operation Manual.
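The NTDP parameters above can be sketched as follows (all values are illustrative, not recommendations):

```
<Sysname> system-view
# Collect topology information within 2 hops.
[Sysname] ntdp hop 2
# Collect topology information every 3 minutes.
[Sysname] ntdp timer 3
# Set the hop delay to 150 ms and the port delay to 15 ms.
[Sysname] ntdp timer hop-delay 150
[Sysname] ntdp timer port-delay 15
[Sysname] quit
# Start topology information collection manually.
<Sysname> ntdp explore
```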

 

1.3.5  Enabling the Cluster Function

Table 1-7 Enable the cluster function

Operation

Command

Description

Enter system view

system-view

Enable the cluster function globally

cluster enable

Optional

By default, the cluster function is enabled

 

1.3.6  Building a Cluster

Before building a cluster, you must configure a private IP address pool available for the member devices in the cluster. When a candidate device joins the cluster, the management device dynamically assigns the candidate device a private IP address for inner-cluster communication. This enables the management device to manage and maintain member devices.

 

  Caution:

When configuring a cluster, make sure the routing tables are not full. Otherwise, the private IP routes of the cluster may not be advertised, preventing handshake packets from being exchanged and causing devices to join and leave the cluster repeatedly. If the routing table of the management device is full when you build a cluster, all the candidate devices will join and leave the cluster repeatedly. If the routing table of a candidate device is full when it joins the cluster, that candidate device will leave and rejoin the cluster repeatedly.

 

I. Configuring cluster parameters manually

Table 1-8 Configuring cluster parameters manually

Operation

Command

Description

Enter system view

system-view

Specify the management VLAN

management-vlan vlan-id

Optional

By default, VLAN1 is the management VLAN.

Enter cluster view

cluster

Configure a private IP address pool on the device to be used as the management device for the member devices in the cluster

ip-pool administrator-ip-address { ip-mask | ip-mask-length }

Required

Do not configure the IP address pool on the same network segment as the IP addresses of the VLAN interfaces of the management device and member devices. Otherwise, the cluster cannot work properly.

Set the current device as the management device and assign a cluster name

build name

Required

By default, a device is not the management device.

 

  Caution:

l      When the management VLAN is not VLAN 1: if the port on the management device that connects to member devices is a trunk or hybrid port, to implement cluster management, you must configure the port to permit the packets of the management VLAN to pass with tags. If the port on the management device that connects to member devices is an access port, to implement cluster management, you need to configure the port as a hybrid or trunk port and configure it to permit the packets of the management VLAN to pass with tags. See the VLAN Operation for details.

l      When the management VLAN is VLAN 1: if the port on the member device that connects to the management device permits the packets of the management VLAN to pass with tags, configure the management device by following the previous description. If the port on the member device that connects to the management device permits the packets of the management VLAN to pass without tags, to implement cluster management, you must perform one of the following configuration tasks on the corresponding port of the management device: configure the port as an access port; configure the port as a trunk port with VLAN 1 as its default VLAN; or configure the port as a hybrid port with VLAN 1 as its default VLAN and permit the packets of the management VLAN to pass without tags. See the VLAN Operation section for details.

l      You can configure an IP address pool only before the cluster is built. Moreover, you can perform the configuration on the management device only. You cannot change the IP address pool for an existing cluster.
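Putting the steps together, a sketch of building a cluster manually (the cluster name aaa and the address pool are placeholders):

```
<Sysname> system-view
[Sysname] cluster
# Configure a private IP address pool for the member devices.
[Sysname-cluster] ip-pool 10.200.83.1 20
# Set the current device as the management device of cluster aaa.
[Sysname-cluster] build aaa
```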

 

II. Building a cluster automatically

Besides allowing you to build a cluster manually, the system also enables a cluster to be built automatically. You can build a cluster by using the following commands on the management device and following the steps prompted.

During the process of building a cluster automatically, after you enter the name of the cluster to be created as prompted, the system collects the information about all the candidate devices discovered within the specified hop range and adds them to the cluster.

You can press <CTRL+C> to exit automatic cluster establishment. After this operation, no new device will be added and the added devices remain in the cluster.

Table 1-9 Building a cluster automatically

Operation

Command

Description

Enter system view

system-view

Specify the management VLAN

management-vlan vlan-id

Optional

By default, VLAN1 is the management VLAN.

Enter cluster view

cluster

Configure an IP address pool for the cluster

ip-pool administrator-ip-address { ip-mask | ip-mask-length }

Required

Build a cluster automatically

auto-build [ recover ]

Required

 

  Caution:

The VLAN interface IP addresses of the management device and member devices and the IP addresses in the cluster address pool cannot be of the same network segment. Otherwise, the cluster operates improperly.
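A sketch of the automatic approach (the address pool is a placeholder; the system then prompts for the cluster name and collects candidate devices):

```
<Sysname> system-view
[Sysname] cluster
# Configure a private IP address pool for the member devices.
[Sysname-cluster] ip-pool 10.200.83.1 20
# Build the cluster automatically, following the prompts.
[Sysname-cluster] auto-build
```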

 

1.3.7  Configuring Cluster Member Management

Member management covers the following:

l           You can manually designate the candidate device to join a cluster or manually remove the designated member device from the cluster. You must add/remove a member on the management device; otherwise, an error message will be returned.

l           You can control a member device remotely by using the remote control function of the management device. For example, if a member device fails due to incorrect configuration, you can delete its startup configuration file and reboot the member device to restore normal communication between the management device and the member device.

l           On the management device, you can configure and manage the specified member device by switching to the view of the member device. After the configuration is complete, you can switch back to the management device from the member device.

Table 1-10 Configure member management

Operation

Command

Description

Enter system view

system-view

Enter cluster view

cluster

Add a candidate device to a cluster

add-member [ member-number ] mac-address mac-address [ password password ]

Optional

Remove a member device from the cluster

delete-member member-number [ to-black-list ]

Optional

Reboot a specified member device

reboot member { member-number | mac-address mac-address } [ eraseflash ]

Optional

Quit to system view

quit

Quit to user view

quit

Switch between the management device view and a member device view

cluster switch-to { member-number | mac-address mac-address | administrator }

Optional

 

&  Note:

Normally, members are numbered sequentially, and the numbers already assigned to members are tracked by the management device. When a member device joins the cluster again after it exits the cluster, its original member number is assigned to it again if the number is currently not assigned to another member.

 

Telnet is used when you use the cluster switch-to command to switch between the management device and member devices in a cluster. Note the following when switching between the devices of a cluster.

l           Make sure that the telnet server command is enabled for the peer device before switching. Otherwise, switching fails.

l           Authentication is performed when you switch between the management device and a member device in a cluster. You will fail to switch between the management device and a member device if the level-3 super passwords of the two devices are not the same. When a candidate device joins a cluster, its super password is set to that of the management device. It is not recommended to modify the super passwords of cluster members (including the management device and member devices) after a cluster is established, because inconsistent super passwords cause switching failures.

l           When you switch from the management device to a member device, the view level remains unchanged.

l           When you switch from a member device to the management device, the user level is determined by the level preset on the management device.

l           Switching to a device fails if the maximum number of Telnet users on that device has been reached.

l           To prevent resource waste and performance degradation, avoid closed-loop switching. A closed loop occurs when you switch from the management device to a member device and then switch back to the management device from that member device. So, after you switch to a member device from the management device, use the quit command to return to the management device rather than the cluster switch-to administrator command.

Note that the telnet server function is enabled by default on devices where the cluster function is enabled. If both the undo telnet server enable command and the cluster function are present in the configuration file used to boot the device, the undo telnet server enable command does not take effect when the device boots with that configuration file. Instead, the telnet server function is enabled automatically.
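A sketch of common member management operations on the management device (the prompt assumes a cluster named aaa; the MAC address and member number are placeholders):

```
<aaa_0.Sysname> system-view
[aaa_0.Sysname] cluster
# Add a candidate device to the cluster by its MAC address.
[aaa_0.Sysname-cluster] add-member mac-address 00e0-fc01-0011
[aaa_0.Sysname-cluster] quit
[aaa_0.Sysname] quit
# Switch to the view of member device 1; use quit to return later.
<aaa_0.Sysname> cluster switch-to 1
```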

1.3.8  Configuring Cluster Topology Management

White lists and black lists provide the basis for topology management. Their meanings are described as follows:

l           White list for topology management: the network topology confirmed to be correct by the network administrator. The information about nodes and their neighbor relationships at any given moment can be extracted from the current network topology. Meanwhile, the white list can be maintained based on the current network topology, for example, by adding, removing, and modifying nodes.

l           Black list for topology management: any device in the black list is not allowed to join a cluster automatically. The network administrator blacklists a device manually by specifying its MAC address. If a blacklisted device is connected to the network through another device that is not blacklisted, the information about the access device and the access port is recorded automatically.

The white list and black list are mutually exclusive: nodes in the white list must not be in the black list, and vice versa. Note that a topology node can be neither in the white list nor the black list. These are usually new nodes and need to be authenticated by administrators.

The white list and black list persist even if the management switch is powered off. Two backup and recovery mechanisms are available: backup to the FTP server or to the Flash of the management switch. In either backup mode, you need to restore the white list and black list manually. When the management switch restarts or cluster management is reconfigured, the management switch restores the white list and black list from the Flash. You can use the topology restore-from command to restore the topology from the standard topology information on the FTP server or in the Flash of the management device.

Note that the device cannot verify whether the saved standard topology information is correct, so make sure the topology information you save is correct.

Table 1-11 Configure topology management

Operation

Command

Description

Enter system view

system-view

Enter cluster view

cluster

Blacklist a device

black-list add-mac mac-address

Optional

Remove a device from the blacklist

black-list delete-mac { all | mac-address }

Optional

Confirm the current topology of the cluster and save it as base topology

topology accept { all [ save-to { ftp-server | local-flash } ] | mac-address mac-address | member-id member-number }

Optional

Save the standard topology information to the FTP server or the Flash of the management device

topology save-to { ftp-server | local-flash }

Optional

Restore the topology using the standard topology information stored on the FTP server or in the Flash of the management device

topology restore-from { ftp-server | local-flash }

Optional
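For example, to blacklist a device and save the current topology as the base topology (the MAC address is taken from the configuration example in section 1.6; the prompt assumes a cluster named aaa):

```
<aaa_0.Sysname> system-view
[aaa_0.Sysname] cluster
# Blacklist the device with MAC address 00e0-fc01-0013.
[aaa_0.Sysname-cluster] black-list add-mac 00e0-fc01-0013
# Confirm the current topology and save it to the Flash as the base topology.
[aaa_0.Sysname-cluster] topology accept all save-to local-flash
```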

 

1.3.9  Configuring Cluster Parameters

Cluster parameters include device holdtime and handshake interval.

l           Handshake interval: in a cluster, the connections between the member devices and the management device are maintained by handshake packets. The cluster member states and link states are monitored by exchanging handshake packets between member devices and the management device periodically. You can use the timer command in cluster view of the management device to configure the handshake interval.

l           Holdtime of a device: if the management device does not receive handshake packets from a member device in three consecutive handshake intervals, the member device state saved on the management device is switched from the active state to the connect state. Within the holdtime, if a member device in the connect state does not receive handshake packets or management packets which can switch the member device back to the active state, the member device will be switched to the disconnect state. In this case, the management device regards that the member device has left the cluster, and the state of the member device is down on the management device.

Table 1-12 Configure cluster parameters

Operation

Command

Description

Enter system view

system-view

Enter cluster view

cluster

Configure the holdtime for a device

holdtime seconds

Optional

By default, the holdtime is 60 seconds.

Configure a handshake interval

timer interval-time

Optional

By default, the handshake interval is 10 seconds.
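For example, to set the holdtime to 100 seconds and the handshake interval to 20 seconds (illustrative values; the prompt assumes a cluster named aaa):

```
<aaa_0.Sysname> system-view
[aaa_0.Sysname] cluster
# Set the holdtime to 100 seconds.
[aaa_0.Sysname-cluster] holdtime 100
# Set the handshake interval to 20 seconds.
[aaa_0.Sysname-cluster] timer 20
```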

 

1.3.10  Configuring Interaction for the Cluster

After building a cluster, you can configure a shared FTP server, TFTP server, NMS host, and log host on the management device for the whole cluster. A member device in the cluster accesses the configured servers through the management device.

All logs of the member devices in the cluster will be output to the log host configured: when member devices output logs, the logs are directly sent to the management device, which then translates the address of the logs and sends them to the log host configured for the cluster. Likewise, all Trap messages sent by member devices are output to the NMS host configured for the cluster.

Table 1-13 Configure interaction for the cluster

Operation

Command

Description

Enter system view

system-view

Enter cluster view

cluster

Configure the public FTP server for the cluster

ftp-server ip-address [ user-name username password { simple | cipher } password ]

Optional

By default, the cluster has no public FTP server.

Configure the TFTP server for the cluster

tftp-server ip-address

Optional

By default, the cluster has no public TFTP server.

Configure the log host for the cluster

logging-host ip-address

Optional

By default, the cluster has no public log host.

Configure the SNMP host for the cluster

snmp-host ip-address [ community-string read string1 write string2 ]

Optional

By default, the cluster has no SNMP host configured.

Configure the network management (NM) interface for the cluster

nm-interface vlan-interface vlan-id

Optional

By default, the management VLAN interface functions as the network management interface.

 

  Caution:

The log host configured for the cluster takes effect only after you use the info-center loghost command in system view. For more about the info-center loghost command, see the "Information Center Commands".
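A sketch using the server addresses from the configuration example in section 1.6 (the prompt assumes a cluster named aaa):

```
<aaa_0.Sysname> system-view
[aaa_0.Sysname] cluster
# Shared FTP and TFTP server for the cluster.
[aaa_0.Sysname-cluster] ftp-server 63.172.55.1
[aaa_0.Sysname-cluster] tftp-server 63.172.55.1
# Shared log host and SNMP host for the cluster.
[aaa_0.Sysname-cluster] logging-host 69.172.55.4
[aaa_0.Sysname-cluster] snmp-host 69.172.55.4
```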

 

1.4  Configuring Member Devices

1.4.1  Enabling NDP Globally and on Specific Ports

Refer to 1.3.1  Enabling NDP Globally and for Specific Ports.

1.4.2  Enabling NTDP Globally and on Specific Ports

Refer to 1.3.3  Enabling NTDP Globally and for Specific Ports.

1.4.3  Enabling the Cluster Function

Refer to 1.3.5  Enabling the Cluster Function.

1.4.4  Configuring to Add a Candidate Device to the Cluster

Table 1-14 Configure to add a member to the cluster

Operation

Command

Description

Enter system view

system-view

Enter cluster view

cluster

Add a candidate device to the cluster

administrator-address mac-address name name

Optional

By default, a device is not a member of any cluster.
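A sketch of adding the current candidate device to a cluster manually (the management device MAC address and the cluster name aaa are placeholders):

```
<Sysname> system-view
[Sysname] cluster
# Specify the MAC address of the management device and the cluster name.
[Sysname-cluster] administrator-address 00e0-fc01-0001 name aaa
```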

 

1.5  Displaying and Maintaining a Cluster

After the above configuration, you can execute the display commands to view the running status of the cluster and verify the configuration by checking the displayed information.

You can use the reset command in user view to clear NDP statistics.

Table 1-15 Display and maintain cluster configurations

Operation

Command

Display NDP configuration

display ndp [ interface port-list ]

Display the global NTDP information

display ntdp

Display device information collected through NTDP

display ntdp device-list [ verbose ]

Display state and statistics information about a cluster

display cluster

Display the base topology of the cluster

display cluster base-topology [ mac-address mac-address | member-id member-number ]

Display the current blacklist of the cluster

display cluster black-list

Display the information about the candidate devices of a cluster

display cluster candidates [ mac-address mac-address | verbose ]

Display the current topology of the cluster or the topological path between two nodes

display cluster current-topology [ mac-address mac-address [ to-mac-address mac-address ] | member-id member-number [ to-member-id member-number ] ]

Display the information about the cluster members

display cluster members [ member-number | verbose ]

Clear the NDP statistics on a port

reset ndp statistics [ interface interface-list ]
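For example, on the management device (the prompt assumes a cluster named aaa):

```
# Display the state and statistics of the cluster.
<aaa_0.Sysname> display cluster
# Display information about the cluster members.
<aaa_0.Sysname> display cluster members
# Clear NDP statistics on all ports.
<aaa_0.Sysname> reset ndp statistics
```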

 

1.6  HGMP V2 Configuration Example

I. Network requirements

Three switches form a cluster, in which:

l           The management device is an S5500-SI series switch.

l           The rest are member devices.

As the management device, the S5500-SI switch manages the other two member devices. The detailed information about the cluster is as follows.

l           The two member devices are connected to GigabitEthernet1/0/2 and GigabitEthernet1/0/3 ports of the management device.

l           The management device is connected to the external network through its GigabitEthernet1/0/1 port.

l           GigabitEthernet1/0/1 of the management device belongs to VLAN 2, and the IP address of the corresponding VLAN interface is 163.172.55.1.

l           All the devices in the cluster use the same FTP server and TFTP server.

l           The FTP server and TFTP server share one IP address: 63.172.55.1.

l           The SNMP site and log host share one IP address: 69.172.55.4.

l           The management device collects topology information every three minutes.

l           Blacklist the device whose MAC address is 00e0-fc01-0013.

II. Network diagram

Figure 1-3 Network diagram for HGMP cluster configuration

III. Configuration procedure

1)         Configure the member devices (taking one member as an example)

# Enable NDP globally and for GigabitEthernet1/0/1.

<Sysname> system-view

[Sysname] ndp enable

[Sysname] interface GigabitEthernet 1/0/1

[Sysname-GigabitEthernet1/0/1] ndp enable

[Sysname-GigabitEthernet1/0/1] quit

# Enable NTDP globally and for GigabitEthernet1/0/1.

[Sysname] ntdp enable

[Sysname] interface GigabitEthernet 1/0/1

[Sysname-GigabitEthernet1/0/1] ntdp enable

[Sysname-GigabitEthernet1/0/1] quit

# Enable the cluster function.

[Sysname] cluster enable

2)         Configure the management device

# Enable NDP globally and for the GigabitEthernet1/0/2 and GigabitEthernet1/0/3 ports.

<Sysname> system-view

[Sysname] ndp enable

[Sysname] interface GigabitEthernet 1/0/2

[Sysname-GigabitEthernet1/0/2] ndp enable

[Sysname-GigabitEthernet1/0/2] quit

[Sysname] interface GigabitEthernet 1/0/3

[Sysname-GigabitEthernet1/0/3] ndp enable

[Sysname-GigabitEthernet1/0/3] quit

# Configure the holdtime of NDP information to be 200 seconds.

[Sysname] ndp timer aging 200

# Configure the interval to send NDP packets to be 70 seconds.

[Sysname] ndp timer hello 70
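As a hedged aside, the aging time is expected to be no shorter than the hello interval; otherwise a neighbor entry could expire before the next hello packet refreshes it. A minimal sketch of this relationship (the variable names are illustrative, not CLI keywords):

```python
# Illustrative check (not a CLI command): NDP neighbor entries are refreshed
# by hello packets, so the aging time should be at least as long as the
# hello interval, or entries could expire between refreshes.
ndp_hello_interval_s = 70   # value set by "ndp timer hello 70"
ndp_aging_time_s = 200      # value set by "ndp timer aging 200"

print(ndp_aging_time_s >= ndp_hello_interval_s)  # True: entries survive between hellos
```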

# Enable NTDP globally and for GigabitEthernet1/0/2 and GigabitEthernet1/0/3 ports.

[Sysname] ntdp enable

[Sysname] interface GigabitEthernet 1/0/2

[Sysname-GigabitEthernet1/0/2] ntdp enable

[Sysname-GigabitEthernet1/0/2] quit

[Sysname] interface GigabitEthernet 1/0/3

[Sysname-GigabitEthernet1/0/3] ntdp enable

[Sysname-GigabitEthernet1/0/3] quit

# Configure the hop count to collect topology to be 2.

[Sysname] ntdp hop 2

# Configure the delay time for topology-collection request packets to be forwarded on member devices to be 150 ms.

[Sysname] ntdp timer hop-delay 150

# Configure the delay time for topology-collection request packets to be forwarded through the ports of member devices to be 15 ms.

[Sysname] ntdp timer port-delay 15

# Configure the interval to collect topology information to be 3 minutes.

[Sysname] ntdp timer 3
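The hop-delay and port-delay timers stagger how quickly a topology-collection request fans out, so that responses do not reach the management device all at once. A rough, hedged estimate of the delay one member adds before the request has left all of its forwarding ports, assuming hop-delay applies before the first port forwards and port-delay before each subsequent port (an interpretation of the descriptions above, not vendor-confirmed arithmetic):

```python
# Illustrative estimate (not a CLI command) of the forwarding delay a member
# device adds to a topology-collection request, using the timers set above.
hop_delay_ms = 150   # "ntdp timer hop-delay 150": delay before the first port forwards
port_delay_ms = 15   # "ntdp timer port-delay 15": extra delay per subsequent port

def member_forwarding_delay_ms(num_forwarding_ports):
    """Delay before the request has left all forwarding ports of one member."""
    return hop_delay_ms + port_delay_ms * (num_forwarding_ports - 1)

print(member_forwarding_delay_ms(1))  # 150
print(member_forwarding_delay_ms(3))  # 180
```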

# Enable the cluster function.

[Sysname] cluster enable

# Enter cluster view.

[Sysname] cluster

[Sysname-cluster]

# Configure an IP address pool for the cluster. The IP address pool contains six IP addresses, starting from 172.16.0.1.

[Sysname-cluster] ip-pool 172.16.0.1 255.255.255.248
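The mask 255.255.255.248 (/29) yields a block of eight addresses, of which six are assignable hosts once the network and broadcast addresses are excluded, matching the "six IP addresses" stated above. A quick check with Python's standard ipaddress module:

```python
import ipaddress

# The pool configured above: starting address 172.16.0.1, mask 255.255.255.248.
# A /29 block holds 8 addresses; excluding the network and broadcast addresses
# leaves 6 assignable host addresses.
net = ipaddress.ip_network("172.16.0.0/29")
hosts = [str(h) for h in net.hosts()]

print(len(hosts))           # 6
print(hosts[0], hosts[-1])  # 172.16.0.1 172.16.0.6
```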

# Specify a name for the cluster and create the cluster.

[Sysname-cluster] build aaa

[aaa_0.Sysname-cluster]

# Configure the holdtime of the member device information to be 100 seconds.

[aaa_0.Sysname-cluster] holdtime 100

# Configure the interval to send handshake packets to be 10 seconds.

[aaa_0.Sysname-cluster] timer 10

# Configure the FTP Server, TFTP Server, Log host and SNMP host for the cluster.

[aaa_0.Sysname-cluster] ftp-server 63.172.55.1

[aaa_0.Sysname-cluster] tftp-server 63.172.55.1

[aaa_0.Sysname-cluster] logging-host 69.172.55.4

[aaa_0.Sysname-cluster] snmp-host 69.172.55.4

# Blacklist the device whose MAC address is 00e0-fc01-0013.

[aaa_0.Sysname-cluster] black-list add-mac 00e0-fc01-0013

 

&  Note:

l      Upon the completion of the above configurations, you can execute the cluster switch-to { member-number | mac-address H-H-H } command on the management device to switch to a member device for maintenance and management. You can then execute the quit command on the member device to return to the management device. Make sure the telnet server function is enabled on the peer device before switching.

l      On a member device, you can also use the cluster switch-to administrator command to switch to the management device view.

l      On the management device, you can use the reboot member { member-number | mac-address H-H-H } [ eraseflash ] command to reboot a member device. Refer to the related configuration sections in this manual for details.

l      Upon the completion of the above configurations, logs and SNMP trap messages of all the members in the cluster are sent to the SNMP host.

 
