H3C AD-DC Solution
H3C Application-Driven Data Center (AD-DC) is a unified next-generation data center solution designed to support accelerated service delivery. It helps customers build an intelligent data center network that can change quickly to accommodate the exponentially growing traffic and accelerated service provisioning driven by cloud computing, big data, and the mobile Internet.
H3C AD-DC solution is an intelligent, comprehensive solution that provides management, control, analysis, and AI for data center network scenarios. This solution delivers the following benefits:
Uses graphical, visualized orchestration to enable automated deployment. It provides one-stop, end-to-end intent assurance spanning architecture design, pre-deployment simulation, network construction, and operation assurance.
Provides basic network management and automated device configuration provisioning through a controller component.
Enables uniform network profile and security policy deployment across the network.
Uses an analyzer component to offer AI- and big data-assisted analytics, enabling administrators to gain a holistic view of the network and quickly identify and resolve network issues.
Implements AI ECN in high-performance computing and distributed/centralized storage scenarios to enable zero packet loss and full consolidation of high-performance computing, storage, and general-purpose computing over Ethernet (a conceptual sketch follows this list).
Provides an environmentally friendly, energy-efficient data center with an optimized network architecture, lossless Ethernet, and energy-saving management.
Provides an integrated management, control, and analysis system that helps customers build a new type of data center network: simplified and unified, intent-based and intelligent, ultra-broadband and lossless, and green and energy efficient. Such a network enables rapid service delivery and high-performance operation, meeting the evolving requirements of the cloud computing era.
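The AI ECN behavior mentioned above can be pictured as a control loop that tunes the ECN marking threshold of each egress queue based on observed queue state, so RoCE flows are slowed by early congestion marks rather than drops. The following minimal Python sketch only illustrates that loop; the thresholds, names, and logic are assumptions for illustration, not H3C's algorithm.

```python
# Conceptual sketch of what an AI ECN control loop automates: adjusting the
# ECN marking threshold of one egress queue from its observed average depth.
# All names and numbers are illustrative assumptions, not H3C's implementation.

def tune_ecn_threshold(current_threshold_kb: int,
                       avg_queue_depth_kb: float,
                       queue_capacity_kb: int) -> int:
    """Return a new ECN marking threshold for one egress queue."""
    utilization = avg_queue_depth_kb / queue_capacity_kb
    if utilization > 0.8:        # queue close to full: mark earlier to avoid drops
        return max(current_threshold_kb // 2, 10)
    if utilization < 0.2:        # queue mostly idle: mark later to keep throughput
        return min(current_threshold_kb * 2, queue_capacity_kb // 2)
    return current_threshold_kb  # steady state: leave the threshold unchanged

# A busy queue (90% utilized) gets its marking threshold halved.
print(tune_ecn_threshold(400, 900.0, 1000))  # -> 200
```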
Extensive support for standards and protocols
H3C AD-DC is designed for openness. It provides extensive support for standards and protocols, including BGP EVPN, OVSDB, OpenFlow 1.3, NETCONF, INT, gRPC, and ERSPAN. Customers can integrate it with mainstream resource management platforms or cloud platforms, enabling unified management and avoiding the risk of vendor lock-in.
Multiple networking models
AD-DC supports automated underlay deployment using a spine-leaf, spine-aggregation-leaf, or spine-aggregation-leaf-access network model, as well as on-demand deployment of network-based overlay and hybrid overlay.
Multi-egress flexibility
AD-DC supports GUI-based orchestration of multiple egresses and can use multiple borders as fabric egresses. Different tenants or VPCs can choose different borders as egress devices.
Flexible networking architecture options
AD-DC allows you to select a networking architecture as needed, ranging from single fabric and single DC to multi-fabric, multi-DC, and remote leaf.
Flexible deployment modes
AD-DC supports deployment on a single node or VM to address small-scale scenarios, and deployment in cluster mode for enhanced service availability.
Network fabric automation—The solution offers automated role-based underlay fabric deployment, which reduces initial deployment workload and improves deployment efficiency.
Service deployment automation—The solution builds service-adaptive network models and enables automated configuration provisioning on overlay logical networks, which greatly accelerates service provisioning and increases service deployment efficiency by more than 90%.

Tiered management
In the large-scale multi-DC scenario, each DC has its own controller component. To provide cross-DC automated, unified network management, AD-DC introduces the Super Controller for tiered management. In the southbound direction, the Super Controller centrally manages the controllers in the DCs and enables unified management of network resources. In the northbound direction, the Super Controller provides a unified network management interface for the DCs and enables unified network resource orchestration across DCs from the perspective of tenants.
Unified management and orchestration of multiple fabrics from one controller component
In a multi-fabric single-DC scenario, the solution deploys one controller to provide unified management and orchestration across the network fabrics. In the southbound direction, the controller centrally manages network resources distributed across the fabrics. In the northbound direction, the SDN controller interacts with the OpenStack cloud platform through a Neutron plugin, enabling unified service orchestration across fabrics from the perspective of tenants.
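The cloud-side half of this integration can be sketched with the openstacksdk: with the H3C Neutron plugin installed, a standard Neutron call such as the one below would be translated by the controller into overlay configuration on the fabrics. The cloud name and network attributes are placeholders.

```python
# Create a Neutron network and subnet with openstacksdk. With the H3C Neutron
# plugin in place, the controller maps these API objects to fabric overlays.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

network = conn.network.create_network(name="tenantA-web")
subnet = conn.network.create_subnet(
    network_id=network.id,
    ip_version=4,
    cidr="10.10.1.0/24",       # placeholder tenant subnet
    name="tenantA-web-subnet",
)
print(network.id, subnet.cidr)
```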
Flexible, highly reliable deployment of the controller
The controller allows you to choose a disaster recovery solution, such as cold cluster backup or primary/secondary cluster backup, as needed to improve management and control plane availability.
On-demand security resource scheduling—Security resources are pooled, service-oriented, and graphically orchestrated based on policy-driven security service chaining. Security policies can be deployed automatically to meet businesses' security requirements on demand, providing comprehensive protection of both internal and external traffic for tenants.

Unified network and security for coordinated defense—Through network-wide "network + security" collaboration and coordinated defense, AD-DC provides a three-tier coordinated closed-loop defense system that encompasses analysis, control, and implementation capabilities. AD-DC automates business-driven policy establishment and deployment and enables the transition from manual network management and maintenance to AI-driven operations (AIOps), reducing operating expenses by more than 80%.
Fine-grained isolation based on EPGs—Hardware entry-based EPGs allow you to group hosts by discrete IPs and configure flexible inter-group policies, providing whitelists, blacklists, stateless firewalling, and service chains, as well as host-granularity network isolation for the data center network.
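A minimal sketch of the EPG lookup model, assuming a whitelist policy: hosts are grouped into EPGs by discrete IPs, and traffic between two EPGs is denied unless an inter-group rule permits it. The EPG names, members, and rule below are illustrative, not the hardware-table semantics.

```python
# Minimal whitelist-model sketch of EPG-based microsegmentation lookup.
from ipaddress import ip_address

# EPG -> set of discrete member host IPs (illustrative values)
epgs = {
    "web": {ip_address("10.1.1.11"), ip_address("10.1.1.12")},
    "db":  {ip_address("10.2.2.21")},
}

# (source EPG, destination EPG, L4 port) -> action
rules = {("web", "db", 3306): "permit"}

def classify(ip) -> str | None:
    """Map a host IP to its EPG, if any."""
    return next((name for name, members in epgs.items() if ip in members), None)

def decide(src_ip: str, dst_ip: str, dport: int) -> str:
    src, dst = classify(ip_address(src_ip)), classify(ip_address(dst_ip))
    return rules.get((src, dst, dport), "deny")   # whitelist: default deny

print(decide("10.1.1.11", "10.2.2.21", 3306))  # permit
print(decide("10.1.1.11", "10.2.2.21", 22))    # deny
```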

As the pipeline that transports data, the data center network requires seamless integration and compatibility with compute resources. Based on the standard OpenStack architecture and projects, AD-DC can automate the provisioning of all types of compute resources, including virtual machines, bare metal servers, and containers, improving compute resource provisioning efficiency by 70%.
Compute resource collaboration with virtualization platforms—By coupling with OpenStack's VLAN and VXLAN models, AD-DC supports most mainstream compute virtualization platforms in the industry, including KVM, VMware, and CAS. The controller can interoperate with virtualization platforms such as VMware vCenter, Microsoft System Center, and Red Hat Virtualization Manager to achieve dynamic online association between computing and network resources as well as cross-vCenter dynamic migration.
Compute resource collaboration with bare metal—Based on the OpenStack Ironic project, AD-DC seamlessly integrates with OpenStack to provide a one-stop, full-lifecycle service for bare metal resources on tenant networks.
Compute resource collaboration with containers
·Container network Layer 2 bridging solution—This solution applies to scenarios where a new data center is being established. Based on proprietary CNI plugins, AD-DC can cooperate with open-source container platforms built on Kubernetes and OpenShift to automate container network deployment and enable on-demand Layer 2 and Layer 3 interworking and isolation among container tenants, containers and VMs, and bare metal servers, so that network connections are available to containers on demand.
·Container network Layer 3 routing solution—This solution applies to scenarios where services are already deployed in containers on a Calico container network and the network needs a transformation to SDN. The AD-DC controller automates BGP peering between the Calico vRouter and the switches to enable route advertisement for Calico container endpoints across the SDN network and automated network connectivity between containers, as sketched below.
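On the Calico side, the peering the controller automates can be approximated with Calico's public BGPPeer resource, created here through the Kubernetes API. The peer IP and AS number are placeholders; in AD-DC the controller drives this peering and the matching switch-side configuration itself.

```python
# Sketch of Calico-side BGP peering toward a fabric switch, expressed with
# Calico's BGPPeer custom resource via the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()

bgp_peer = {
    "apiVersion": "crd.projectcalico.org/v1",
    "kind": "BGPPeer",
    "metadata": {"name": "leaf-switch-peer"},
    "spec": {
        "peerIP": "10.0.0.1",   # placeholder: leaf switch peering address
        "asNumber": 65001,      # placeholder: fabric AS number
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="crd.projectcalico.org",
    version="v1",
    plural="bgppeers",
    body=bgp_peer,
)
```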
Automated intent-based service deployment
On traditional networks, administrators must configure devices from the CLI based on business requirements, which is expertise-demanding, labor-intensive, time-consuming, and error-prone. In contrast, with intent-based networking, users do not need to understand the underlying configuration details: the network controller automatically translates business requirements into device configurations and deploys them.
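A toy illustration of this translation step, assuming a made-up intent schema and Comware-style CLI output; real controllers work from formal network models and NETCONF, so treat this only as the shape of the idea.

```python
# Translate a declarative intent into device configuration lines. The intent
# schema and the CLI syntax generated here are illustrative assumptions.

def translate_intent(intent: dict) -> list[str]:
    """Turn a declarative intent into device configuration lines."""
    cfg = [
        f"vsi {intent['tenant']}-{intent['network']}",  # overlay service instance
        f" vxlan {intent['vni']}",
    ]
    for subnet in intent["subnets"]:
        cfg.append(f"interface Vsi-interface{intent['vni']}")
        cfg.append(f" ip address {subnet}")             # distributed gateway address
    return cfg

intent = {"tenant": "tenantA", "network": "web", "vni": 10010,
          "subnets": ["10.10.1.1/24"]}
print("\n".join(translate_intent(intent)))
```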
Simulation before service deployment
Before deploying services on the production network, the system simulates service deployment on a digital twin network built 1:1 from the real production network and assesses the risks and impacts of the network change. The configuration is deployed to the production environment only when the assessment result meets expectations.
Intent verification after service deployment
After the configuration is deployed on the network, the analyzer periodically collects relevant data, including device configuration, device ARP entries, FIB entries, network topology, and network device status. The analyzer submits the collected data to the intelligent analysis module, which simulates the forwarding behavior of network devices with simulation verification algorithms and provides verification results for network-wide connectivity, route reachability, and configuration consistency.
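The route-reachability part of such verification can be sketched as longest-prefix-match forwarding simulated over the collected FIB snapshots, walking hop by hop from a source device toward a destination. The device names and FIB contents below are illustrative.

```python
# Simulate longest-prefix-match forwarding over per-device FIB snapshots to
# verify route reachability; a bounded walk also catches black holes and loops.
from ipaddress import ip_address, ip_network

# device -> list of (prefix, next-hop device or "local"); illustrative data
fibs = {
    "leaf1":  [("10.1.1.0/24", "local"), ("0.0.0.0/0", "spine1")],
    "spine1": [("10.1.1.0/24", "leaf1"), ("10.2.2.0/24", "leaf2")],
    "leaf2":  [("10.2.2.0/24", "local")],
}

def next_hop(device: str, dst) -> str | None:
    """Longest-prefix match against one device's FIB snapshot."""
    matches = [(ip_network(p), nh) for p, nh in fibs[device]
               if dst in ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

def reachable(src_device: str, dst_ip: str, max_hops: int = 8) -> bool:
    dst, hop = ip_address(dst_ip), src_device
    for _ in range(max_hops):
        hop = next_hop(hop, dst)
        if hop == "local":
            return True
        if hop is None:
            return False   # black hole: no route on this device
    return False           # loop or path too long

print(reachable("leaf1", "10.2.2.5"))  # True, via spine1 -> leaf2
```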
Closed loop for faults
The analyzer can identify and predict device failures, network failures, protocol failures, overlay failures, and service failures based on various metric data collected over the network, and it uses notifications, suggestions, and resolution deployment to help resolve issues quickly and close the loop on faults.

Multiple data collection methods, network health visibility
The analyzer can use gRPC, telemetry, ERSPAN, and in-band telemetry (INT) technologies to achieve millisecond-precision data capture, data analysis, and real-time fault detection, helping users gain a holistic view of the network and visibility into tenant networks.
AI intelligent analysis, precise fault location, risk prediction, trend analysis, and closed-loop fault resolution
AD-DC provides AI-powered intelligent analysis. Precise fault location, risk prediction, trend analysis, and closed-loop business O&M that encompasses perception, prejudgment, and execution shorten fault resolution time from hours to minutes. AD-DC automates a closed-loop process for fault events, from discovery and diagnosis to solution and closure. When a fault occurs in the network, the analyzer detects, locates, and identifies the root cause in real time and triggers the controller to issue a solution that resolves the fault.

Inter-DC traffic intelligent analysis from the analyzer
·Analyzes the inter-DC traffic composition, link utilization and status between sites, and application distribution on the links from the data center perspective.
·Analyzes and presents the inter-DC traffic and traffic proportion based on the application information combined with the inter-DC traffic composition.
SDN-based
A software-defined data center network allows administrators to customize the data center more flexibly at the control plane. The H3C SDN controller, SeerEngine, is the core enabler of the programmable data center. With its high reliability, high performance, fully open interfaces, and programmable extensibility, SeerEngine is changing how networks are deployed and operated. The controller provides rich, flexible functions that help enterprises adapt to changing network trends and build an intelligent, secure, and reliable information network.
Northbound openness
In the northbound direction, the controller adopts open, standard RESTful APIs, allowing users to develop programmable SDN apps of their own. The controller can interoperate with a standard OpenStack, Kubernetes, or OpenShift platform through Neutron/CNI APIs, which enables unified management and on-demand orchestration of network resources and deep cloud-network integration.
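A hedged sketch of what a northbound REST call might look like from Python; the resource path, payload fields, and token header below are placeholders rather than the documented SeerEngine-DC API, so consult the controller's API reference for the real endpoints.

```python
# Sketch of a northbound RESTful call to an SDN controller. The URL path,
# payload schema, and auth header are assumptions for illustration only.
import requests

CONTROLLER = "https://controller.example.com:8443"
TOKEN = "<auth-token>"   # placeholder: obtained from the controller's login API

payload = {"name": "tenantA", "description": "demo tenant"}  # placeholder schema
resp = requests.post(
    f"{CONTROLLER}/sdn/v2.0/tenants",   # hypothetical resource path
    json=payload,
    headers={"X-Auth-Token": TOKEN},
    verify=False,                       # lab only; use CA-signed certs in production
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```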
Southbound openness
In the southbound direction, the controller automates device configuration provisioning through OpenFlow, NETCONF, and OVSDB protocols.
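Southbound provisioning of the kind the controller performs can be sketched with ncclient, a standard Python NETCONF client. The XML payload and its namespace are placeholders; Comware devices expose vendor-specific NETCONF data models that the controller uses internally.

```python
# Push a configuration fragment to a switch over NETCONF with ncclient.
# The XML body and namespace below are illustrative placeholders.
from ncclient import manager

config = """
<config>
  <top xmlns="http://www.example.com/netconf">  <!-- placeholder namespace -->
    <vlan><id>100</id><name>tenantA-web</name></vlan>
  </top>
</config>
"""

with manager.connect(host="10.0.0.10", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    m.edit_config(target="running", config=config)
```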
Runbook
Runbook is a workflow orchestration tool based on model-driven concepts. It can flexibly customize network change processes, sequence scattered atomic APIs, and offer comprehensive functionality based on business needs. It supports no-code/low-code business orchestration, which shortens the time to launch services and enhances the automation efficiency and reliability of network changes. Additionally, Runbook supports operations such as tracking the progress of flow instances, querying, and configuration rollback to adapt to complex and ever-changing application scenarios.
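The Runbook idea of sequencing atomic APIs with progress tracking and rollback can be sketched as a small workflow runner; the step functions below are placeholders for atomic northbound API calls.

```python
# Minimal workflow-runner sketch: run steps in order, track progress, and
# roll back completed steps in reverse order if any step fails.

def run_workflow(steps):
    """steps: list of (name, do, undo) callables executed in order."""
    done = []
    for name, do, undo in steps:
        try:
            print(f"running step: {name}")
            do()
            done.append((name, undo))
        except Exception as err:
            print(f"step '{name}' failed ({err}); rolling back")
            for undone_name, rollback in reversed(done):
                print(f"rolling back: {undone_name}")
                rollback()
            raise

steps = [
    ("create tenant",  lambda: None, lambda: None),  # placeholders for
    ("create vRouter", lambda: None, lambda: None),  # atomic API calls
    ("bind subnet",    lambda: None, lambda: None),
]
run_workflow(steps)
```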
Since its release, the AD-DC solution has helped a wide range of customers across industries accelerate their digital transformation.
Feature | Description
Underlay network automation | · Underlay network automation with up to four layers: spine, aggregation, leaf, and access. · Automated underlay network provisioning with DRNI (MLAG) groups: When the DRNI (MLAG) option is enabled in the fabric template and an intra-portal link (IPL) exists between two same-layer devices, the DRNI (MLAG) device group is automatically configured with the two devices. · Automatic link-aggregation group (LAG) creation: The link-aggregation group can be configured manually or automatically on the switches of the DRNI (MLAG) group when a server connects to the group with multiple links. · IPv4 or IPv6 underlay network. · BGP peer configuration of the DRNI (MLAG) network with Calico. · Manages devices with certificates. · Device replacement wizard, including device replacement impact analysis and assessment. · Automatically discovered and manually added links in the topology. · Sends CLI commands to a specific device. · Bulk configuration of commands on multiple devices. |
Overlay network automation | · Bulk deployment of tenant, virtual network, virtual router, virtual subnet, virtual port, gateway, external network, floating IP and VLAN-VXLAN mapping with templates. · VLAN-VXLAN mapping: Map VLAN ID to overlay logical network’s VXLAN segment ID for access control. Each mapping is applied to specific interfaces or devices. · VXLAN service pre-configuration for server master and backup network interface failover. · Virtual Port QoS rate limit: Each virtual port represents one endpoint (VM, K8s pod, bare metal server, etc.). · Traffic behavior for specified tenant services: DSCP remark, rate limit, etc. for certain traffic that matches the classifier. · Configure the routing table and bind the routes to a vRouter: Static route and default route configuration for the external network and inter-VPN traffic. |
EVPN VXLAN | VXLAN with BGP EVPN control plane and distributed gateways. |
Multicast | · Layer 3 multicast service in the same VPN and to the external network. · Layer 3 multicast service across VPNs. · Layer 2 multicast service across different data centers. · Layer 3 multicast service across different data centers. |
Container network | · Interoperates with K8s to automatically provide network connectivity for K8s pods: Layer 2 and Layer 3 connectivity in the same VPN. · Interoperates with K8s to provide security policies for K8s pods. · Interoperates with K8s to provide QoS for K8s pods: Rate limit, DSCP remark, etc. · Access the service in a pod using the pod's IP and port number, e.g., access the HTTP service in a pod using port 8080. · K8s cluster IP. · Interconnectivity with Calico. · K8s pod IP and node IP interconnectivity. · K8s pod fixed IP address. · H3C AD-DC interoperates with K8s via the H3C CNI plug-in. |
IPv6 | · IPv4 and IPv6 dual-stack overlay network: IPv4/IPv6 endpoints can access the network. · IPv6 endpoints can access the IPv6 external network without NAT. |
Easy provisioning and expansion | · Network provisioning wizard for the underlay network. · Network expansion wizard for the underlay network. |
Security function | · Security policy for the endpoint: Applies to the virtual port. Each virtual port represents one endpoint (VM, K8s pod, bare metal server, etc.). · Floating IP: H3C AD-DC can configure the destination NAT (DNAT) service (floating IP service) on the H3C firewall to allow external users to use a public IP address and port number to access an internal virtual port's IP address and port number. · Firewall as a Service (FWaaS): Manages H3C firewalls, including virtual firewall instances and firewall policies. · Antivirus as a Service (AVaaS): Configures antivirus policies on the H3C firewall and provides antivirus services. · IPS as a Service (IPSaaS): Configures IPS policies on the H3C firewall and provides IPS services. · Load Balancing as a Service (LBaaS): Configures H3C load balancer policies and provides LB services. · Manages F5 load balancer policies and provides LB services. |
IP service chain | · The policy-based IP service chain redirects traffic between two contexts through one or multiple firewalls, load balancers, or third-party value-added service (VAS) instances. Each context matches certain private subnets (VXLAN segments) or external network subnets. · Drag-and-drop service chain configuration. · East-west IP service chain in a VPN: Each virtual router in H3C AD-DC represents a VPN. · East-west IP service chain across two VPNs. · East-west IP service chain across two fabrics: Each fabric contains a group of border, spine, and leaf physical devices that form a network. The two fabrics are in the same data center. · East-west IP service chain with multiple hops: Each hop is a firewall, load balancer, or third-party VAS instance. · North-south IP service chain. · North-south IP service chain with multiple hops. · North-south IP service chain for all external traffic or traffic to specific external subnets. · North-south IP service chain with excluded external subnets. · Service chain firewall bypass: Each firewall has an optional bypass mode. · Reuse of a firewall instance in multiple IP service chains. · IP service chain priority: If traffic matches multiple service chains, it uses the service chain with the highest priority. · IP service chain through a third-party value-added service (VAS) instance. · North-south IP service chain through a third-party firewall device for SNAT/DNAT. · North-south IP service chain through a third-party load balancer device with its virtual IP. |
Microsegmentation | · The microsegmentation application policy for a vRouter (VPN) performs a certain action, including deny, permit, redirect (service chain), etc., on traffic between two EPGs. · Whitelist and blacklist: An application policy can be applied to a VPN (vRouter) with a default action (deny all or permit all) and exception rules. Exception rules can be created for traffic between two EPGs as a whitelist or blacklist. · Microsegmentation service chain matching Layer 4 protocol and port: The microsegmentation service chain applies to traffic between two endpoint groups (EPGs) and can match Layer 4 protocol and port. The IP service chain (discussed above) applies to traffic between two contexts; a context can only specify a private or external subnet, whereas an EPG can specify many more attributes, including subnet, VM, host, and more. · Microsegmentation service chain with EPGs matching host name, VM name, and VM MAC address: The match schemes include “exact match”, “include”, “start with”, and “end with”. · Microsegmentation service chain through the firewall. · Microsegmentation service chain through the load balancer. · Microsegmentation service chain through a third-party value-added service (VAS). · Microsegmentation service chain with multiple hops. · Microsegmentation service chain priority: If traffic matches multiple microsegmentation service chain rules, it uses the rule with the highest priority. · Source-side matching: The network matches traffic and applies the traffic policy on the source-side leaf switch for maximum security. · IPv4/IPv6 microsegmentation north-south service chain. · Microsegmentation north-south service chain with multiple external networks. · Microsegmentation north-south service chain with multiple external networks distributed in two fabrics for redundancy. |
2-in-1 deployment | Service leaf and server leaf in one. Service leaf and border in one. |
Basic operation and maintenance (O&M) | · Logs, alarms, and dashboard. · Interactive display of physical network topology, logical network topology, and application topology. · Loop detection and automatic removal. · Underlay network check: Connectivity check, black hole route check, routing loop check, and configuration check. · Overlay configuration consistency check between the controller and the device: Inconsistent configuration can be synchronized from AD-DC to the network device. · Configuration consistency check whitelist: Configuration in the whitelist is not displayed in the configuration differences between the controller and the network device. · VM IP search. · Radar detection: Displays the path from a specified source to a specified destination in the topology. The source/destination can be a vPort's IP address (the IP address of a VM, K8s pod, or bare metal server) or an external IP address. |
Health assessment and insight | · Topology overview. · Network health overview. · Device health. · Chip health. · Interface health. · Queue health. · Transceiver modules health. · Daily, weekly and monthly health report. |
Key performance indicators assessment and insight | · Link statistics analysis. · Transient Capture Buffer (TCB) packet loss statistics: Monitors packet loss in the buffer and displays packet loss for each traffic flow. · Mirror on Drop (MOD) packet loss statistics: Monitors packet loss in the forwarding process on each device and displays packet loss for each traffic flow. · Configuration change analysis and visualization. · Hardware resource change analysis and visualization: Including ARP, MAC, IPv4 routing, IPv6 routing, LLDP neighbor, VRF, VSI, L2VPN MAC tables, and more. · Software version change. |
TCP flow analysis | · Fabric TCP flow overview. · Host-dimension TCP flow analysis. · Application-dimension TCP flow analysis. · TCP flow session statistics. · Illegal flow analysis with configured compliance rules. · Illegal flow analysis of SYN flood attack. |
INT flow analysis | In-band Network Telemetry (INT) analyzes latency caused by each device. |
Topology interactive display based intelligent analysis | · Physical network topology, logical network topology, and application topology interactive display. · Inter-application topology. |
Issue analysis | · Network Issues: Grouped into multiple types and sub-types. · Application Issues: Grouped into multiple types and sub-types. |
Intelligent troubleshooting | · Device type problems. · Network type problems. · Protocol type problems. · Overlay type problems. · Problem discovery, identification and remediation in minutes (Closed-loop troubleshooting): Based on AI and big data technologies, H3C AD-DC helps IT team troubleshoot and remediate the problem in minutes with recommended remediation action for problems. |
Intelligent prediction | Device KPI trend and prediction. |
Intent simulation | · Device replacement impact analysis and assessment. · Service and configuration change simulation and assessment. |
Intent verification | · Reachability verification. · Consistency verification. · Isolation verification. · Black hole route and routing loop existence verification. |
Emergency recovery | · Contingency plan with plan templates. · Tenant snapshot rollback. · Entire network restore point rollback. |
AD-DC anywhere | · Connectivity across multiple fabrics: VMs in the same VPN can access each other across fabrics. · Remote Leaf: The remote leaf switch across WAN. |
AD-DC lite | · All-in-one device: Server leaf, service leaf, border, spine, edge device roles in one device. · Single-node mode or three-node mode controller cluster. · Manages security devices (H3C firewalls) in the controller. |
Multiple external network gateways | · Multiple gateways for a VPN in a single fabric. · Multiple VPNs, each with multiple gateways in a single fabric. · Multiple fabrics are managed by the same H3C AD-DC: Each fabric includes multiple physical Spine/Leaf/gateway/external network devices to form a full functional network. · Two gateways (deployed in two separate fabrics) for multiple fabrics. For each VPN, one gateway acts as the active gateway for VMs in all specified fabrics to access the external network while the other one acts as the standby gateway. · Active-standby gateways in two fabrics with H3C firewalls or third-party firewalls. · Active-standby gateways failover. · Active-active gateways with redundancy for fast traffic failover. · Back-to-back connection between multiple fabrics. |
Active-standby ISPs | The firewall can have static routes to multiple ISPs with different priorities for redundancy. |
Data center interconnect (DCI) | · Each data center has its own H3C SeerEngine-DC controller. · Layer 2 connectivity between VMs across two data centers. · Layer 3 connectivity between VMs across two data centers with firewalls. · Layer 3 connectivity between VMs across two data centers without firewalls. · Border and Edge Device (ED) 2-in-1 on the same device: The Edge Device is used for DCI. · Individual Edge Device: The device acts only as an Edge Device. · The H3C Super Controller can manage multiple SeerEngine-DC controllers in multiple data centers and orchestrate the DCI. |
Cross-technology deployment | · H3C SeerEngine-DC and the H3C SeerAnalyzer can be installed on the same K8s nodes in a cluster as containers. · The controller features and analyzer features are integrated in the same Web console. |
Cross-domain deployment | · The H3C AD-DC data center network solution, AD-Campus campus network solution, and AD-WAN SD-WAN solution’s components can be installed on the same K8s nodes in a cluster as containers. · The features in the H3C AD-DC solution, AD-Campus solution and AD-WAN solution are integrated in the same Web console. |
Interoperability with OpenStack | · Interoperates with OpenStack to automatically provide network connectivity for OpenStack VMs. · Supports VM creation, VM deletion, and VM migration. · Synchronizes the OpenStack security group as the security policy on the H3C vPort for bare metal servers. · Displays the synchronization status between H3C AD-DC and the OpenStack platform. · Northbound configuration check: Checks configuration differences between the OpenStack platform and H3C AD-DC (in case any configuration has been removed or modified accidentally by administrators on the controller). · Northbound configuration synchronization: Synchronizes the differing configuration from the OpenStack platform to H3C AD-DC. |
VMware | Interoperability with VMware vCenter to automatically provide network connectivity for VMs. |
Microsoft system center | Interoperability with Microsoft System Center to automatically provide network connectivity for VMs. |
Red Hat | Interoperability with Red Hat to automatically provide network connectivity for VMs. |
Ansible | Interoperates with Ansible. After installing H3C’s plugin in Ansible, administrators can use Ansible to automatically configure the network for H3C AD-DC. |
Controller high availability | · Node-level controller failover: There is no traffic disruption when one controller node fails. The northbound service recovers rapidly when the leader controller node fails, and there is no service break when a backup node fails. · 2+1+1 controller cluster disaster tolerance (2 nodes in one data center, 1 node + 1 backup node in another data center). · In a 2+1+1 controller disaster tolerance deployment, when the data center with 2 nodes fails, administrators can quickly start the backup node to restore the controller cluster and take over the whole network. · 3+3+1 controller cluster disaster tolerance (three-node active controller cluster + three-node backup controller cluster + one-node arbitrator): One three-node controller cluster is deployed locally as the active controller, and one three-node controller cluster is deployed in a remote disaster tolerance site with real-time data synchronization from the active controller. When the active controller is down, the remote controller can switch over to become the active controller and maintain network services. · Container-level controller failover: H3C AD-DC is installed on each K8s node as a container in a pod. In a typical three-node controller deployment, when one controller container fails, the system reloads the container and recovers the controller services automatically. There is no packet loss for existing services. |
Configuration consistency audit | · Southbound configuration and data consistency audit: Audit the configuration difference between the controller and the switches. Synchronize the selected differences to the switches. · Northbound configuration and data consistency audit: Audit the configuration difference between the OpenStack cloud platform and H3C AD-DC. Synchronize the selected difference to the controller. · Configuration and data consistency audit between controller cluster nodes. |
Business availability when the controller is offline | · No impact on existing services when the controller is offline. · New VMs can be created to have network access when the controller is offline: If the service has been created on the controller before and the “VXLAN Service Preconfiguration” option is enabled. |
MLAG GIR | Graceful Insertion and Removal (GIR) for DRNI (MLAG) devices. Put one device into maintenance mode to migrate all of its traffic to the other DRNI (MLAG) member, then upgrade the device with zero packet loss. |
Part | Description |
LIS-SeerEngine-DC-vBRAS-VNF-VAR | H3C AD-DC Management License for vBRAS VMs, 1 VM |
LIS-SeerEngine-DC-SC-VAR | H3C AD-DC Management License for Virtual Services, 1 Node |
LIS-SeerEngine-DC-VA-SWF-VAR | H3C AD-DC Management License for Value-Added Services, 1 Modular Switch |
LIS-SeerEngine-DC-VA-SWB-VAR | H3C AD-DC Management License for Value-Added Services, 1 Port-Fixed Switch |
LIS-SeerEngine-DC-BAS1 | H3C AD-DC Management and Control Software License, 1 Server Node |
LIS-SeerEngine-DC-VSW-VAR | H3C AD-DC Management and Control License for Virtual Switches, 1 vSwitch |
LIS-ADNET-FCAPS-1 | H3C AD-NET Management License for Basic Network Management, 1 Device |
LIS-AD-DC-PSWF-MC-VAR | H3C AD-DC Management and Control License for Modular Switches, 1 Device |
LIS-AD-DC-PSWB-MC-VAR | H3C AD-DC Management and Control License for Fixed-Port Switches, 1 Device |
LIS-SeerAnalyzer-DC | H3C AD-DC Intelligent Analytics Software License |
LIS-SeerAnalyzer-Collector-I | H3C AD-DC Intelligent Analytics License for Type I Collector |
LIS-SeerAnalyzer-DC-Analyzer | H3C AD-DC Intelligent Analytics Software License, 1 Server Node |
LIS-SeerAnalyzer-DC-PSWFA-VAR | H3C AD-DC Intelligent Analytics License for Flow Analytics, 1 Switch |
LIS-SeerAnalyzer-DC-PSW-VAR | H3C AD-DC Intelligent Analytics License for Fixed-Port Switches, 1 Device |
LIS-SeerAnalyzer-SeerFabric-VAR | H3C AD-DC Intelligent Analytics License for Lossless Network Analysis, 1 Switch |
LIS-SeerAnalyzer-DC-PSW-M-VAR | H3C AD-DC Intelligent Analytics License for Modular Switches, 1 Device |
Click the following links for more information about AD-DC:
AD-DC configuration video: Click Here
AD-DC technical white paper and topics: Click Here
AD-DC configuration guide: Click Here
AD-DC server hardware requirement: Click Here
AD-DC SeerEngine-DC online help: Click Here
AD-DC interoperability guides: Click Here
