Specifications for LiveNX
LiveNX components can be deployed via the following methods: Virtual, Physical, and Cloud. The Virtual Deployment Specifications, as well as the Cisco and Multi-Vendor Device Support lists, are provided below.
If you are interested in deploying LiveNX in a Physical, Cloud (Azure, AWS, & Google Cloud), Hyper-V, or KVM environment, please contact LiveAction sales (email@example.com) for the specifications appropriate for those environments and your needs.
Virtual Deployment Specifications
The LiveNX Server is primarily deployed as a VMware .OVA appliance and is fully operational right out of the box. The server operating system runs on a Linux (TinyCore or Ubuntu) platform.
Server Platform Specifications:
- VMware ESXi v5.0 or higher – VMware Hardware Version 8 (vmx-8)
- Network Hardware – at least two physical NICs on the ESXi host
  – Supports up to 10 Gbps
  – Virtual NICs on the OVA use the E1000 adapter
LiveNX Network Protocol Requirements
Below is a list of required network protocols for normal operation of the LiveNX platform. This can be used as the basis for any firewall rules required.
| Protocol | Port | Direction | Purpose |
| --- | --- | --- | --- |
| TCP | 7000 | Java Client to NX Server | Java Client access to the platform |
| TCP | 443 | Web browser to NX Server | User access to the platform Web UI |
| TCP | 7026 | Server to Node (bidirectional) | Server <-> Node communication |
| UDP | 2055 | Network devices to Nodes | NetFlow export |
| UDP | 161 | NX Node/Server to network devices | SNMP polling of network devices |

Table 2: LiveNX Protocol/Port Requirements
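When writing firewall rules from Table 2, it can help to keep the port requirements in a structured form and verify reachability from the relevant hosts. The sketch below is illustrative, not part of the LiveNX product; hostnames and the helper names are assumptions, and only TCP ports can be verified with a plain connect.

```python
import socket

# Required LiveNX ports from Table 2: (protocol, port, description).
REQUIRED_PORTS = [
    ("tcp", 7000, "Java Client to NX Server"),
    ("tcp", 443, "Web browser to NX Server"),
    ("tcp", 7026, "Server <-> Node communication"),
    ("udp", 2055, "NetFlow export to Node"),
    ("udp", 161, "SNMP polling of network devices"),
]

def check_tcp_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def tcp_ports_to_check() -> list[int]:
    """TCP ports only; UDP reachability cannot be confirmed with a bare connect."""
    return [port for proto, port, _ in REQUIRED_PORTS if proto == "tcp"]
```

A pre-deployment script could loop over `tcp_ports_to_check()` against the NX Server address and flag any port the firewall blocks.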
LiveNX requires READ-ONLY SNMP access to network devices. The platform supports both versions 2c and 3; the table below details the configuration required on Cisco devices to enable SNMP v2c or v3.
| Version | Cisco Configuration |
| --- | --- |
| v2c | snmp-server community |
| v3 | snmp-server group |

Table 3: SNMP Config (Cisco)
LiveNX Semantic Data Requirements
In addition to the information and prerequisites required for LiveNX to discover devices and add them to the inventory, further data is required: semantic data. Network/device semantics allow LiveNX to provide maximum insight into the network, both in visualization and in reporting.
| Semantic Data | Purpose |
| --- | --- |
| Sites to which devices belong | Allows devices to be associated with sites for visualization within the Java Client |
| Address of the sites (street, city, postcode, country) | Allows sites to be represented on a map in the Web UI |
| IP subnets associated with the respective sites | Allows LiveNX to run site-to-site reports based upon IP address/subnet |
| Any differentiators between circuits, e.g. MPLS/INET or service provider | A tag allowing LiveNX to build reports based upon circuit type |
| Bandwidth capacity of WAN-facing interfaces/circuits | Allows LiveNX to calculate utilization rates per WAN interface (95th/99th percentiles) |

Table 4: Semantic Data Requirements
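The semantic data in Table 4 can be pictured as a simple per-site record, with WAN capacity enabling the 95th/99th-percentile utilization figures mentioned above. This is a minimal sketch under assumed names (`Site`, `percentile`, `utilization_pct` are all hypothetical), using the common nearest-rank percentile method; it is not LiveNX's internal model.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Site:
    """Illustrative container for the semantic data listed in Table 4."""
    name: str
    address: str                                      # street, city, postcode, country
    subnets: list[str] = field(default_factory=list)  # e.g. "10.1.0.0/24"
    circuit_tag: str = ""                             # e.g. "MPLS" or "INET"
    wan_capacity_mbps: float = 0.0                    # WAN-facing circuit bandwidth

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a non-empty sample list."""
    ranked = sorted(samples)
    rank = math.ceil(pct / 100 * len(ranked))
    return ranked[rank - 1]

def utilization_pct(samples_mbps: list[float], site: Site, pct: float = 95) -> float:
    """Utilization at the given percentile, as a percentage of WAN capacity."""
    return 100 * percentile(samples_mbps, pct) / site.wan_capacity_mbps
```

For example, a site with a 200 Mbps circuit whose 95th-percentile traffic sample is 100 Mbps reports 50% utilization at that percentile.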
The collection of NetFlow data is a key component of the LiveNX platform: it allows users to both visualize and report on traffic through the network. LiveNX supports several different 'flavors' of NetFlow: IPFIX, Version 5, Version 9, and Flexible NetFlow (FNF).
Specifications for LiveSP
LiveSP Deployment Requirements
| Component | Requirement |
| --- | --- |
| Hardware | LiveSP can be deployed on a single server or a distributed infrastructure. I/O is optimized for random data access; data storage is implemented on a physical machine with SSDs. The other components can be virtualized. |
| Operating System | The currently supported and validated Linux distributions are Debian, Red Hat, and Ubuntu, with kernel version greater than 3.10. Kernel version 3.16 onwards is recommended for higher-performance data access. |
| Browser | Service providers, administrators, operations teams, and end customers access LiveSP through supported web browsers: IE 11, Mozilla Firefox (latest), Google Chrome (latest), and Safari (latest). |
LiveSP Sizing Guide
| Component | Sizing Tool * |
| --- | --- |
| Link (bandwidth) | Bandwidth = Average flow size * Flow count at this max traffic * Predicted max aggregated traffic. A typical enterprise network with 10,000 live interfaces and a static template = 200 Mbps. |
| Hardware (storage) | Storage = Client profiles * Data retention rule * Predicted average aggregated traffic. |

* The LiveSP sizing tool is designed to help size link and storage. It is based on observations of large networks, but results may vary with traffic profile. Please contact LiveSP support for a detailed analysis.
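The two sizing formulas above can be written down literally as functions. This is a sketch under a notable assumption: the source does not state the units of each factor, so callers must supply mutually consistent units, and the function names are hypothetical. For precise sizing, the note above directs you to the LiveSP sizing tool and support.

```python
def link_bandwidth(avg_flow_size: float,
                   flow_count_at_max: float,
                   predicted_max_aggregated_traffic: float) -> float:
    """Literal form of the Link sizing formula:
    Bandwidth = Average flow size * Flow count at this max traffic
              * Predicted max aggregated traffic.
    Units are unspecified in the source and must be chosen consistently."""
    return avg_flow_size * flow_count_at_max * predicted_max_aggregated_traffic

def storage_required(client_profiles: float,
                     data_retention: float,
                     predicted_avg_aggregated_traffic: float) -> float:
    """Literal form of the Storage sizing formula:
    Storage = Client profiles * Data retention rule
            * Predicted average aggregated traffic."""
    return client_profiles * data_retention * predicted_avg_aggregated_traffic
```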
Flexible NetFlow (FNF), v9 (IPv4/IPv6 compatible): Version 9 brought FNF capability, which makes NetFlow a highly versatile protocol. Its flexibility makes it particularly relevant for complex reporting and heterogeneous data.
- Flexible key-field aggregation
- Variable number of data fields
- Unidirectional or bidirectional
- Sampled or unsampled
- Multi-vendor (430 standardized fields, thousands of vendor-specific fields)
- Exports that may be aggregated and/or synchronized, or not
IPFIX ("IP Flow Information eXport"), also referred to as NFv10, is the industry-standardized version of NetFlow. It builds on NFv9 for most features and brings additional flexibility (variable-length fields, sub-application extracted fields, options data).
Note: NetFlow version 9 and IPFIX are the export protocols of choice for AVC, because they can accommodate the flexible record formats and multiple records required by the Flexible NetFlow infrastructure. IPFIX is recommended.
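To make the v9 export format concrete, the sketch below parses the fixed 20-byte NetFlow v9 packet header defined in RFC 3954 (version, record count, sysUptime, export timestamp, sequence number, source ID, all in network byte order). It is a minimal illustration, not LiveNX/LiveSP collector code, and it stops before the FlowSets where FNF's flexible templates actually live.

```python
import struct
from typing import NamedTuple

class V9Header(NamedTuple):
    version: int     # always 9 for NetFlow v9
    count: int       # number of FlowSet records in this packet
    sys_uptime: int  # milliseconds since the exporter booted
    unix_secs: int   # export timestamp, seconds since the epoch
    sequence: int    # packet sequence counter
    source_id: int   # identifies the exporting observation domain

def parse_v9_header(packet: bytes) -> V9Header:
    """Parse the fixed 20-byte NetFlow v9 packet header (network byte order)."""
    if len(packet) < 20:
        raise ValueError("packet shorter than the 20-byte v9 header")
    fields = struct.unpack("!HHIIII", packet[:20])
    if fields[0] != 9:
        raise ValueError(f"not a NetFlow v9 packet (version={fields[0]})")
    return V9Header(*fields)
```

Template and data FlowSets follow the header; parsing them requires tracking the templates each exporter announces, which is exactly the flexibility FNF and IPFIX add over NetFlow v5's fixed record.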
If service providers choose centralized collection, they must size the collection link properly. The link sizing recommendation depends on:
- IWAN features enabled: more features = more data to export.
- Bandwidth repartition per site: a headquarters with 500 employees will generate more variety (= more export) than a small office with 20 employees.
- Traffic distribution over time: the CPEs do not reach their maximum traffic at the same time.
A typical 10,000-CPE enterprise SP IWAN network requires a 200 Mbps collection link.
Collection Link Max Speed = Average Flow Size * Predicted Max Aggregated Traffic * Flow Count at this Max Traffic