Specifications

LiveNX Specifications

LiveNX Deployment

Deployment Options

LiveNX components can be deployed via the following methods: Virtual, Physical, and Cloud. The Virtual Deployment Specifications, as well as the Cisco and Multi-Vendor Device Support lists, are provided below.

If you are interested in deploying LiveNX in a Physical, Cloud (Azure, AWS, & Google Cloud), Hyper-V, or KVM environment, please contact LiveAction sales (sales@liveaction.com) for the specifications appropriate for those environments and your needs.

Virtual Deployment Specifications

The LiveNX Server is primarily deployed as a VMware .OVA appliance and is fully operational right out of the box. The server operating system runs on a Linux (TinyCore or Ubuntu) platform.

Server Platform Specifications:

  • VMware ESXi v5.0 or higher – VMware Hardware Version 8 (vmx-8)
  • Network Hardware – At least two physical NICs on ESXi
    – Support for up to 10 Gbps
    – Virtual NICs on the OVA use the E1000 adapter

LiveNX Network Protocol Requirements

Below is a list of required network protocols for normal operation of the LiveNX platform. This can be used as the basis for any firewall rules required.

Protocol   Port Number   Direction                            Description
TCP        7000          Java Client to NX Server             Java Client access to platform
TCP        443           Web browser to NX Server             User access to Web UI of platform
TCP        7026          Server to Node (bidirectional)       Server <-> Node communication
UDP        2055          Network devices to Nodes             NetFlow export
UDP        161           NX Node/Server to network devices    SNMP polling of network devices

Table 2: LiveNX Protocol/Port Requirements
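
As an illustration, the table above can be translated directly into firewall or router ACL rules. The following sketch is a Cisco extended ACL permitting this traffic toward a LiveNX Server/Node; the ACL name and the server address (203.0.113.10) are hypothetical placeholders, not values mandated by LiveNX.

  ! Hypothetical ACL permitting LiveNX traffic toward a server at 203.0.113.10
  ip access-list extended LIVENX-INBOUND
   remark Java Client access to platform
   permit tcp any host 203.0.113.10 eq 7000
   remark User access to Web UI of platform
   permit tcp any host 203.0.113.10 eq 443
   remark Server <-> Node communication
   permit tcp any host 203.0.113.10 eq 7026
   remark NetFlow export from network devices
   permit udp any host 203.0.113.10 eq 2055
   remark SNMP responses returning to the NX Node/Server
   permit udp any eq 161 host 203.0.113.10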

LiveNX requires read-only SNMP access to network devices. The platform supports both SNMP versions 2c and 3; the table below details the configuration required on Cisco devices to enable SNMP v2c or v3.

Version   Command
v2c       snmp-server community <community-string> RO
v3        snmp-server group <group-name> v3 auth read <view-name> access <acl>
          snmp-server user <user-name> <group-name> v3 auth md5 <auth-password> priv aes 128 <priv-password>

Table 3: SNMP Config (Cisco)
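
For example, a complete configuration on a Cisco device might look like the following; the community string, view, group, user, ACL number, and passwords are all hypothetical placeholders to be replaced with site-specific values.

  ! Hypothetical v2c example: read-only community string
  snmp-server community LIVENX-RO RO

  ! Hypothetical v3 example: read-only view, group, and user
  ! with MD5 authentication and AES-128 privacy
  snmp-server view LIVENX-VIEW iso included
  snmp-server group LIVENX-GROUP v3 auth read LIVENX-VIEW access 99
  snmp-server user livenx-poller LIVENX-GROUP v3 auth md5 AuthPass123 priv aes 128 PrivPass123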

LiveNX Semantic Data Requirements

In addition to the information and prerequisites required for LiveNX to discover devices and add them to the inventory, one further category of data is required: semantic data. Network/device semantics allow LiveNX to provide maximum insight into the network, both in terms of visualization and reporting.

Semantic Data                                             Purpose
Sites to which devices belong                             Allows devices to be associated with sites for visualization within the Java Client
Address of the sites (street, city, postcode, country)    Allows sites to be represented on a map in the Web UI
IP subnets associated with the respective sites           Allows LiveNX to run site-to-site reports based upon IP address/subnet
Differentiators between circuits, e.g. MPLS/INET          A tag allowing LiveNX to build reports based on circuit type
or Service Provider
Bandwidth capacity of WAN-facing interfaces/circuits      Allows LiveNX to calculate utilization rates per WAN interface (95th/99th percentiles)

Table 4: Semantic Data Requirements

The collection of NetFlow data is a key component of the LiveNX platform; it allows users to both visualize and report upon traffic through the network. LiveNX supports several different ‘flavors’ of NetFlow: IPFIX, Version 5, Version 9, and Flexible NetFlow (FNF).
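
As a sketch of what this looks like on a Cisco device (the record/exporter/monitor names, interface names, and the collector address 192.0.2.10 are hypothetical), a minimal Flexible NetFlow configuration exporting to a LiveNX Node on UDP 2055 might be:

  ! Define which key and data fields to export (flexible key-field aggregation)
  flow record LIVENX-RECORD
   match ipv4 source address
   match ipv4 destination address
   match ipv4 protocol
   match transport source-port
   match transport destination-port
   collect counter bytes long
   collect counter packets long

  ! Point the export at the LiveNX Node (UDP 2055, per Table 2)
  flow exporter LIVENX-EXPORTER
   destination 192.0.2.10
   source GigabitEthernet0/0
   transport udp 2055
   export-protocol netflow-v9

  ! Bind the record and exporter, then apply to a monitored interface
  flow monitor LIVENX-MONITOR
   record LIVENX-RECORD
   exporter LIVENX-EXPORTER

  interface GigabitEthernet0/1
   ip flow monitor LIVENX-MONITOR input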

LiveSP Specifications

LiveSP Deployment Requirements

Component          Requirement
Hardware           LiveSP can be deployed on a single server or on a distributed
                   infrastructure. I/O is optimized for random data access; data storage
                   is implemented on the physical machine with SSDs. The other components
                   can be virtualized.
Operating System   The currently supported and validated Linux distributions are Debian,
                   Red Hat, and Ubuntu, with a kernel version greater than 3.10. Kernel
                   version 3.16 onwards is recommended for higher-performance data access.
Browser            Service providers, administrators, operations teams, and end customers
                   access LiveSP through supported web browsers: IE 11, Mozilla Firefox
                   (latest), Google Chrome (latest), and Safari (latest).

LiveSP Sizing Guide

Component              Sizing Tool *
Link (Bandwidth)       Bandwidth = Average flow size * Flow count at this max traffic *
                       Predicted max aggregated traffic.
                       Typical enterprise network with 10,000 live interfaces, static
                       template = 200 Mbps.
Hardware (Storage)     Storage = Client profiles * Data retention rule * Predicted
                       average aggregated traffic.
Virtual Environment    Proxy: 1 x [4 CPU, 4 GB RAM, 1 SAS 20 GB]
                       Collect: 2 x [8 CPU, 16 GB RAM, SAS 1 TB]
                       Services: 4 x [8 CPU, 8 GB RAM, 1 SAS 200 GB]
Physical Environment   2 x [20 CPU, 128 GB RAM, 3 TB SSD disk / 3 TB backup]

* The LiveSP sizing tool is designed to help size Link and Storage. It is based on observations of large networks, but results can vary with the traffic profile. Please contact LiveSP support for a detailed analysis.
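
One way to read the Link formula is that the flow count is evaluated at the predicted maximum aggregated traffic; with hypothetical figures chosen only to reproduce the 200 Mbps example above: if each exported flow record averages 1,000 bits and the network generates 200,000 flow records per second at its predicted maximum aggregated traffic, the collection link must sustain roughly

  1,000 bits/record * 200,000 records/s = 200 Mbps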

Flow Information

Flexible NetFlow (FNF, NetFlow v9; IPv4/IPv6 compatible): Version 9 introduced the FNF capability, which makes NetFlow a highly versatile protocol. Its flexibility makes it particularly relevant for complex reporting and heterogeneous data:
  • Flexible key-field aggregation
  • Variable number of data fields
  • Unidirectional or bidirectional flows
  • Sampled or unsampled
  • Multi-vendor (430 standardized fields, thousands of vendor-specific fields)
  • Exports may be aggregated and/or synchronized

IPFIX (“IP Flow Information eXport”): also referred to as NFv10, IPFIX is the industry-standardized version of NetFlow. It builds on NFv9 for most of its features and brings additional flexibility (variable-length fields, sub-application extracted fields, options data).

Note: NetFlow version 9 and IPFIX are the export protocols of choice for AVC, because they can accommodate the flexible record formats and multiple records required by the Flexible NetFlow infrastructure. IPFIX is recommended.
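
Where the platform supports it (e.g., recent Cisco IOS-XE releases), switching an existing Flexible NetFlow exporter from v9 to IPFIX is typically a one-line change; reusing the hypothetical exporter from the LiveNX section:

  flow exporter LIVENX-EXPORTER
   ! Export IPFIX instead of NetFlow v9
   export-protocol ipfix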

If service providers choose a centralized collection, they must size the collection link properly. Link sizing recommendations depend on:

  • IWAN features enabled: More features = more data to export.
  • Bandwidth repartition per site: a headquarters with 500 employees will have more traffic variety (and therefore more export) than a small office with 20 employees.
  • The traffic time distribution: the CPEs do not all reach their maximum traffic at the same time.

A typical 10,000-CPE enterprise SP IWAN network requires a 200 Mbps collection link:

Collection Link Max Speed = Average Flow Size * Predicted Max Aggregated Traffic * Flow Count at this Max Traffic