LiveNX 7 Specifications


LiveNX 7.x components can be deployed via the following method: OVA packages

Component                      Virtual Appliance Option
Server                         All-in-one Server OVA
Client                         Client software for Mac OS, Windows 32-bit and 64-bit
Node (optional)                Node OVA
LiveSensor (optional)          LiveSensor OVA
LiveAnalytics Node (optional)  LiveAnalytics OVA


LiveNX Server is primarily deployed on ESXi and is fully operational right out of the box. The Server operating system runs on a Linux platform.


Custom Deployment (proof of concept; non-server installations such as laptops and desktops)
  • Fewer than 25 devices or fewer than 25k flows/sec
  • 8 vCPU (Xeon or i7), 16 GB RAM, LiveNX Server max heap size 8 GB
  • 500 GB data disk*

Small Deployment (server environments with Hyper-V Manager / VMware ESXi / hypervisor)
  • Fewer than 100 devices or fewer than 100k flows/sec
  • 8 vCPU (Xeon or i7), 16 GB RAM, LiveNX Server max heap size 8 GB
  • 2 TB data disk*

Medium Deployment (server environments with Hyper-V Manager / VMware ESXi / hypervisor)
  • 100 to 500 devices or fewer than 200k flows/sec
  • 16 vCPU (Xeon or i7), 32 GB RAM, LiveNX Server max heap size 16 GB
  • 4 TB data disk*

Large Deployment (server environments with Hyper-V Manager / VMware ESXi / hypervisor)
  • 500 to 1000 devices or more than 200k flows/sec
  • 32 vCPU (Xeon or i7), 64 GB RAM, LiveNX Server max heap size 31 GB
  • 8 TB data disk*

* Data disk size is a minimum recommendation.
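The sizing table above can be summarized as a simple lookup. The sketch below is illustrative only; the tier thresholds mirror the table, but the helper function itself is not part of the product.

```python
# Sketch: pick a LiveNX deployment tier from the sizing table above.
# Thresholds mirror the table; this helper is illustrative, not vendor tooling.

def recommend_tier(devices: int, flows_per_sec: int) -> str:
    """Return the smallest deployment tier that covers the given load."""
    if devices < 25 and flows_per_sec < 25_000:
        return "Custom (8 vCPU, 16 GB RAM, 8 GB heap, 500 GB disk)"
    if devices < 100 and flows_per_sec < 100_000:
        return "Small (8 vCPU, 16 GB RAM, 8 GB heap, 2 TB disk)"
    if devices <= 500 and flows_per_sec < 200_000:
        return "Medium (16 vCPU, 32 GB RAM, 16 GB heap, 4 TB disk)"
    return "Large (32 vCPU, 64 GB RAM, 31 GB heap, 8 TB disk)"

print(recommend_tier(80, 60_000))   # a Small-deployment workload
```

Note that both conditions (device count and flow rate) must fit within a tier; exceeding either one pushes the recommendation to the next tier up.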

Each LiveNX Node supports approximately 76 TB of disk space. The recommended approach is to add disks in 10 TB increments.
Server IOPS recommendation for LiveNX 7.1: 1,000 IOPS read and 4,500 IOPS write.
Virtual NICs on the OVA use the E1000 adapter.

The client application can be launched via Web Start directly from the LiveNX Web Server or can be installed
as a 64-bit client application for Windows or Mac. For large scale deployments, the client application installer
is recommended as it can scale and perform to higher capacity than the Web Start versions.

Operating System Specification

  • Windows 7, 8, or 10, or Mac OS X (64-bit OS)
  • 4 cores
  • 8 GB RAM
  • Web browser: IE 11 or higher, Firefox, Chrome, or Safari

  • Java JDK `1.8.0_172`
  • NodeJS `v8.19.4`
  • Influx DB `1.5.3`
  • MongoDB `3.6.3`


Cisco ISR Series Routers: 800, 900, 1700, 1800, 1900, 2600, 2600XM, 2800, 2900, 3600, 3700, 3800, 3900, 4200, 4300, 4400, 4500, 7200, 7600*
Cisco ASR 1001x and 1002x Series Routers; CSR 1000V*
Cisco Catalyst Series Switches: 2900, 3650, 3850, 4500-X, 6500, 6800, and 9000
Cisco Nexus Switches: Nexus 3000, 6000, 7000, and 9000 Series
Cisco ASR 9000 Series Routers
Cisco NetFlow Generation Appliance
Cisco AnyConnect Network Visibility Module on Windows and Mac OS X platforms
Cisco vEdge Devices
Cisco ASA 5500 Series Firewalls
Cisco Meraki MX Security Appliance

* Limited LiveNX QoS Monitor support on Layer 3-routable interfaces and VLANs, depending upon Cisco hardware capabilities.

* Recommended IOS versions are 12.3 or higher, or 15.0 or higher, for use with the software (IOS XE 2.6.0 or higher for the ASR 1000 series). Earlier IOS versions may also work but are not officially supported. General-release IOS versions are recommended, although early- and limited-release versions will also work with LiveNX.


Adtran NetVanta Series Routers

Extreme Network Switches

Ntop nProbe

Alcatel-Lucent Routers

Gigamon GigaSMART

Palo Alto Networks Firewalls

Brocade Series Routers

Hewlett-Packard Enterprise Procurve Series Switches

Riverbed SteelHead WAN Optimization Controllers

Barracuda Firewall

Ixia’s Network Visibility Solution

Silver Peak WAN Optimization Controllers

Checkpoint Firewall

Juniper MX Series Routers

Sophos Firewall

Citrix NetScaler Load Balancer

Ziften ZFlow

LiveSP Specifications

LiveSP Deployment Requirement

Component          Requirement
Hardware           LiveSP can be deployed on a single server or on a distributed infrastructure. I/O is optimized for random data access, so data storage should be implemented on the physical machine with SSDs; the other components can be virtualized.
Operating System   The currently supported and validated Linux distributions are Debian, Red Hat, and Ubuntu, with a kernel version greater than 3.10. Kernel version 3.16 onwards is recommended for higher-performance data access.
Browser            Service providers, administrators, operations teams, and end customers access LiveSP through supported web browsers: IE 11, Mozilla Firefox (latest), Google Chrome (latest), and Safari (latest).

LiveSP Sizing Guide

Component            Sizing Tool*
Link (Bandwidth)     Bandwidth = Average flow size × Flow count at max traffic × Predicted max aggregated traffic.
                     Typical enterprise network with 10,000 live interfaces and a static template: ~200 Mbps.
Hardware (Storage)   Storage = Client profiles × Data retention rule × Predicted average aggregated traffic.

Virtual environment:
  Proxy:    1 × [4 CPU, 4 GB RAM, 1 SAS 20 GB]
  Collect:  2 × [8 CPU, 16 GB RAM, 1 SAS 1 TB]
  Services: 4 × [8 CPU, 8 GB RAM, 1 SAS 200 GB]
Physical environment:
  2 × [20 CPU, 128 GB RAM, 3 TB SSD disk / 3 TB backup]


* The LiveSP sizing tool is designed to help size link and storage. It is based on observations of large networks, but results may vary with the traffic profile. Please contact LiveSP support for a detailed analysis.
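The storage formula above can be applied with concrete numbers. The sketch below is a worked example only; the per-profile data volume and retention period are assumptions for demonstration, not vendor guidance.

```python
# Illustrative use of the LiveSP storage formula:
#   Storage = client profiles * data retention * average data volume per profile.
# All input values below are assumptions, not measured figures.

def storage_bytes(bytes_per_profile_per_day: float,
                  retention_days: int,
                  client_profiles: int) -> float:
    """Total raw storage needed for the given retention policy."""
    return client_profiles * retention_days * bytes_per_profile_per_day

# e.g. 10,000 interfaces, 90-day retention, ~5 MB of aggregated
# data per interface per day (assumed):
total = storage_bytes(5e6, 90, 10_000)
print(f"{total / 1e12:.1f} TB")  # → 4.5 TB
```

In practice the result should be padded for indexes, backups, and growth, which is why the physical-environment spec above pairs the data disk with an equally sized backup volume.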

Flow Information

Flexible NetFlow (FNF), NetFlow v9 (IPv4 and IPv6 compatible): Version 9 introduced the FNF capability, which makes NetFlow a highly versatile protocol. Its flexibility makes it particularly relevant for complex reporting and heterogeneous data.
  • Flexible key-field aggregation
  • Variable number of data fields
  • Unidirectional or bidirectional flows
  • Sampled or unsampled
  • Multi-vendor (430 standardized fields, thousands of vendor-specific fields)
  • Exports optionally aggregated and/or synchronized
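The flexibility described above lives in the templated record body; the packet header itself is fixed. As a minimal sketch, the 20-byte NetFlow v9 export packet header (per RFC 3954) can be parsed as follows; the sample bytes are fabricated for illustration.

```python
# Minimal sketch: parse the fixed 20-byte NetFlow v9 export packet header
# (RFC 3954). Field layout is standard; the sample packet is fabricated.
import struct

def parse_v9_header(packet: bytes) -> dict:
    """Unpack version, record count, uptime, timestamp, sequence, source ID."""
    version, count, uptime, secs, seq, source_id = struct.unpack(
        "!HHIIII", packet[:20])
    return {"version": version, "count": count, "sys_uptime_ms": uptime,
            "unix_secs": secs, "sequence": seq, "source_id": source_id}

# Fabricated sample: version 9, 4 FlowSets, arbitrary uptime/timestamp.
sample = struct.pack("!HHIIII", 9, 4, 123_456, 1_700_000_000, 42, 1)
hdr = parse_v9_header(sample)
print(hdr["version"], hdr["count"])  # → 9 4
```

Everything after this header is template-driven: a collector must first receive the template FlowSets before it can decode the data records, which is what makes the format flexible.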


IPFIX (“IP Flow Information eXport”): Also referred to as NetFlow v10, IPFIX is the industry-standardized version of NetFlow. It builds on NetFlow v9 for most features and brings additional flexibility (variable-length fields, sub-application extracted fields, options data).

Note: NetFlow version 9 and IPFIX are the export protocols of choice for AVC, because they can accommodate the flexible record formats and multiple records required by the Flexible NetFlow infrastructure. IPFIX is recommended.

If service providers choose a centralized collection, they must size the collection link properly. Link sizing recommendation depends on:

  • IWAN features enabled: more features mean more data to export.
  • Bandwidth repartition per site: a headquarters with 500 employees will have more traffic variety (hence more export) than a small office with 20 employees.
  • Traffic distribution over time: the CPEs do not reach their maximum traffic at the same time.


A typical 10,000-CPE enterprise SP IWAN network requires a 200 Mbps collection link.

Collection Link Max Speed = Average Flow Size * Predicted Max Aggregated Traffic * Flow Count at this Max Traffic
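Plugging assumed numbers into the collection-link formula above reproduces the 200 Mbps figure. The flow-record size and peak export rate below are illustrative assumptions, not measured values.

```python
# Worked example of the collection-link formula above.
# Input values are illustrative assumptions, not measurements.

AVG_FLOW_RECORD_BYTES = 100     # assumed size of one exported flow record
PEAK_FLOWS_PER_SEC = 250_000    # assumed aggregate export rate at peak traffic

# bytes/record * 8 bits/byte * records/sec = bits/sec of export traffic
link_bps = AVG_FLOW_RECORD_BYTES * 8 * PEAK_FLOWS_PER_SEC
print(f"{link_bps / 1e6:.0f} Mbps")  # → 200 Mbps
```

Because CPEs do not peak simultaneously, the aggregated peak is lower than the sum of per-site peaks; the formula uses the predicted aggregated maximum, not the simple sum.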