Using NetFlow with nProbe for ntopng

This blog post is about using NetFlow for sending network traffic statistics to an nProbe collector which forwards the flows to the network analyzer ntopng. It refers to my blog post about installing ntopng on a Linux machine. I am sending the NetFlow packets from a Palo Alto Networks firewall.

My current ntopng installation uses a dedicated monitoring ethernet port (mirror port) in order to “see” everything that happens in that net. This has the major disadvantage that it only gets packets from directly connected layer 2 networks and VLANs. NetFlow, on the other hand, can be used to send traffic statistics from different locations to a NetFlow collector, in this case the tool nProbe. This single flow collector can receive flows from different subnets, routers/firewalls, and even VPN tunnel interfaces. However, it turned out that the “real-time” capabilities of NetFlow are limited: flow records are only exported after certain time or volume thresholds are reached, so they never give a true real-time view of the network. NetFlow should be used for statistics, but not for real-time troubleshooting.

Some Preliminary Notes

I am using an Ubuntu 14.04.5 LTS (GNU/Linux 3.16.0-77-generic x86_64) server. At the time of writing, nProbe was at version 7.4.160802 while ntopng was at version 2.4.160802. Furthermore, note that nProbe requires a license.

For general information about NetFlow use Wikipedia or Cisco or RFC 3954. For the other tools, use the official web sites: nProbe and ntopng. The nProbe site offers a detailed documentation PDF. A similar tutorial for installing nProbe is this one.

Installation of nProbe

(Since I already showed how to install ntopng, I will only show how to use nProbe here.) The stable builds for nProbe and ntopng are listed here. That is, to install nProbe, I used the following commands:
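The install commands themselves did not survive editing; on Ubuntu 14.04 the usual route via the ntop stable repository looks roughly like this (the package URL is an assumption — double-check it against the stable-builds page):

```
wget http://apt-stable.ntop.org/14.04/all/apt-ntop-stable.deb
sudo dpkg -i apt-ntop-stable.deb
sudo apt-get clean all
sudo apt-get update
sudo apt-get install nprobe
```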

Since I want to receive NetFlow packets and forward them to ntopng, nProbe must run in Collector Mode. That is, I am using the following configuration file:

with these entries:
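The file path and entries were not preserved here; based on the nProbe documentation, a collector-mode configuration in /etc/nprobe/nprobe-none.conf typically looks like the following (the collector port and ZMQ endpoint are assumptions that match the netstat output shown later in this post):

```
# Listen for incoming NetFlow on UDP 2055 and hand flows to ntopng via ZMQ
--collector-port=2055
--zmq="tcp://127.0.0.1:5556"
# No local packet capture and no NetFlow export of its own
-i=none
-n=none
```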

Note the naming of the config file: “nprobe-none.conf“. This is mandatory per the nProbe documentation: “When nProbe is used in probe mode it is not bound to any interface as its job is to collect NetFlow from some other device. In this case the configuration file to be created is: nprobe-none.conf.” (To my mind, this is a typo, because it should read “When nProbe is NOT used in probe mode…”. However, it works.)

Furthermore, an empty “start” file is needed to tell the init process to use this configuration file:
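The elided command simply creates the empty marker file next to the configuration (the exact path is an assumption based on the ntop init-script layout, which looks for a .start file matching the .conf name):

```
sudo touch /etc/nprobe/nprobe-none.start
```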

After starting the service with sudo service nprobe start, ntopng must be configured to use this nProbe instance. Open the configuration file:

and add the following interface (= localhost):
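The entry itself was stripped from the post; reconstructed from the ports in the netstat output below (the localhost ZMQ endpoint is an assumption), it is a single interface line in /etc/ntopng/ntopng.conf:

```
-i=tcp://127.0.0.1:5556
```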

Finally, restart the ntopng process: sudo service ntopng restart.

A netstat view should indicate the listening 2055 UDP port for nProbe, the 5556 TCP port for the connection between nProbe and ntopng, as well as the common 3000 TCP port from the ntopng WebGUI:
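A sketch of that check (process names and ports as described above; your exact output will differ):

```
sudo netstat -tulpen | egrep 'nprobe|ntopng'
```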

Since all services are now configured within configuration files that are referenced in the init scripts, they are started automatically after a system reboot. Great.

Palo Alto NetFlow

I am using a Palo Alto Networks firewall (version 7.1.3) to send NetFlow statistics to the nProbe collector. (More information about NetFlow on Palo.) This is configured by adding a NetFlow Server Profile and referencing that profile on all relevant network interfaces, such as:

I am using fairly aggressive values for the Template Refresh Rate as well as the Active Timeout. On interfaces with large amounts of traffic, higher values are probably better.

A small tcpdump capture shows some samples of the NetFlow packets sent by the Palo Alto. The following Wireshark screenshots show a NetFlow template as well as a sample flow:
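A capture like the following (the interface name is an assumption; 2055 is the collector port nProbe listens on above) shows the NetFlow export packets on the wire:

```
sudo tcpdump -i eth0 -n udp port 2055
```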

ntopng Usage

Now here is the usage within ntopng. Simply choose the tcp:// interface at the upper right side. All features of ntopng remain the same, such as using the Dashboard, the Flows or the Hosts pages. (Refer to my post to see some features.)

However, here comes the problem with NetFlow: it is NOT a real-time technology that lets ntopng show every single flow and its bandwidth correctly. It can be used to get a rough view of all flows during the past few seconds, but not their actual throughput at any given moment.

Refer to the following two dashboard screenshots from ntopng. The first shows the Realtime Top Application Traffic from the NetFlow probe, while the second shows the same from the mirror port eth1. The 54 MBit/s peak in the first screenshot is not real at all; in fact, it was a constant download over a few minutes. The second screenshot from eth1, by contrast, shows the correct real-time bandwidth usage.


nProbe for ntopng can be used quite easily. It is possible to receive flows from different locations which can be displayed in a single instance of ntopng. However, if the primary goal is to have a real-time look at the network, e.g., which hosts or flows are consuming bandwidth, this approach does not fit. NetFlow data must be used with statistical applications that can report traffic stats, but not with real-time analyzers such as ntopng.

How-to – Configuring ntopng to collect sFlow packets

Maybe you thought the same thing I did when I searched online for good ntopng tutorials: “damn, I’ll have to make my own”. Well, since I will have to install the whole setup again myself anyway, I prefer to write it up here and share it with you.


Just to clarify things before we get our hands dirty: ntopng is a NetFlow analyzer with a nice web interface that can analyze the traffic of its own interface. HOWEVER, it cannot work as a NetFlow collector too. That means that if you have a couple of network devices on a WAN and you want to know what kind of flows are going through your network, you will have to install a separate tool, which is also developed by the ntop guys: nProbe. Sadly, this one is not free, and you will need a license to get it working in a production environment, as the default installation imposes a 25K-flow limit per nProbe thread, after which it stops collecting them.

So to make it short, you will have to :

  • install ntopng and nprobe
  • configure your network devices to send net/sflow packets to ntopng server
  • configure nProbe to collect net/sflow packets and to stream them in JSON to ntopng
  • configure ntopng to listen for nProbe JSON streams


I used Ubuntu 12.04 amd64 with the latest updates for this setup. But I’m pretty sure it works with 14.04 too; maybe I’ll test it and update this post accordingly.

The easiest way to get these packages installed is to add the ntop repository to APT:

# Package URL is for Ubuntu 12.04; check the ntop packages page for your release
wget http://apt.ntop.org/12.04/all/apt-ntop.deb
sudo dpkg -i apt-ntop.deb

and then refresh the package list and install:

apt-get clean all
apt-get update
apt-get install nprobe ntopng

Alternatively, you can download the .deb files and install them manually (install them in the order below, because there are some dependencies):


Once you’ve downloaded the files, install them like this:

dpkg -i  pfring_6.0.1-7598_amd64.deb
dpkg -i  nprobe_6.16.140627-4223_amd64.deb
dpkg -i  ntopng_1.2.1-8121_amd64.deb
dpkg -i  ntopng-data_1.1.4-7806_all.deb

N.B.: You could check out the subversion repository and build the packages on your own, but I don’t see the point when you can directly download the pre-built packages. Note that there are pre-built packages for CentOS too.


My test server has the IPv4 address used in the commands below.

First, launch ntopng :

ntopng -i tcp:// -d /var/tmp -w 3000 -v >> /dev/null &

Then, launch the nprobe collector:

nprobe --collector-port 6343 --zmq tcp:// >> /dev/null &
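Both commands above lost their ZMQ endpoints in editing; with assumed endpoints (port 5556 is only an example, any free TCP port works as long as both sides agree), the working pair would look like:

```
ntopng -i tcp://127.0.0.1:5556 -d /var/tmp -w 3000 -v >> /dev/null &
nprobe --collector-port 6343 --zmq "tcp://*:5556" >> /dev/null &
```

nProbe binds the ZMQ socket and publishes flows as JSON; ntopng connects to that endpoint as its “interface”.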

I want packet samples from my Brocade router so I configure it:

(config)#sflow enable
(config)#sflow destination 6343
(config)#sflow polling-interval 1
(config)#sflow sample 1024

And then activate sflow forwarding on the ports you want:

(config)#interface ethernet 1/6
 (config-if-e1000-1/6)#sflow forwarding

NTOP Next-Generation network analyzer

Go to the ntopng web UI on port 3000 (as set with -w above) and log in with admin/admin. Change the password in Settings and wait for traffic to come in.

Congrats, now you can see a lot of details concerning traffic flows inside your network.


Activating the whole setup for production

The last thing to do to get this working outside your lab, in the real world, is activating nProbe. For this, you have to purchase a license (ntopng itself is free on Unix systems):

Once you have it, just generate the license file on the ntop website (composed of the order ID and system ID). Create the file like this:

 echo 10225F63D0LICENSE5216043489 > /etc/nprobe.license

Just restart nProbe; it should recognize the license and no longer limit collection to 25K flows.


Wake on lan win7
This was such a nightmare to troubleshoot that I just had to document the process. In my scouring the web, I found many like me experiencing the same woes in setting up their WOL. My whole purpose for doing this was in the interest of saving power. I could put my computer to sleep and still be able to wake it up remotely so I can RDP (Remote Desktop Protocol) into it to access my files while away. [EDIT: I recently found this great article on WoWLAN by Andrew vonNagy which details some of the benefits and downsides to using this technology, and provides a nice cost analysis for an organization case study. Check it out if you’re interested in the Wireless WOL.]

There are numerous steps to the process, so I’m first going to outline all of them to give you a nice overview of how to get setup. Henceforth I will refer to the Wake-on-Lan capability as WOL. Here is a zipped file with shortcuts to all the Control Panels you’ll need for your convenience. It also contains a script for the WOL utility linked at the end of this article. 😉


  • WOL ONLY works for Ethernet (i.e. hard-wired) connections, NOT Wireless!
  • You must use the MAC address of your Ethernet card.
  • Orange bullets relate to enabling WOL feature / services.
  • Green bullets relate to the actual routing of the packets to your computer.
  • Things you’ll need to know about your computer: LAN IP, WAN IP, and MAC address. Real quick: hit Windows key + R, type ‘cmd’, and press Enter. In the console, type ‘ipconfig /all’ and press Enter. Look for your “Ethernet card” and write down its IP (i.e., LAN IP) and MAC address. Then use a what-is-my-IP website to get your WAN IP and write it down as well.
  • This guide is designed so you don’t have to read everything, but only refer to the sections where you’re stuck or having issues. The troubleshooting tools at the bottom can help you deduce what is wrong.


  • Enable WOL in BIOS (from boot)
  • Enable WOL for your Ethernet Card (i.e. NIC)
  • Install Windows Feature “Simple TCPIP services”
  • Start Service “Simple TCP/IP Service” (enables ports 7 & 9)
  • Open UDP for Port 9 in Windows Firewall
  • Forward the port on your Router
  • Testing / Troubleshooting Tools

Because of all the various motherboards out there, I’m not going to go into how to enable WOL in your BIOS; just know that you need to. However, if your BIOS, like mine, is severely limited in its settings and no option is available to enable WOL, it might be safe to assume that it will work by default. My HP laptop has many of its BIOS settings locked for warranty purposes, but despite not having the option available, I am still able to use the feature.

Open “Device Manager” from the control panels or use the link in the zip. Expand “Network Adapters” and find your Ethernet Card. Right-click and open “Properties” then go to the “Advanced” tab. You should see something similar to the following. You want to enable “Wake on Magic Packet” or something similar.


Open “Programs and Features” from the control panels or use the link in the zip. Click “Turn Windows features on or off” over on the sidebar. Scroll down and check “Simple TCPIP services” then click OK to install the feature.


Open “Services” from the “Administrative Tools” control panels or use the link in the zip. Scroll down to the service for which we just installed the feature. Make sure the service is started by clicking the link in the sidebar. Also ensure that the “Startup Type” is set to “Automatic” so that it will run with Windows.


Open “Windows Firewall” from the control panels or use the link in the zip. Only UDP is needed, but you can open the port for TCP as well if you wish. This is because UDP is a broadcast packet which can always be received by your NIC, whereas TCP requires the computer to be powered up. This can also be set to limit the IPs which can use the port, plus other security features to make your computer less vulnerable. These are the easiest settings.

You should see your router’s manual for the details, but here are the basics. Your router IP is almost always one of the common defaults such as 192.168.0.1 or 192.168.1.1. Enter your router IP into the address bar in your web browser (Chrome ftw :P) and log in to the admin page. Once there, you will find a section called something like “Advanced.” What you are looking for is the “Port Forwarding” section. Make sure you forward port 9 to the LAN IP that we got earlier.


  • An online magic-packet scheduler – can auto-send you magic packets on a schedule, though the scheduling is messed up. I had to set the schedule time to EST while keeping the timezone set to my timezone. It’s quirky, but it works. When you get the schedule set correctly, it should tell you at the top how many minutes remain before it sends the packets. Alternately, you can use another computer on your LAN to test it, but be sure to use your computer’s WAN IP address to ensure it actually works from outside your LAN.
  • Wake-on-LAN Packet sniffer v1.1 (direct download) – This nifty little free tool was incredibly useful when paired with the above site to verify that the magic packets were actually getting through to my computer.
  • Wake-on-LAN Utility (direct download) – This is what you’ll be using from your remote location to send the magic packet which will wake up your computer. There are other utilities like this available, but I like this one. It’s simple. I will also include a batch script in a separate zip which will make your life easier. You’ll only have to edit the script to put your computer MAC and WAN IP.
  • An open-port checker – the indispensable tool for ensuring that your port is open to the outside world. It will only show open TCP ports, not UDP as used for WOL.
  • Android app with excellent reviews called “Wol Wake on Lan Wan” by Brobble, available on Android Market. (thanks to Emad)
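For the curious, the “magic packet” those utilities send is trivial to build yourself: six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast to port 9 (the port we opened in the firewall above). A minimal Python sketch — the MAC address shown is a placeholder, substitute the one you wrote down from ipconfig /all:

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: six 0xFF bytes, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, host: str = "255.255.255.255", port: int = 9) -> None:
    """Send the packet as a UDP broadcast, matching the firewall rule above."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (host, port))

# Placeholder MAC for illustration only
pkt = make_magic_packet("01:23:45:67:89:ab")
print(len(pkt))  # 102
```

From outside your LAN you would call send_wol with your WAN IP instead of the broadcast address, relying on the router port-forward from the previous step.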

NGINX Log Analysis with Elasticsearch, Logstash, and Kibana

According to Netcraft’s latest web server survey last month, NGINX is the second-most widely used web server (after Apache) among the one million busiest sites worldwide.

NGINX is popular because of its focus on concurrency, high performance, and low memory usage. It serves dynamic HTTP content and is used to handle requests, caching, and load balancing.

But despite the popularity of NGINX, it is still challenging to obtain relevant and useful information from the thousands of log entries that NGINX web servers generate every second. In this article, I will take a deeper look at NGINX logs and give three use cases on how users can leverage ELK to store, parse, and analyze their NGINX web server logs.

If you carefully watch what’s going on, NGINX logs can help reveal issues within NGINX itself as well as in other areas within your web infrastructure. NGINX logs — such as error logs — can let you know, for example, when and where your servers are failing so that you can process valid requests. In addition, NGINX generates access logs such as the amount of currently-active client connections and the overall count of client requests. Learn more about NGINX logging and monitoring.

With Elasticsearch, Logstash, and Kibana, this vast amount of log data can be collected, parsed, and stored. The digested data can then be transformed into insights that can be presented in a way so that users can receive immediate notifications and quickly find and fix the root causes of problems.

How to Parse NGINX Logs Using Logstash

One of the first and most common tasks is to ingest NGINX logs and apply some filtering and enhancements with Logstash. Here is an example of an NGINX log line and the Logstash configuration that we use in our own environment to parse such logs.

A sample NGINX access log entry:

  [10/Nov/2015:07:06:59 +0000] "POST /kibana/elasticsearch/_msearch?timeout=30000&ignore_unavailable=true&preference=1447070343481 HTTP/1.1" 200 8352 "" "Mozilla/5.0 (X11; Linux armv7l) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/45.0.2454.101 Chrome/45.0.2454.101 Safari/537.36" 0.465 0.454

The Logstash configuration to parse that NGINX access log entry:

  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
    overwrite => [ "message" ]
  }
  mutate {
    convert => [ "response", "integer" ]
    convert => [ "bytes", "integer" ]
    convert => [ "responsetime", "float" ]
  }
  geoip {
    source => "clientip"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }
  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    remove_field => [ "timestamp" ]
  }
  useragent {
    source => "agent"
  }

A sample NGINX error log:

  2015/11/10 06:49:59 [warn] 10#0: *557119 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/4/80/0000003804 while reading upstream, client:, server:, request: "GET /kibana/index.js?_b=1273 HTTP/1.1", upstream: "", host: "", referrer: ""

The Logstash configuration to parse that NGINX error log:

  grok {
    match => [ "message", "(?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<client>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server})(?:, request: %{QS:request})?(?:, upstream: \"%{URI:upstream}\")?(?:, host: %{QS:host})?(?:, referrer: \"%{URI:referrer}\")" ]
    overwrite => [ "message" ]
  }
  geoip {
    source => "client"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }
  date {
    match => [ "timestamp", "YYYY/MM/dd HH:mm:ss" ]
    remove_field => [ "timestamp" ]
  }
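Note that the two date filters use different timestamp layouts: the error log uses slashes. A quick Python equivalent of the error-log pattern (Joda-style "YYYY/MM/dd HH:mm:ss" maps to strptime's %Y/%m/%d %H:%M:%S):

```python
from datetime import datetime

# Timestamp as grokked from the error-log sample above
ts = datetime.strptime("2015/11/10 06:49:59", "%Y/%m/%d %H:%M:%S")
print(ts.isoformat())  # 2015-11-10T06:49:59
```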

These are two of the configurations that we are currently using ourselves — of course, there are more fields that can be added to the NGINX log files and then can be parsed and analyzed accordingly.

The following use cases exemplify the benefits of using ELK with NGINX logs.

NGINX Log Analysis Use Cases

Use Case #1: Operational Analysis


This is one of the most common use cases. DevOps engineers and site reliability engineers can get notifications of events such as traffic that is significantly higher than usual or an NGINX error rate that exceeds a certain level. As a result of such issues, site page response rates can slow down to undesirable levels and create a poor user experience.

By using ELK log management to analyze the NGINX error logs, users can quickly see, for example, that there is a significant decrease in the number of users who are accessing the servers, or an unprecedented peak in traffic that overloaded a server and caused it to crash. Seeing these unusual traffic patterns together in a single dashboard can indicate a DDoS attack. In response, users can quickly drill down in an ELK Stack log management solution to find the suspicious source IP address of the traffic generator and block it.

One of the most helpful visualizations and ELK Stack alerts we have is the number of log lines indicating that the cache is buffering responses to disk. You can read more here about this configuration and how to track it.

This visualization and more can be found in our ELK Apps library by searching for NGINX.

Use Case #2: Technical SEO

Quality content creation is now extremely important for SEO purposes, although it’s basically useless if Google has not crawled, parsed, and indexed the content. As shown in the dashboard above, tracking and monitoring your NGINX access log with ELK can provide you with the last Google crawl date to validate that your site is constantly being crawled by Googlebot.

By capturing and analyzing web server access logs with ELK, you can also find out if you have hit your Google crawl limits, how Google crawlers prioritize your web pages, and which URLs get the most and least attention. Learn how to use server log analysis for technical SEO.


Use Case #3: Business Intelligence

NGINX access logs contain all the information needed to run a thorough analysis of your application users, from their geographic location to the pages they visit to the experience they receive. The benefit of using ELK to monitor NGINX logs is that you can also correlate them with infrastructure-level logs and better understand your audience’s experience as it is affected by your underlying infrastructure. For example, you can analyze response times and correlate them with the CPU and memory loads on the machines to see if stronger machines would provide a better UX.

Visualizing NGINX Logs

As mentioned above, one of the biggest benefits of using the ELK Stack for NGINX log analysis is the ability to visualize analyses and correlations. Kibana allows you to create detailed visualizations and dashboards that can help you keep tabs on NGINX and identify anomalies. Configuring and building these visualizations is not always easy, but the end-result is extremely valuable.

Examples abound — starting with the most simple visualizations, showing a breakdown of requests per country, through heat maps showing users and response codes, and ending with complex line charts displaying response time per response code, broken down into sub-groups per agent and client.

To help you hit the ground running, we provide a free library of pre-made searches, visualizations and dashboards for NGINX — ELK Apps. The library includes 11 visualizations for the NGINX log format including a complete, real-time monitoring dashboard.



Log analysis for operational intelligence, business intelligence and technical SEO are just three examples of why NGINX users need to monitor logs. There are many more use cases, such as log driven development and application monitoring. In fact, not a week goes by without us learning about a new way the ELK Stack is being used by one of our customers.

We’d love to hear in the comments below how you are using ELK to analyze NGINX log files!

OSSEC rule to detect new run keys added to the registry (from the ossec-list mailing list)


I’m wondering if anyone has created (or could help me) create an OSSEC rule to detect new additions to the “run” keys in the registry.

The goal is to detect malware and fileless malware adding run keys to the registry.
If anyone has started creating rules for fileless malware detection, that would be great too.

A: by Janis Zoldners

1) Install Sysmon 5 (Sysinternals)

2) Configure registry monitoring in Sysmon configuration (xml file):

<RegistryEvent onmatch="include">
<TargetObject condition="contains">Software\Microsoft\Windows\CurrentVersion\Run</TargetObject>
<TargetObject condition="contains">Software\Microsoft\Windows\CurrentVersion\RunOnce</TargetObject>
</RegistryEvent>

3) Configure OSSEC agents to parse Sysmon eventlog:
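The agent-side snippet was stripped from the thread; the standard ossec.conf entry for reading the Sysmon event channel (this is the usual eventchannel syntax for agents that support it, not necessarily the poster’s exact configuration) is:

```
<localfile>
  <location>Microsoft-Windows-Sysmon/Operational</location>
  <log_format>eventchannel</log_format>
</localfile>
```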


4) Create OSSEC rule:

<rule id="18200" level="5">
<description>Sysmon: registry modified</description>
<info>Microsoft Sysmon</info>
</rule>


Rule: 18200 (level 5) -> ‘Sysmon: registry modified’
2016 Dec 20 WinEvtLog: Microsoft-Windows-Sysmon/Operational: Information(12): no source: SYSTEM: NT AUTHORITY: COMPUTER: Registry object added or deleted:
EventType: CreateKey
UtcTime: 2016-12-20
ProcessGuid: {6C563ED9-D21B-5858-0000-0010C79A2E07}
ProcessId: 6252
Image: C:\Program Files (x86)\Google\Chrome\Application\55.0.2883.87\Installer\setup.exe
TargetObject: \REGISTRY\USER\S-1-5*\Software\Microsoft\Windows\CurrentVersion\Run