NGINX Log Analysis with Elasticsearch, Logstash, and Kibana

According to Netcraft’s most recent web server survey, NGINX is the second-most widely used web server (after Apache) among the one million busiest sites worldwide.

NGINX is popular because of its focus on concurrency, high performance, and low memory usage. It serves HTTP content and is commonly used to handle requests, caching, and load balancing.

But despite the popularity of NGINX, it is still challenging to obtain relevant and useful information from the thousands of log entries that NGINX web servers generate every second. In this article, I will take a deeper look at NGINX logs and present three use cases that show how users can leverage ELK to store, parse, and analyze their NGINX web server logs.

If you carefully watch what’s going on, NGINX logs can help reveal issues within NGINX itself as well as in other areas of your web infrastructure. NGINX error logs, for example, can let you know when and where your servers are failing to process valid requests. In addition, NGINX generates access logs and status metrics, such as the number of currently active client connections and the overall count of client requests. Learn more about NGINX logging and monitoring.

With Elasticsearch, Logstash, and Kibana, this vast amount of log data can be collected, parsed, and stored. The digested data can then be transformed into insights that can be presented in a way so that users can receive immediate notifications and quickly find and fix the root causes of problems.

How to Parse NGINX Logs Using Logstash

One of the most common things that need to be done first is to access NGINX logs and apply some filtering and enhancements with Logstash. Here is an example of an NGINX log line and the Logstash configuration that we use to parse such logs in our own environment.

A sample NGINX access log entry:

  [10/Nov/2015:07:06:59 +0000] "POST /kibana/elasticsearch/_msearch?timeout=30000&ignore_unavailable=true&preference=1447070343481 HTTP/1.1" 200 8352 "" "Mozilla/5.0 (X11; Linux armv7l) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/45.0.2454.101 Chrome/45.0.2454.101 Safari/537.36" 0.465 0.454

The Logstash configuration to parse that NGINX access log entry:

  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
    overwrite => [ "message" ]
  }

  mutate {
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
    convert => ["responsetime", "float"]
  }

  geoip {
    source => "clientip"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }

  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    remove_field => [ "timestamp" ]
  }

  useragent {
    source => "agent"
  }
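To make the extraction concrete, here is a hedged sketch of the same access-log parsing in plain Python. This is an illustration of what the grok and mutate filters above do, not the grok implementation itself; the regex, the group names (which mirror the grok fields), and the `parse_access_line` helper are my own.

```python
import re

# Sketch of the extraction performed by the grok filter above. Group names
# mirror the COMBINEDAPACHELOG fields; the two trailing timing values are
# captured together as "extra_fields", as in the Logstash config.
ACCESS_RE = re.compile(
    r'\[(?P<timestamp>[^\]]+)\]\s+'
    r'"(?P<verb>\S+)\s+(?P<request>\S+)\s+HTTP/(?P<httpversion>[\d.]+)"\s+'
    r'(?P<response>\d+)\s+(?P<bytes>\d+)\s+'
    r'"(?P<referrer>[^"]*)"\s+"(?P<agent>[^"]*)"\s*'
    r'(?P<extra_fields>.*)'
)

def parse_access_line(line):
    m = ACCESS_RE.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    # Equivalent of the mutate/convert step: cast numeric fields.
    fields["response"] = int(fields["response"])
    fields["bytes"] = int(fields["bytes"])
    return fields
```

Running this against the sample entry above yields a dictionary with `response` and `bytes` as integers, ready for aggregation.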

A sample NGINX error log:

  2015/11/10 06:49:59 [warn] 10#0: *557119 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/4/80/0000003804 while reading upstream, client:, server:, request: "GET /kibana/index.js?_b=1273 HTTP/1.1", upstream: "", host: "", referrer: ""

The Logstash configuration to parse that NGINX error log:

  grok {
    match => [ "message", "(?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<client>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server})(?:, request: %{QS:request})?(?:, upstream: \"%{URI:upstream}\")?(?:, host: %{QS:host})?(?:, referrer: \"%{URI:referrer}\")" ]
    overwrite => [ "message" ]
  }

  geoip {
    source => "client"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }

  date {
    match => [ "timestamp", "YYYY/MM/dd HH:mm:ss" ]
    remove_field => [ "timestamp" ]
  }
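For illustration, the leading portion of that grok pattern can also be sketched as a plain regular expression. This is a simplified, hedged sketch, not the full pattern: it covers only the timestamp, severity, pid, and free-text message, leaving the optional trailing `client:`, `server:`, etc. pairs inside `errormessage`; the `parse_error_line` helper name is my own.

```python
import re

# Simplified sketch of the error-log grok pattern above. Only the fixed
# leading fields are extracted; everything after "pid#tid:" is captured
# as the error message.
ERROR_RE = re.compile(
    r'(?P<timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) '
    r'\[(?P<severity>\w+)\] '
    r'(?P<pid>\d+)#(?P<tid>\d+): '
    r'(?P<errormessage>.*)'
)

def parse_error_line(line):
    m = ERROR_RE.match(line)
    return m.groupdict() if m else None
```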

These are two of the configurations that we are currently using ourselves. Of course, more fields can be added to the NGINX log format and then parsed and analyzed accordingly.

The following use cases exemplify the benefits of using ELK with NGINX logs.

NGINX Log Analysis Use Cases

Use Case #1: Operational Analysis


This is one of the most common use cases. DevOps engineers and site reliability engineers can get notifications of events, such as when traffic is significantly higher than usual or when the NGINX error rate exceeds a certain level. When such issues occur, page response times can slow to undesirable levels and create a poor user experience.
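As a toy illustration of such a threshold alert, here is a minimal sketch. The helper names and the 5% threshold are assumptions for illustration, not part of the ELK Stack itself, which handles alerting through its own tooling.

```python
def error_rate(status_codes):
    """Fraction of requests whose HTTP status code is a 5xx server error."""
    if not status_codes:
        return 0.0
    errors = sum(1 for code in status_codes if 500 <= code < 600)
    return errors / len(status_codes)

def should_alert(status_codes, threshold=0.05):
    """True when the 5xx error rate exceeds the given threshold."""
    return error_rate(status_codes) > threshold
```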

By using ELK log management to analyze the NGINX logs, users can quickly see, for example, that there is a significant decrease in the number of users accessing the servers, or an unprecedented peak in traffic that overloaded a server and caused it to crash. When such unusual traffic patterns appear together on a single dashboard, they can indicate a DDoS attack. In response, users can quickly drill down with the ELK Stack to find the suspicious source IP addresses generating the traffic and block them.

One of the most helpful visualizations and ELK Stack alerts we have is the number of log lines indicating that NGINX is buffering upstream responses to disk. You can read more here about this configuration and how to track it.

This visualization and more can be found in our ELK Apps library by searching for NGINX.

Use Case #2: Technical SEO

Quality content creation is now extremely important for SEO purposes, but it is basically useless if Google has not crawled, parsed, and indexed the content. Tracking and monitoring your NGINX access log with ELK can provide you with the last Google crawl date to validate that your site is constantly being crawled by Googlebot.

By capturing and analyzing web server access logs with ELK, you can also find out if you have hit your Google crawl limits, how Google crawlers prioritize your web pages, and which URLs get the most and least attention. Learn how to use server log analysis for technical SEO.
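The Googlebot tracking described above boils down to a simple aggregation over parsed access-log entries. A minimal sketch, assuming entries are already parsed into (timestamp, user agent, URL) tuples; the function name and tuple shape are my own, and real Googlebot verification would also require a reverse-DNS check of the client IP.

```python
def last_googlebot_crawl(entries):
    """Return the most recent crawl time per URL for Googlebot requests.

    entries: iterable of (timestamp, agent, url) tuples parsed from the
    access log; timestamps must be comparable (e.g. ISO 8601 strings).
    """
    latest = {}
    for ts, agent, url in entries:
        if "Googlebot" in agent:
            if url not in latest or ts > latest[url]:
                latest[url] = ts
    return latest
```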


Use Case #3: Business Intelligence

NGINX access logs contain all of the information needed to run a thorough analysis of your application’s users, from their geographic locations to the pages they visit to the experience they are receiving. The benefit of using ELK to monitor NGINX logs is that you can also correlate them with infrastructure-level logs and better understand how your audience’s experience is affected by your underlying infrastructure. For example, you can analyze response times and correlate them with the CPU and memory loads on the machines to see whether stronger machines would provide a better UX.
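The correlation described above can be sketched in a few lines. This is an illustrative sketch rather than an ELK feature; the record shapes and function names are assumptions, and in practice the join happens inside Elasticsearch/Kibana rather than application code.

```python
def avg_response_by_host(records):
    """Average response time per host from (host, responsetime) pairs."""
    totals = {}
    for host, rt in records:
        s, n = totals.get(host, (0.0, 0))
        totals[host] = (s + rt, n + 1)
    return {h: s / n for h, (s, n) in totals.items()}

def correlate_with_cpu(avg_rt, cpu_load):
    """Pair each host's average response time with its CPU load metric."""
    return {h: (avg_rt[h], cpu_load.get(h)) for h in avg_rt}
```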

Visualizing NGINX Logs

As mentioned above, one of the biggest benefits of using the ELK Stack for NGINX log analysis is the ability to visualize analyses and correlations. Kibana allows you to create detailed visualizations and dashboards that can help you keep tabs on NGINX and identify anomalies. Configuring and building these visualizations is not always easy, but the end result is extremely valuable.

Examples abound — starting with the most simple visualizations, showing a breakdown of requests per country, through heat maps showing users and response codes, and ending with complex line charts displaying response time per response code, broken down into sub-groups per agent and client.
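Under the hood, a “requests per country” visualization is a terms aggregation over the geoip data added by the Logstash filter shown earlier. Here is a hedged sketch of such a query body built in Python; the `geoip.country_name.keyword` field name is an assumption that depends on your index mapping, and the helper name is my own.

```python
def requests_per_country_agg(size=10):
    """Build an Elasticsearch terms-aggregation query body that counts
    requests per country, using the fields added by the geoip filter."""
    return {
        "size": 0,  # return only aggregation buckets, no individual hits
        "aggs": {
            "requests_per_country": {
                # Assumed field name; adjust to match your index mapping.
                "terms": {"field": "geoip.country_name.keyword", "size": size}
            }
        },
    }
```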

To help you hit the ground running, we provide a free library of pre-made searches, visualizations and dashboards for NGINX — ELK Apps. The library includes 11 visualizations for the NGINX log format including a complete, real-time monitoring dashboard.



Log analysis for operational intelligence, business intelligence, and technical SEO are just three examples of why NGINX users need to monitor logs. There are many more use cases, such as log-driven development and application monitoring. In fact, not a week goes by without us learning about a new way that the ELK Stack is being used by one of our customers.

We’d love to hear in the comments below how you are using ELK to analyze NGINX log files!