
Carlos Spitzer

Senior Consultant at Red Hat



A few months ago I started to play with ELK deployments (Elasticsearch + Logstash + Kibana) and wrote a series of posts explaining how to build RPM packages for elasticsearch and logstash, how to install and configure Kibana, and how to redirect logs so they are processed by logstash filters, indexed by elasticsearch and presented in the Kibana web interface.

Today, the ELK ecosystem has improved a lot, and when I say a lot I mean a LOT. This project is probably becoming ‘a must’ for any company or customer who wants to centralize and exploit their logs, for example by grouping and analyzing them and building business rules on top of them. I have no doubt that this can be considered a standard for centralizing and interpreting logs.

To get introduced to this technology, I suggest taking a deep look at the project website to understand what ELK is and how it works. The introductory video there also gives a good overview.

How does ELK work?

The following diagram shows how this ecosystem works:

  • The logstash-forwarder reads as many local log files as you configure it for and sends them, encrypted, to the logstash server (port 5000), using the logstash certificate file.
  • The logstash server receives the logs and processes the different queues to store the data in local folders.
  • Elasticsearch performs different operations (optimizing the data structures, creating search indexes, grouping information…) to provide a better experience when accessing the information.
  • Kibana reads the logstash data structures and presents them to users through custom layouts, dashboards and filters.

This is pretty much how these different open-source projects work together. Of course, this is a very high-level diagram and there is a huge world behind each project. In this post we are going to see how to install and configure a single RHEL 7 ELK server to centralize and exploit our logs, and how to prepare the log-clients to use logstash-forwarder to send their logs to the ELK server.
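As a toy illustration of the flow above, the four stages can be mimicked with ordinary shell commands. This is purely a sketch: the file names are made up, and the real components speak the lumberjack protocol and HTTP, not pipes and local files.

```shell
#!/bin/sh
# Toy model of the ELK flow: forward -> filter -> store/query.
set -e
workdir=$(mktemp -d)

# "logstash-forwarder": a local log file that would be shipped out.
printf 'Jan  1 10:00:00 myhost sshd[123]: Accepted password for root\n' \
  > "$workdir/messages"

# "logstash": tag each event with a type, as the syslog filter will do later.
sed 's/^/type=syslog /' "$workdir/messages" > "$workdir/indexed"

# "elasticsearch"/"kibana": store the processed events and query them back.
grep 'type=syslog' "$workdir/indexed"
```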

Installing and configuring our first ELK environment

Preparing stuff

I’ve chosen Red Hat Enterprise Linux 7 because it will probably be the standard enterprise Linux distribution for the next 5-7 years. Of course, you can perform the same actions using CentOS or Fedora 20.

In my case, I have a minimal installation of RHEL7, subscribed to the following repositories (I use Red Hat Satellite 6 as deployment, software and configuration management tool):

$ yum repolist
Loaded plugins: package_upload, product-id, subscription-manager
repo id                                               repo name                                                              status
rhel-7-server-rh-common-rpms/x86_64                   Red Hat Enterprise Linux 7 Server - RH Common (RPMs)                      68
rhel-7-server-rpms/x86_64                             Red Hat Enterprise Linux 7 Server (RPMs)                               4.817
repolist: 4.885

And firewalld and SELinux enabled by default:

$ firewall-cmd --list-all
public (default, active)
  interfaces: eth0
  services: dhcpv6-client ssh
  masquerade: no
  rich rules:
$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

You must install java-1.7.0-openjdk, the Apache web server (httpd) and policycoreutils-python (the last one only if you want to keep SELinux enabled, because you will need to restore file contexts in later steps):

$ yum install -y java-1.7.0-openjdk httpd policycoreutils-python

Also, if you keep firewalld enabled to secure your server, you need to open the required ports:

$ firewall-cmd --add-service=http --permanent
$ firewall-cmd --permanent --add-port=9200/tcp
$ firewall-cmd --permanent --add-port=5000/tcp 
$ firewall-cmd --reload
$ firewall-cmd --list-all
public (default, active)
  interfaces: eth0
  services: dhcpv6-client http ssh
  ports: 9200/tcp 5000/tcp
  masquerade: no
  rich rules:

Installing and configuring Elasticsearch

First task is to create the elasticsearch repo in your server to point to the official Elasticsearch distribution:

$ rpm --import
$ cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-1.3]
name=Elasticsearch repository for 1.3.x packages
EOF

Then, install the software:

$ yum install -y elasticsearch

And perform a minimal configuration:

  • Disable dynamic scripts:

$ cat >> /etc/elasticsearch/elasticsearch.yml << EOF
script.disable_dynamic: true
EOF

  • Restrict external access: look for the property ‘network.host‘, uncomment it and set its value to ‘localhost‘.
  • Disable multicast: find ‘discovery.zen.ping.multicast.enabled‘, uncomment it and set it to ‘false‘.

Finally, configure systemd to start the daemon at boot time and start the service:

$ systemctl daemon-reload
$ systemctl enable elasticsearch.service
$ systemctl start elasticsearch.service

Note: This is a default installation. To find the default directories, please refer to this link.
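Elasticsearch needs a few seconds after `systemctl start` before it answers on port 9200. A small retry helper (my own convenience function, not part of any package) avoids racing it; here it is exercised with a trivially successful command, but in practice you would pass it something like `curl -s http://localhost:9200/`:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, one second apart, until it succeeds.
retry() {
  n=$1; shift
  while ! "$@" >/dev/null 2>&1; do
    n=$((n - 1))
    [ "$n" -gt 0 ] || return 1
    sleep 1
  done
}

# In real use: retry 30 curl -s http://localhost:9200/
retry 10 true && echo "service is up"
```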

Installing and configuring Kibana

Kibana doesn’t have a repository or RPM available, but we can download it by executing the following commands:

$ wget -P /tmp/
$ tar xvf /tmp/kibana-3.1.0.tar.gz; rm -f /tmp/kibana-3.1.0.tar.gz

Next, edit the ‘kibana-3.1.0/config.js‘ config file, find the line that specifies the elasticsearch server URL and replace the port number 9200 with 80:

elasticsearch: "http://"+window.location.hostname+":80",
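If you prefer to script that edit, a sed substitution does the same job. The sketch below operates on a throwaway stand-in for config.js rather than the real file; adapt the path to your kibana directory:

```shell
#!/bin/sh
# Rewrite the elasticsearch port from 9200 to 80 in a config.js stand-in.
set -e
kcfg=$(mktemp)
echo 'elasticsearch: "http://"+window.location.hostname+":9200",' > "$kcfg"
sed -i 's/:9200",/:80",/' "$kcfg"
cat "$kcfg"
```

For the stand-in line above, `cat` prints the URL with `:80` instead of `:9200`.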

Move the entire directory to /var/www/html/ and fix the SELinux context:

$ mv kibana-3.1.0/ /var/www/html/kibana3
$ restorecon -R /var/www/html/

Create the apache VirtualHost configuration file for kibana3 service:


<VirtualHost *:80>
  DocumentRoot /var/www/html/kibana3
  <Directory /var/www/html/kibana3>
    Allow from all
    Options -Multiviews
  </Directory>

  LogLevel debug
  ErrorLog /var/log/httpd/error_log
  CustomLog /var/log/httpd/access_log combined

  # Set global proxy timeouts
  <Proxy http://127.0.0.1:9200>
    ProxySet connectiontimeout=5 timeout=90
  </Proxy>

  # Proxy for _aliases and .*/_search
  <LocationMatch "^/(_nodes|_aliases|.*/_aliases|_search|.*/_search|_mapping|.*/_mapping)$">
    ProxyPassMatch http://127.0.0.1:9200/$1
    ProxyPassReverse http://127.0.0.1:9200/$1
  </LocationMatch>

  # Proxy for kibana-int/{dashboard,temp} stuff (if you don't want auth on /, then you will want these to be protected)
  <LocationMatch "^/(kibana-int/dashboard/|kibana-int/temp)(.*)$">
    ProxyPassMatch http://127.0.0.1:9200/$1$2
    ProxyPassReverse http://127.0.0.1:9200/$1$2
  </LocationMatch>

  <Location />
    AuthType Basic
    AuthBasicProvider file
    AuthName "Restricted"
    AuthUserFile /etc/httpd/conf.d/kibana-htpasswd
    Require valid-user
  </Location>
</VirtualHost>

Please note that on my ELK server the root directory is ‘/var/www/html/kibana3‘; adjust the hostname and paths to your own environment.

Move the VirtualHost configuration file to the Apache configuration folder (and fix SElinux labels):

$ mv kibana3.conf /etc/httpd/conf.d/
$ restorecon -R /etc/httpd/conf.d/
$ semanage port -a -t http_port_t -p tcp 9200

If you want to protect Kibana from unauthorized access, add an htpasswd entry for your user (for example ‘admin’):

$ htpasswd -c /etc/httpd/conf.d/kibana-htpasswd admin
New password: 
Re-type new password: 
Adding password for user admin
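If the htpasswd binary is missing (it ships with the httpd-tools package), an equivalent entry can be produced with openssl’s Apache-MD5 hash. The user name, password and output path below are placeholders:

```shell
#!/bin/sh
# Build an htpasswd-style "user:hash" line without the htpasswd binary.
set -e
user=admin
pass=changeme
hash=$(openssl passwd -apr1 "$pass")
printf '%s:%s\n' "$user" "$hash" > /tmp/kibana-htpasswd-example
cat /tmp/kibana-htpasswd-example
```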

And finally, enable the service at boot time and start it:

$ systemctl enable httpd
$ systemctl start httpd; systemctl status httpd

Installing and configuring Logstash

The last step is to install and configure logstash, the component responsible for receiving and processing log traces. First, create the repo file to get access to the latest logstash version:

$ cat > /etc/yum.repos.d/logstash.repo << EOF
[logstash-1.4]
name=logstash repository for 1.4.x packages
EOF

and install it:

$ yum install -y logstash

Since we are going to use Logstash Forwarder to ship logs from our servers to our Logstash server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash Forwarder to verify the identity of the Logstash server. Generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/…) with the following command:

$ cd /etc/pki/tls
$ sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash.
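To sanity-check what that openssl invocation produces, you can run the same command against a scratch directory and inspect the result before distributing it; the CN below is a placeholder for your ELK server’s name:

```shell
#!/bin/sh
# Generate a throwaway self-signed certificate/key pair and inspect it.
set -e
dir=$(mktemp -d)
openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 \
  -subj "/CN=elk-server.example.com" \
  -keyout "$dir/logstash-forwarder.key" \
  -out "$dir/logstash-forwarder.crt" 2>/dev/null
# Check the subject and expiry date before copying the .crt to clients.
openssl x509 -in "$dir/logstash-forwarder.crt" -noout -subject -enddate
```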

Make the certificate available via http (to be downloaded by other potential hosts that are going to install logstash forwarder):

$ mkdir /var/www/html/kibana3/pub
$ cp /etc/pki/tls/certs/logstash-forwarder.crt /var/www/html/kibana3/pub/
$ restorecon -R /var/www/html/kibana3

From now on, the certificate will be available for every client to download from the ‘pub‘ directory of the Kibana site.

Create the configuration file for lumberjack protocol (which is used by Logstash forwarders):

$ cat > /etc/logstash/conf.d/01-lumberjack-input.conf << EOF
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

This configuration file specifies a lumberjack input that will listen on TCP port 5000, using the SSL certificate and private key we created earlier. Now let’s create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

$ cat > /etc/logstash/conf.d/10-syslog.conf << EOF
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF

This filter looks for logs labeled with the “syslog” type (by a logstash-forwarder) and tries to use grok to parse incoming syslog lines into structured, queryable fields.
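To get a feel for what that grok pattern captures, here is a rough sed approximation of its SYSLOGTIMESTAMP/SYSLOGHOST/program/pid groups. This is a loose sketch with a made-up log line; the real grok patterns are considerably stricter:

```shell
#!/bin/sh
# Rough shell equivalent of the grok pattern: split a syslog line into
# timestamp, hostname, program, pid and message fields.
set -e
line='Feb  3 12:04:05 elk-client sshd[2187]: Accepted password for admin'
parsed=$(echo "$line" | sed -E \
  's,^([A-Z][a-z]{2} +[0-9]+ [0-9:]+) ([^ ]+) ([A-Za-z0-9_./-]+)(\[([0-9]+)\])?: (.*)$,timestamp=\1 host=\2 program=\3 pid=\5 message="\6",')
echo "$parsed"
```

For the sample line this prints `timestamp=Feb  3 12:04:05 host=elk-client program=sshd pid=2187 message="Accepted password for admin"`.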

Lastly, we will create a configuration file called 30-lumberjack-output.conf, that basically configures Logstash to store the logs in Elasticsearch:

$ cat > /etc/logstash/conf.d/30-lumberjack-output.conf << EOF
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF

One of the last steps is to restart the logstash service to apply the changes (please be aware that logstash is not systemd compliant, so we use the SysV tools):

$ chkconfig logstash on; service logstash restart

And finally, share the following files from your elk-server to any potential host-client in your network:

$ wget -P /var/www/html/kibana3/pub/
$ wget -P /var/www/html/kibana3/pub/
$ wget -P /var/www/html/kibana3/pub/
$ restorecon -R /var/www/html/kibana3/

Where:

  • logstash-forwarder-0.3.1-1.x86_64.rpm: Package for the logstash-forwarder agent.
  • logstash_forwarder_redhat_init: Example init.d script for logstash-forwarder.
  • logstash_forwarder_redhat_sysconfig: Config file for logstash-forwarder.

Install and configure logstash-forwarder on your hosts/clients

The main reason to share the files described above is to make the following steps easy. First, install the Logstash Forwarder package:

$ wget -P /tmp/ --user=<user> --password=<pass>
$ yum localinstall /tmp/logstash-forwarder-0.3.1-1.x86_64.rpm
$ rm -f /tmp/logstash-forwarder-0.3.1-1.x86_64.rpm

Then, download the logstash-forwarder init script and config file:

$ wget -O /etc/init.d/logstash-forwarder --user=<user> --password=<pass>
$ chmod +x /etc/init.d/logstash-forwarder
$ wget -O /etc/sysconfig/logstash-forwarder --user=<user> --password=<pass>

Copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

$ wget -P /etc/pki/tls/certs/ --user=<user> --password=<pass>

And create (or download from the logstash-server) the logstash-forwarder config file:

$ mkdir -p /etc/logstash-forwarder
$ cat > /etc/logstash-forwarder/logstash-forwarder.conf << EOF
{
  "network": {
    "servers": [ "" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}
EOF

The ‘servers‘ entry holds the internal IP address of my logstash-server, plus port 5000.
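Since the forwarder cannot use malformed JSON, it is worth validating the file before deploying it. The sketch below runs Python’s json.tool against a self-contained copy, with the documentation address 192.0.2.10 standing in for your real logstash-server:

```shell
#!/bin/sh
# Validate a logstash-forwarder style config file as JSON before deploying.
set -e
fwd_cfg=$(mktemp)
cat > "$fwd_cfg" << 'CFG'
{
  "network": {
    "servers": [ "192.0.2.10:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" } }
  ]
}
CFG
python3 -m json.tool < "$fwd_cfg" > /dev/null && echo "config is valid JSON"
```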

This configures Logstash Forwarder to connect to your Logstash server on port 5000 (the port we specified an input for earlier), using the SSL certificate that we created earlier. The paths section specifies which log files to send (here we specify messages and secure), and the fields section specifies that these logs are of type “syslog” (which is the type our filter is looking for).

This is where you would add more files/types to configure Logstash Forwarder to ship other log files to Logstash on port 5000.
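For example, an additional entry in the "files" array for Apache access logs could look like the sketch below; the path and the "apache-access" type name are illustrative, and your logstash filters would need a matching `if [type] == "apache-access"` block to do anything useful with them:

```json
{
  "paths": [ "/var/log/httpd/access_log" ],
  "fields": { "type": "apache-access" }
}
```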

Finally, let’s activate Logstash Forwarder at boot and start the service:

$ chkconfig --add logstash-forwarder
$ service logstash-forwarder start

You should now see log events forwarded to your elk-server when you access the Kibana dashboard.

Are you a JBoss EAP admin?

Maybe you are interested in centralizing JBoss EAP log files and metrics. If so, please check Jochen Cordes’ blog post, where you can find useful information about it.

Deep dive

If you want to go further, I suggest visiting the Elasticsearch Resources page and registering for one of the scheduled trainings in or near your hometown. For example: Elasticsearch Core Outline or the Getting Started workshop.

There is a nice O’Reilly book about to be released: Elasticsearch: The Definitive Guide.