This is an example of an ELK stack deployment, provisioned by Vagrant and Ansible.
To bring up the Vagrant environment, the following must be installed on your host machine:
- Ansible (tested with 1.9.2)
- sshpass (tested with 1.0.5)
- Vagrant (tested with 1.7.2)
There are 10 virtual machines (5.75 GB of RAM in total) defined in this stack:
| Hostname | IP | CPU | Memory | Role |
|---|---|---|---|---|
| 10.elastic | 10.10.10.10 | 4 | 768 MB | elasticsearch + topbeat |
| 11.elastic | 10.10.10.11 | 4 | 768 MB | elasticsearch + topbeat |
| 12.elastic | 10.10.10.12 | 4 | 768 MB | elasticsearch + kibana + topbeat |
| 10.logstash | 10.10.20.10 | 4 | 512 MB | logstash + topbeat |
| 11.logstash | 10.10.20.11 | 4 | 512 MB | logstash + topbeat |
| 12.logstash | 10.10.20.12 | 4 | 512 MB | logstash + redis + topbeat |
| 13.logstash | 10.10.20.13 | 4 | 512 MB | logstash + redis |
| 14.logstash | 10.10.20.14 | 4 | 512 MB | logstash + redis |
| 10.postgres | 10.10.30.10 | 4 | 512 MB | postgres + packetbeat + topbeat |
| 10.nginx | 10.10.40.10 | 4 | 512 MB | fake-app + logstash-forwarder + packetbeat + topbeat |
- The two nodes `10.elastic` and `11.elastic` are the elasticsearch cluster collecting logs from the three logstash instances `12.logstash`, `13.logstash` and `14.logstash`.
- The node `12.elastic` is the monitor node of the elasticsearch cluster. It also runs an elasticsearch instance, but with the watcher plugin for querying the `10.elastic` and `11.elastic` cluster. The kibana instance is installed on this node as well, also for querying the `10.elastic` and `11.elastic` cluster.
- The two nodes `10.logstash` and `11.logstash` are the log shippers: they receive logs sent by logstash-forwarder from the `10.nginx` node and send them to the redis instance on the node `12.logstash`.
- The node `12.logstash` is the logstash indexer, which reads and processes logs from the local redis and then indexes them into the `10.elastic` and `11.elastic` cluster.
- The node `13.logstash` is the logstash indexer for topbeat, which reads and processes logs from the local redis and then indexes them into the `10.elastic` and `11.elastic` cluster.
- The node `14.logstash` is the logstash indexer for packetbeat, which reads and processes logs from the local redis and then indexes them into the `10.elastic` and `11.elastic` cluster.
- The node `10.postgres` is the postgresql database instance for the fake app on the node `10.nginx`.
- The node `10.nginx` is the application node, which is installed with a fake nodejs app I wrote. The purpose of the fake app is to generate PostgreSQL traffic so that it can be visualized on the PostgreSQL dashboard of Kibana. Please see here for how to use it: https://github.com/rueian/fake-app. The logstash-forwarder instance on this node is responsible for shipping the local log files `/var/log/syslog` and `/var/log/auth.log`.
- All the `topbeat` instances are responsible for collecting system information like cpu usage and sending it to the node `13.logstash`.
- All the `packetbeat` instances are responsible for collecting all the http and postgresql traffic and sending the processed packets to the node `14.logstash`.
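The shipper → redis → indexer flow described above can be sketched as two logstash pipeline configs. This is only an illustration, not the actual playbook templates; the port, redis key, and certificate paths are assumptions, and the elasticsearch output option name differs between logstash versions (`host` in 1.x, `hosts` in 2.x):

```
# Hypothetical shipper config on 10.logstash / 11.logstash:
# receive from logstash-forwarder, push into redis on 12.logstash.
input {
  lumberjack {
    port => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
output {
  redis {
    host => "10.10.20.12"   # redis on 12.logstash
    data_type => "list"
    key => "logstash"
  }
}

# Hypothetical indexer config on 12.logstash:
# pop from the local redis, index into the elasticsearch cluster.
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["10.10.10.10:9200", "10.10.10.11:9200"]
  }
}
```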
If you want to change this architecture, you may need to modify these 3 files:

- `Vagrantfile`, which defines the hardware details of the machines.
- `inventory.ini`, which is the inventory of the Ansible playbook.
- `main.yml`, which is the playbook, and contains the IP configs as well.
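For orientation, an Ansible inventory for this stack might be shaped like the sketch below. The group names are assumptions, not taken from the repository; check `inventory.ini` for the real ones:

```ini
; Hypothetical sketch of inventory.ini -- actual group names may differ.
[elastic]
10.elastic  ansible_ssh_host=10.10.10.10
11.elastic  ansible_ssh_host=10.10.10.11
12.elastic  ansible_ssh_host=10.10.10.12

[logstash]
10.logstash ansible_ssh_host=10.10.20.10
11.logstash ansible_ssh_host=10.10.20.11
12.logstash ansible_ssh_host=10.10.20.12
13.logstash ansible_ssh_host=10.10.20.13
14.logstash ansible_ssh_host=10.10.20.14

[postgres]
10.postgres ansible_ssh_host=10.10.30.10

[nginx]
10.nginx    ansible_ssh_host=10.10.40.10
```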
If you want to change the logstash nodes, you also need to replace the ssl certs in `files/certs`.
The certs are used for communication between logstash-forwarder and logstash and are configured with CN `*.logstash`; therefore you must replace them if you want to change the hostnames of the logstash nodes.
See here for generating a new cert.
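As one possible approach, a self-signed cert with a matching CN can be generated with `openssl` (the file names and validity period here are illustrative; match them to what `files/certs` expects):

```shell
# Generate a self-signed cert and key with CN *.logstash.
# Output file names and the 365-day validity are illustrative.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt \
  -subj "/CN=*.logstash"
```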
Run the command:

```
$ vagrant up
```

When finished, you should be able to access Kibana at http://10.10.10.12 and see the logs.
If you make changes in the playbook, run:

```
$ vagrant provision
```
- Topbeat 1.0.0-beta3 can't generate the proc field with elasticsearch 2.0, but this is fixed in beta4.