Part 1 - Tweak OS
-
Run the following
vi /etc/sysctl.conf
-
Add the following to the end of the file and save (:wq!)
# Elasticsearch uses a hybrid mmapfs / niofs directory by default to
# store its indices. The default operating system limit on mmap counts
# is likely to be too low, which may result in out of memory exceptions.
# We can mitigate this by setting vm.max_map_count below.
vm.max_map_count=262144
# Make sure to increase the number of open file descriptors on the machine
# (or for the user running elasticsearch). Setting it to 32k or even 64k is recommended.
fs.file-max=64000
# Redis needs the following to avoid low memory conditions
vm.overcommit_memory = 1
-
Commit the change
sysctl -p
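After committing, you can sanity-check that the values took effect. A minimal sketch; the `check_min` helper is hypothetical, not part of sysctl:

```shell
# Hypothetical helper: report whether a kernel setting meets the
# minimum configured above.
check_min() {
  # usage: check_min <name> <actual> <required>
  if [ "$2" -ge "$3" ]; then
    echo "$1 OK ($2)"
  else
    echo "$1 TOO LOW ($2 < $3)"
  fi
}

# On the live host you would feed in real values, e.g.:
#   check_min vm.max_map_count "$(sysctl -n vm.max_map_count)" 262144
# Here we check the values written to /etc/sysctl.conf:
check_min vm.max_map_count 262144 262144
check_min fs.file-max 64000 64000
```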
-
Run the following
vi /etc/security/limits.conf
-
Add the following and save (:wq!)
elasticsearch soft nofile 32000
elasticsearch hard nofile 32000
elasticsearch - memlock unlimited
-
Update sources.list for our installs
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
vi /etc/apt/sources.list
-
Add the following to the end of the file and save (:wq!)
# ELK Stack
deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main
deb http://packages.elasticsearch.org/logstash/1.4/debian stable main
-
Install Oracle JDK
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update && sudo apt-get install oracle-java7-installer
-
Confirm Java install
java -version
-
Run the following to raise the locked-memory limit for the current session (the limits.conf change above covers future logins)
ulimit -l unlimited
Part 2 - Elasticsearch
-
Install Elasticsearch
sudo apt-get update && sudo apt-get install elasticsearch=1.1.1
-
Run the following to configure Elasticsearch to start automatically on boot
sudo update-rc.d elasticsearch defaults 95 10
-
Configure elasticsearch
vi /etc/init.d/elasticsearch
-
Ensure the following are uncommented / set and save (:wq!)
# Min/Max Memory
ES_MIN_MEM=512m
ES_MAX_MEM=512m
# Heap Size (defaults to 256m min, 1g max)
ES_HEAP_SIZE=512m
# Maximum number of open files
MAX_OPEN_FILES=65535
# Maximum amount of locked memory
MAX_LOCKED_MEMORY=unlimited
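For context, ES_HEAP_SIZE is expanded by the Elasticsearch startup scripts into matching JVM min and max heap flags, so the heap never resizes at runtime. A sketch of that substitution (not the actual init script):

```shell
# ES_HEAP_SIZE feeds both the JVM min and max heap flags; setting
# min == max avoids costly heap resizing while the node runs.
ES_HEAP_SIZE=512m
JAVA_OPTS="-Xms${ES_HEAP_SIZE} -Xmx${ES_HEAP_SIZE}"
echo "$JAVA_OPTS"
```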
-
Continue configuring Elasticsearch
cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
vi /etc/elasticsearch/elasticsearch.yml
-
Ensure the following are uncommented / set and save (:wq!)
bootstrap.mlockall: true # don't allow memory swapping
cluster.name: RestonES # identifies our elasticsearch cluster; must be unique if multiple elasticsearch installs share a network
node.name: "logstashsimsky" # identifies our elasticsearch node in our cluster
node.master: true # indicates if the node provides cluster management; ideally a dedicated server has this true with node.data false
node.data: true # indicates if the node stores data (meaning, shards of indices can be stored here)
path.conf: /etc/elasticsearch
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.50.101.51 # your IP here
# Search thread pool
threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100
# Index thread pool
threadpool.index.type: fixed
threadpool.index.size: 60
threadpool.index.queue_size: 200
indices.memory.index_buffer_size: 50% # give half the heap to the indexing buffer
# Set the number of shards (splits) of an index (5 by default):
index.number_of_shards: 1 # we have only one ES server, so 1 shard with no replicas
# Set the number of replicas (additional copies) of an index (1 by default):
index.number_of_replicas: 0 # we have only one ES server, so 1 shard with no replicas
-
Install Elasticsearch plugins (optional)
sudo /usr/share/elasticsearch/bin/plugin -install lukas-vlcek/bigdesk/2.4.0
sudo /usr/share/elasticsearch/bin/plugin -install mobz/elasticsearch-head
-
Restart Elasticsearch and test with the following (should see "mlockall" : true for your ES instance)
sudo service elasticsearch restart
curl http://10.50.101.51:9200
curl http://10.50.101.51:9200/_nodes/process?pretty
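The second curl returns JSON per node, and you are looking for "mlockall" : true in the process section. A grep-based check you could pipe the response into; the sample payload here is a trimmed stand-in, not real output:

```shell
# Trimmed stand-in for the /_nodes/process?pretty response; on the live
# host you would pipe curl output into the same grep, e.g.:
#   curl -s http://10.50.101.51:9200/_nodes/process?pretty | grep mlockall
sample='"process" : { "refresh_interval" : 1000, "mlockall" : true }'
if echo "$sample" | grep -q '"mlockall" : true'; then
  echo "mlockall enabled"
else
  echo "mlockall DISABLED - check MAX_LOCKED_MEMORY and bootstrap.mlockall"
fi
```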
Part 3 - Nginx (webserver) and Kibana
-
Install Nginx
sudo apt-get update && sudo apt-get install nginx
-
Install Kibana
sudo mkdir -p /srv/www
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
sudo tar xf kibana-3.1.0.tar.gz -C /srv/www/
sudo chown -R www-data:www-data /srv/www/
-
Configure Nginx for Kibana
cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bak
vi /etc/nginx/sites-available/default
-
Ensure the following and save (:wq!)
server {
        listen 80 default_server;

        root /srv/www;
        index index.html index.htm;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
        }

        location /kibana {
                alias /srv/www/kibana-3.1.0/;
                try_files $uri $uri/ =404;
        }
}
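The alias directive replaces the matched location prefix with the aliased path when resolving a request to a file on disk. A sketch of that mapping using sed on a sample request path:

```shell
# The alias directive strips the /kibana/ prefix and substitutes the
# aliased directory; this sed mimics that mapping for one request.
request="/kibana/index.html"
echo "$request" | sed 's|^/kibana/|/srv/www/kibana-3.1.0/|'
# prints /srv/www/kibana-3.1.0/index.html
```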
-
Configure Kibana to use Logstash Dashboard as Default
cp /srv/www/kibana-3.1.0/app/dashboards/default.json /srv/www/kibana-3.1.0/app/dashboards/default.json.bak
cp /srv/www/kibana-3.1.0/app/dashboards/logstash.json /srv/www/kibana-3.1.0/app/dashboards/logstash.json.bak
mv /srv/www/kibana-3.1.0/app/dashboards/logstash.json /srv/www/kibana-3.1.0/app/dashboards/default.json
mv /srv/www/kibana-3.1.0/app/dashboards/logstash.json.bak /srv/www/kibana-3.1.0/app/dashboards/logstash.json
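The four commands amount to backing both dashboards up, promoting logstash.json to be the default, and restoring logstash.json from its backup. The same rotation rerun against throwaway files to show that nothing is lost:

```shell
# Re-create the rotation above with throwaway files: after the swap,
# default.json holds the logstash dashboard and logstash.json is
# restored from its backup.
tmp=$(mktemp -d)
echo '{"title":"default"}'  > "$tmp/default.json"
echo '{"title":"logstash"}' > "$tmp/logstash.json"
cp "$tmp/default.json"  "$tmp/default.json.bak"
cp "$tmp/logstash.json" "$tmp/logstash.json.bak"
mv "$tmp/logstash.json"     "$tmp/default.json"
mv "$tmp/logstash.json.bak" "$tmp/logstash.json"
cat "$tmp/default.json"   # prints {"title":"logstash"}
rm -rf "$tmp"
```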
-
Reload Nginx
sudo service nginx reload
-
Confirm Kibana is reachable and can query Elasticsearch by going to the following in your web browser
http://10.50.101.51/kibana/
- If successful, you will see the Kibana dashboard load with results from Elasticsearch
- If unsuccessful, you will see a Kibana connection error or an Nginx error page
Part 4 - Redis (Required only for multiple Logstash installs)
-
Install Redis
sudo apt-get update && sudo apt-get install redis-server
-
Configure Redis
cp /etc/redis/redis.conf /etc/redis/redis.conf.bak vi /etc/redis/redis.conf
-
Ensure the following and save (:wq!)
daemonize yes
pidfile /var/run/redis/redis-server.pid
port 6379
bind 0.0.0.0
loglevel notice
logfile /var/log/redis/redis-server.log
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
maxmemory 500mb
maxmemory-policy allkeys-lru
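The maxmemory 500mb cap keeps queued log events from exhausting the box; Redis parses the mb suffix as mebibytes, so CONFIG GET maxmemory on the live server reports the byte value:

```shell
# "500mb" in redis.conf means 500 * 1024 * 1024 bytes; this is the
# number CONFIG GET maxmemory would report on the live server.
echo $((500 * 1024 * 1024))
# prints 524288000
```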
-
Restart Redis
sudo service redis-server restart
Part 5 - Logstash Indexer Install
-
Install Logstash
sudo apt-get update && sudo apt-get install logstash
-
Configure Logstash
cp /etc/logstash/conf.d/logstash.conf /etc/logstash/conf.d/logstash.conf.bak
vi /etc/logstash/conf.d/logstash.conf
-
Ensure the following and save (:wq!)
input {
  redis {
    host => "10.50.101.51"
    key => "logstash"
    data_type => "list"
    codec => json
  }
}
output {
  elasticsearch {
    cluster => "RestonES"
    node_name => "logstashsimsky"
  }
  if "alert" in [tags] {
    email {
      body => "Triggered in: %{message}"
      subject => "Logstash Alert"
      from => "no-reply.logstash@blackboardss.com"
      to => "lora.brock@blackboard.com"
      via => "sendmail"
    }
  }
}
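For context on how events arrive: shippers on other hosts push JSON events onto the "logstash" Redis list that the input above pops, and the email output only fires for events carrying the "alert" tag. A local sketch of that tag check; the event payload is a made-up example:

```shell
# Made-up example event in the shape a shipper would push onto the
# "logstash" list, e.g.: redis-cli -h 10.50.101.51 RPUSH logstash "$event"
event='{"message":"disk full","tags":["alert"]}'
# The output section mails only events carrying the "alert" tag:
if echo "$event" | grep -q '"alert"'; then
  echo "would trigger email output"
else
  echo "indexed to elasticsearch only"
fi
```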
-
Restart Logstash
sudo service logstash restart
-
Run the following to hold Elasticsearch and Logstash at their installed versions, since automatic upgrades to newer releases could cause compatibility issues
sudo aptitude hold elasticsearch logstash
Part 6 - Sendmail Install
-
Run the following
sudo apt-get install sendmail
-
Configure Sendmail
cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.bak
vi /etc/mail/sendmail.cf
-
Ensure the following and save (:wq!)
# "Smart" relay host (may be null) DSsmtp.inapps.presidium.inc
-
Restart Sendmail
sudo service sendmail restart