
Hands-On Practice with the ELK Stack

Geoff Burke · 9 min to read

SIEM & Monitoring Blog Series: ELK Stack

In this second installment of our ELK Stack mini-series within the broader SIEM & Monitoring Blog Series, we shift from foundational concepts to hands-on implementation. This post walks you through setting up a functional ELK test environment using Docker on an Ubuntu virtual machine, giving you a sandbox to explore how Elasticsearch, Logstash, and Kibana work together to process and visualize data. 

Whether you’re a backup administrator or an IT professional, this guide offers a customizable lab where you can experiment with parsing Veeam Backup & Replication Syslog messages and forwarding them into your ELK stack. The goal is to provide a safe, flexible environment for learning and testing, not production deployment. 

Docker Setup 

Requirements: You will need an Ubuntu server VM.  

1. First, install Docker:

curl -fsSL https://get.docker.com | sh  

sudo usermod -aG docker $USER 

2. Log out and log back in so the group change takes effect, then verify that Docker was installed successfully:

docker --version  

docker compose version 

ELK Setup  

1. Run the following commands in your Ubuntu VM:

mkdir elk-stack 

cd elk-stack 

mkdir -p logstash/pipeline   

mkdir -p elasticsearch 

2. Create the Elasticsearch config file: 

cat > elasticsearch/elasticsearch.yml << 'EOF' 

cluster.name: "docker-cluster" 

network.host: 0.0.0.0 

discovery.type: single-node 

xpack.security.enabled: false 

xpack.monitoring.collection.enabled: true 

EOF 

3. Next, we will create the pipeline configuration for Logstash.

4. On the command line, paste this command to create the Logstash configuration file:

cat > logstash/pipeline/logstash.conf << 'EOF' 

input {
  # JSON events over TCP (for testing and application logs)
  tcp {
    port => 5000
    codec => json_lines
    tags => ["tcp"]
  }
  # Veeam VBR sends syslog here (docker-compose publishes UDP 5514)
  syslog {
    port => 5514
    tags => ["syslog"]
  }
}

# Tag and enrich Veeam events, then extract the job name and outcome
filter {
  if "syslog" in [tags] {
    if [program] =~ /Veeam/ {
      mutate {
        add_field => { "source_type" => "veeam" }
        add_field => { "category" => "backup" }
        add_field => { "vendor" => "veeam" }
      }

      if [message] =~ /Job \[/ {
        grok {
          match => { "message" => "Job \[%{DATA:job_name}\]" }
        }
      }

      if [message] =~ /Success/ {
        mutate { add_field => { "backup_status" => "success" } }
      } else if [message] =~ /Warning/ {
        mutate { add_field => { "backup_status" => "warning" } }
      } else if [message] =~ /Error|Failed/ {
        mutate { add_field => { "backup_status" => "error" } }
      }
    }
  }

  mutate {
    add_field => { "processed_by" => "logstash" }
  }
}

# Route Veeam events to their own daily index
output {
  if [source_type] == "veeam" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "veeam-logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "logs-%{+YYYY.MM.dd}"
    }
  }
}
EOF
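Optionally, once the containers from the next step are running, you can ask Logstash to validate this file without processing any events. This is just a sanity check; the --path.data flag points the test run at a scratch directory so it does not clash with the running instance:

docker compose exec logstash bin/logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf --path.data /tmp/logstash-config-test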

 

5. Next, create the docker-compose.yml file that defines all the components of our test lab:

cat > docker-compose.yml << 'EOF' 

 

services: 

  elasticsearch: 

    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0 

    container_name: elasticsearch 

    environment: 

      - discovery.type=single-node 

      - xpack.security.enabled=false 

      - cluster.name=docker-cluster 

      - network.host=0.0.0.0 

      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" 

    ports: 

      - "9200:9200" 

    volumes: 

      - elasticsearch_data:/usr/share/elasticsearch/data 

    networks: 

      - elk 

 

  logstash: 

    image: docker.elastic.co/logstash/logstash:7.17.0 

    container_name: logstash 

    ports: 

      - "5044:5044" 

      - "5000:5000/tcp" 

      - "5514:5514/udp" 

      - "9600:9600" 

    volumes: 

      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro 

    environment: 

      - "LS_JAVA_OPTS=-Xmx256m -Xms256m" 

      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200 

    depends_on: 

      - elasticsearch 

    networks: 

      - elk 

 

  kibana: 

    image: docker.elastic.co/kibana/kibana:7.17.0 

    container_name: kibana 

    ports: 

      - "5601:5601" 

    environment: 

      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200 

      - SERVER_HOST=0.0.0.0 

    depends_on: 

      - elasticsearch 

    networks: 

      - elk 

 

volumes: 

  elasticsearch_data: 

 

networks: 

  elk: 

    driver: bridge 

EOF 
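Before starting the stack, you can optionally have Docker Compose parse and validate the file, which catches YAML indentation mistakes early (the command prints nothing if the file is valid):

docker compose config --quiet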

 

6. Now bring up the containers:

docker compose up -d
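Before moving on, it is worth confirming that all three containers started and that the APIs respond. Assuming you are running these commands on the Docker host itself, a quick check looks like this (Elasticsearch can take a minute or two to come up, so retry if the first curl fails):

docker compose ps

curl "http://localhost:9200/_cluster/health?pretty"

curl "http://localhost:9600/_node/pipelines?pretty"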
Accessing Kibana in Your Browser

Now that everything is set up, you can open Kibana in your browser. By default, Kibana runs on port 5601, so you’ll need to navigate to the appropriate URL based on where your Docker environment is hosted. 

For example, if your Docker setup is running on a VM named docker01, you would go to: 

http://docker01:5601 

If your VM has a custom DNS name like geoffsvm, then the URL would be: http://geoffsvm:5601 

Alternatively, if you don’t have a DNS name configured, you can use the VM’s IP address instead: 

http://[your-vm-ip]:5601

Whichever form you use, make sure you’re pointing to the correct host and using port 5601, which is the default for Kibana.
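If the page does not load, you can check Kibana’s state from the Docker host with its status API and the container logs (assuming local access):

curl "http://localhost:5601/api/status"

docker compose logs --tail 20 kibana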

Note: If at any time you want to start fresh, run the following commands to remove all containers, networks, and volumes:

docker compose down --volumes --remove-orphans 

docker system prune -a --volumes -f  

Configuring Veeam VBR to Send Syslog Data to ELK 

To see the stack in action, we will configure Veeam VBR to forward its syslog messages to ELK:

1. Go to the upper left-hand drop-down menu and click on “Options.” 

2. Then click on the “Event Forwarding” tab.

3. Here, we will add the syslog server that we created in Docker. 

Click on “Add” and type in the details.  

Then, click on either: 

  • “Apply” to save and keep the window open.
  • “OK” to save and close the window.

 

Once you press Apply or OK, Veeam will send its first test syslog message to your ELK stack.
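If you want to generate extra test events without waiting on VBR, you can simulate a Veeam-style message with the logger utility that ships with Ubuntu. The tag and message text below are invented for illustration; what matters is that the tag contains “Veeam” (which our filter keys on) and that the message matches the Job [...] and Success patterns. The --rfc3164 flag keeps the format compatible with Logstash’s syslog input, and docker01 stands in for your Docker host:

logger -d -n docker01 -P 5514 --rfc3164 -t Veeam.Backup "Job [Test Job] finished with Success"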

Verify Log Discovery in Kibana

4. Navigate back to the Kibana interface. 

5. In the upper left drop-down menu, click on “Discover.” There you will be prompted to create a new data view, and you will see the incoming syslog data broken down by field:

6. Click on “Create data view.” 

You can see that our Elasticsearch server has already received some syslog messages from Veeam. When you configure the syslog server settings in Veeam, it automatically sends a test message. This allows you to verify that your monitoring or SIEM system is receiving the logs correctly.

We'll name our data view Veeam_Logs and use an index pattern that matches the logs already ingested. The pipeline above writes Veeam events to indices named veeam-logs-YYYY.MM.dd, so the pattern veeam-logs-* will match them.
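If you prefer the command line, you can confirm from the Docker host which indices the pipeline has created before typing the pattern:

curl "http://localhost:9200/_cat/indices/veeam-*?v"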

7. Click on “Save data view to Kibana.”

8. Press "Discover" in the menu on the left-hand side of the page to see the log messages from VBR.

Conclusion

With your ELK lab up and running, you've taken a major step toward building a functional SIEM environment. In this post, we set up Docker containers for Elasticsearch, Logstash, and Kibana, configured a Logstash pipeline for Veeam syslog messages, and explored those logs in Kibana. You now have a solid foundation for ingesting and visualizing log data in a controlled test environment.

In the final post of this mini-series, coming next month, we’ll build on this foundation by diving into Elastic Security and exploring how it enhances threat detection and alerting within the ELK ecosystem. We’ll also introduce Fluentd as an alternative to Logstash, comparing the two tools and showing how to configure Fluentd to forward logs to Elasticsearch. If you're interested in security monitoring use cases, dashboard creation, and optimizing your log pipeline, you won’t want to miss what’s next.
