Using Splunk for Security Monitoring

Overview

As a newly hired SOC analyst at Vandalay Industries, I was tasked with leveraging Splunk to develop searches, custom reports, and alerts to enhance security monitoring. Vandalay Industries has been experiencing DDoS attacks that have disrupted their online business, impacting system uptime and network performance.

This post details how I used Splunk to analyze the impact of a recent DDoS attack and create a report to track network performance metrics.

Part 1: The Need for Speed

Background

Vandalay Industries’ web servers suffered a DDoS attack, causing significant downtime and degraded network performance. To assess the impact, I analyzed network speed data provided by the networking team and created a Splunk report.

Splunk Analysis

Steps Taken:

  1. Used the eval command to create a custom field called ratio, which represents the upload-to-download speed ratio.

  2. Created a statistics report using the table command to display the following fields (a combined SPL sketch of both steps follows this list):

    • _time
    • IP_ADDRESS
    • DOWNLOAD_MEGABITS
    • UPLOAD_MEGABITS
    • ratio
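
Below is a minimal SPL sketch of how these two steps could be combined into a single search. The source name (speedtest.csv) is an assumption for illustration; the field names match the report columns above.

    source="speedtest.csv"
    | eval ratio = round(UPLOAD_MEGABITS / DOWNLOAD_MEGABITS, 2)
    | table _time, IP_ADDRESS, DOWNLOAD_MEGABITS, UPLOAD_MEGABITS, ratio
    | sort _time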

Key Findings:

  • Approximate Date and Time of the Attack: 2020-02-23 14:30:30
  • Time to Recovery: 9 Hours
    • Attack Start Time: 14:30
    • Systems Fully Recovered: 23:30

Screenshot: Time of Attack

Screenshot: Time of Attack Report

Part 2: Are We Vulnerable?

Background

Due to the frequency of attacks, my manager needed assurance that sensitive customer data on Vandalay’s servers was not vulnerable. Since Vandalay uses Nessus vulnerability scanners, I analyzed the last 24 hours of scan results to check for critical vulnerabilities.

Splunk Analysis

Steps Taken:

  1. Ran the following Splunk query to check for critical vulnerabilities on the customer database server (IP: 10.11.36.23):

    source="nessus_logs.csv" dest_ip="10.11.36.25" severity="critical"
    
    • This query returned 49 critical vulnerabilities.

    Screenshot: Critical Vuln

  2. Created an alert to monitor this server daily (a sketch of the underlying scheduled search follows this list):

    • If the number of critical vulnerabilities exceeds 49, an email alert is sent to soc@vandalay.com.

    Screenshot: Critical Vuln Alert
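
Below is a sketch of the scheduled search a daily alert like this could be built on. The stats/where combination is one way to express the trigger condition and is meant as an illustration rather than the exact saved search.

    source="nessus_logs.csv" dest_ip="10.11.36.23" severity="critical"
    | stats count AS critical_vulns
    | where critical_vulns > 49

Scheduled to run once a day, this search only returns a row when the count exceeds 49, so the alert can simply trigger on "number of results greater than zero" and send the notification email to soc@vandalay.com.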

Part 3: Drawing the (Base)line

Background

A Vandalay server experienced brute force attacks on its administrator account. Management requested monitoring to detect and alert the SOC team in case of future attacks.

Splunk Analysis

Steps Taken:

  1. Identified the attack timeframe:
    • Date: Friday, February 21, 2020
    • Time: ~08:00 to 14:00

    Screenshot: Brute Force Time

  2. Established a baseline and threshold for alerting:
    • Normal failed login attempts: 8-23 per hour
    • Baseline: 23 failed login attempts per hour
    • Threshold (baseline × 1.5): 23 × 1.5 ≈ 34 failed login attempts per hour
    • Alert condition: any hour with more than 34 failed login attempts triggers an alert.
  3. Created an alert:
    • The alert runs hourly.
    • If failed login attempts exceed 34 per hour, an email alert is sent to soc@vandalay.com (a sketch of the underlying hourly search follows this list).
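
Below is a minimal sketch of the hourly search such an alert could run. The source and signature values are assumptions for illustration; the real search would use whatever source and field names the server's authentication logs have in Splunk.

    source="windows_server_logs.csv" signature="An account failed to log on"
    | bin _time span=1h
    | stats count AS failed_logins BY _time
    | where failed_logins > 34

Run on an hourly schedule over the previous hour, this search returns a row only when the threshold is crossed, which is what triggers the email to soc@vandalay.com.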

Reasoning for Threshold

The threshold accounts for natural fluctuations in login activity, so genuine brute force attacks, which involve rapid and repeated login attempts, are detected while keeping false positives to a minimum.

Conclusion

During this project, I demonstrated my skills with Splunk, including:

  • Writing Splunk search queries
  • Working with fields, including calculated fields created with eval
  • Creating custom reports
  • Designing custom alerts

This hands-on experience reinforced my ability to leverage Splunk for real-world security monitoring and incident response.