Thresholds

For a quick lookup of syntax and metrics, see the Thresholds Quick Reference.

Thresholds allow you to define performance criteria that must be met for a load test to pass. This is essential for CI/CD integration where you want automated tests to fail if performance degrades.

Overview

When a threshold fails, crankfire exits with a non-zero exit code, making it easy to integrate into continuous integration pipelines. Thresholds are evaluated after the test completes and results are displayed alongside the regular metrics.

Threshold Format

Thresholds follow this format:

metric:aggregate operator value

For example:
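http_req_duration:p95 < 500

Here http_req_duration is the metric, p95 is the aggregate, < is the operator, and 500 is the value: the 95th-percentile request latency must stay below 500 milliseconds.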

Supported Metrics

http_req_duration

Measures request latency in milliseconds.

Supported aggregates: avg, max, and percentiles including p50, p90, p95, and p99.

Examples:

thresholds:
  - "http_req_duration:p95 < 500"
  - "http_req_duration:p99 < 1000"
  - "http_req_duration:avg < 200"
  - "http_req_duration:max < 2000"

http_req_failed

Measures request failures.

Supported aggregates: rate (failed requests as a fraction of the total) and count (total number of failed requests).

Examples:

thresholds:
  - "http_req_failed:rate < 0.01"    # Less than 1% failures
  - "http_req_failed:count < 10"     # Less than 10 total failures

http_requests

Measures request throughput and count.

Supported aggregates: rate (requests per second) and count (total number of requests).

Examples:

thresholds:
  - "http_requests:rate > 100"       # At least 100 RPS
  - "http_requests:count > 1000"     # At least 1000 total requests

Supported Operators

Thresholds support five comparison operators: <, <=, >, >=, and ==.

Configuration

YAML Configuration

target: https://api.example.com
method: GET
concurrency: 50
duration: 1m
thresholds:
  - "http_req_duration:p95 < 500"
  - "http_req_duration:p99 < 1000"
  - "http_req_failed:rate < 0.01"
  - "http_requests:rate > 100"

JSON Configuration

{
  "target": "https://api.example.com",
  "method": "GET",
  "concurrency": 50,
  "duration": "1m",
  "thresholds": [
    "http_req_duration:p95 < 500",
    "http_req_duration:p99 < 1000",
    "http_req_failed:rate < 0.01",
    "http_requests:rate > 100"
  ]
}

Command Line

Use the --threshold flag (repeatable):

crankfire --target https://api.example.com \
  --concurrency 50 \
  --duration 1m \
  --threshold "http_req_duration:p95 < 500" \
  --threshold "http_req_failed:rate < 0.01"

Output

Human-Readable Output

When thresholds are defined, the output includes a “Thresholds” section:

--- Load Test Results ---
Total Requests:    1000
Successful:        998
Failed:            2
Duration:          10.2s
Requests/sec:      98.04

Latency:
  Min:             12ms
  Max:             156ms
  Mean:            45ms
  P50:             42ms
  P90:             68ms
  P99:             112ms

Thresholds:
  ✓ http_req_duration:p95 < 500: 90.50 < 500.00
  ✓ http_req_duration:p99 < 1000: 112.00 < 1000.00
  ✓ http_req_failed:rate < 0.01: 0.00 < 0.01
  ✓ http_requests:rate > 50: 98.04 > 50.00

Threshold Summary: 4/4 passed

A ✓ indicates a passing threshold, while a ✗ indicates a failure.
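For instance, a run that misses its p95 budget would report it like this (values illustrative):

Thresholds:
  ✗ http_req_duration:p95 < 500: 612.30 < 500.00

Threshold Summary: 3/4 passed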

JSON Output

With --json-output, thresholds are included in the JSON response:

{
  "total": 1000,
  "successes": 998,
  "failures": 2,
  "requests_per_sec": 98.04,
  "p95_latency_ms": 90.50,
  "p99_latency_ms": 112.00,
  "thresholds": {
    "total": 4,
    "passed": 4,
    "failed": 0,
    "results": [
      {
        "threshold": "http_req_duration:p95 < 500",
        "metric": "http_req_duration",
        "aggregate": "p95",
        "operator": "<",
        "expected": 500,
        "actual": 90.50,
        "pass": true
      },
      {
        "threshold": "http_req_duration:p99 < 1000",
        "metric": "http_req_duration",
        "aggregate": "p99",
        "operator": "<",
        "expected": 1000,
        "actual": 112.00,
        "pass": true
      },
      {
        "threshold": "http_req_failed:rate < 0.01",
        "metric": "http_req_failed",
        "aggregate": "rate",
        "operator": "<",
        "expected": 0.01,
        "actual": 0.002,
        "pass": true
      },
      {
        "threshold": "http_requests:rate > 50",
        "metric": "http_requests",
        "aggregate": "rate",
        "operator": ">",
        "expected": 50,
        "actual": 98.04,
        "pass": true
      }
    ]
  }
}
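In CI scripts you can act on these fields with standard JSON tooling. A minimal sketch using jq, with field names taken from the output above:

crankfire --config perf-test.yaml --json-output > results.json

# If any threshold failed, print the failing expressions
if [ "$(jq '.thresholds.failed' results.json)" -gt 0 ]; then
  jq -r '.thresholds.results[] | select(.pass == false) | .threshold' results.json
fi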

Exit Codes

crankfire exits with code 0 when all thresholds pass and with a non-zero code when any threshold fails. This makes it easy to use in CI/CD pipelines:

#!/bin/bash
crankfire --config loadtest.yaml
if [ $? -eq 0 ]; then
  echo "✓ Performance test passed"
else
  echo "✗ Performance test failed"
  exit 1
fi

CI/CD Integration Examples

GitHub Actions

name: Performance Tests

on: [push, pull_request]

jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Install crankfire
        run: |
          curl -L https://github.com/torosent/crankfire/releases/latest/download/crankfire-linux-amd64 -o crankfire
          chmod +x crankfire
      
      - name: Run performance test
        run: |
          ./crankfire --config perf-test.yaml --json-output > results.json
          
      - name: Upload results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: performance-results
          path: results.json

GitLab CI

performance_test:
  stage: test
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
    - curl -L https://github.com/torosent/crankfire/releases/latest/download/crankfire-linux-amd64 -o crankfire
    - chmod +x crankfire
  script:
    - ./crankfire --config perf-test.yaml --json-output > results.json
  artifacts:
    when: always
    paths:
      - results.json

Jenkins

pipeline {
    agent any
    
    stages {
        stage('Performance Test') {
            steps {
                sh '''
                    curl -L https://github.com/torosent/crankfire/releases/latest/download/crankfire-linux-amd64 -o crankfire
                    chmod +x crankfire
                    ./crankfire --config perf-test.yaml --json-output > results.json
                '''
            }
            post {
                always {
                    archiveArtifacts artifacts: 'results.json', fingerprint: true
                }
            }
        }
    }
}

Best Practices

Start Conservative

Begin with generous thresholds and tighten them as you understand your system’s baseline:

thresholds:
  # Start with generous thresholds
  - "http_req_duration:p95 < 2000"
  - "http_req_failed:rate < 0.05"

After establishing baseline performance, tighten gradually:

thresholds:
  # Tightened after establishing baseline
  - "http_req_duration:p95 < 500"
  - "http_req_failed:rate < 0.01"

Use Multiple Percentiles

Don’t rely on just one percentile. Use a combination to catch different types of degradation:

thresholds:
  - "http_req_duration:p50 < 100"   # Median should be fast
  - "http_req_duration:p90 < 200"   # Most requests should be fast
  - "http_req_duration:p99 < 500"   # Even outliers should be reasonable

Combine Different Metrics

Use both latency and failure rate thresholds:

thresholds:
  - "http_req_duration:p95 < 500"
  - "http_req_failed:rate < 0.01"
  - "http_requests:rate > 100"

Environment-Specific Thresholds

Use different thresholds for different environments, either via separate config files or command-line overrides:

# Development - more lenient
crankfire --config perf-test.yaml --threshold "http_req_duration:p95 < 1000"

# Production - strict
crankfire --config perf-test.yaml --threshold "http_req_duration:p95 < 200"
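One way to wire this into a pipeline is to pick the limit from an environment variable. A minimal sketch; DEPLOY_ENV and the specific limits are illustrative:

#!/bin/bash
# Choose a latency budget per environment
case "$DEPLOY_ENV" in
  production) p95_limit=200 ;;
  *)          p95_limit=1000 ;;
esac

crankfire --config perf-test.yaml \
  --threshold "http_req_duration:p95 < $p95_limit"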

Troubleshooting

Threshold Not Evaluating

If a threshold doesn’t seem to be evaluating:

  1. Check the format: Ensure it follows metric:aggregate operator value
  2. Verify the metric name: Must be exactly http_req_duration, http_req_failed, or http_requests (see the example after this list)
  3. Check the aggregate: Must be one of the supported aggregates for that metric
  4. Validate the operator: Must be <, <=, >, >=, or ==
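As an example of point 2, a misspelled metric name will not match any supported metric. The first threshold below is deliberately wrong:

thresholds:
  - "http_duration:p95 < 500"        # wrong: misspelled metric name
  - "http_req_duration:p95 < 500"    # correct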

Flaky Thresholds

If thresholds fail inconsistently:

  1. Increase test duration to get more stable statistics
  2. Use lower percentiles (p90 instead of p99); tail percentiles are noisier and more prone to intermittent failures
  3. Add more concurrency to smooth out variance
  4. Check network stability in your CI environment

Threshold Too Strict

If legitimate changes cause threshold failures:

  1. Re-baseline your thresholds based on current performance
  2. Use ranges with both upper and lower bounds, as shown below
  3. Consider percentages instead of absolute values for scaling scenarios
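For point 2, a range is expressed as two thresholds on the same metric and aggregate; the bounds below are illustrative:

thresholds:
  - "http_requests:rate > 90"     # lower bound
  - "http_requests:rate < 120"    # upper bound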

Examples

Basic API Test

target: https://api.example.com/users
method: GET
concurrency: 20
duration: 30s
thresholds:
  - "http_req_duration:p95 < 200"
  - "http_req_failed:rate < 0.01"

High-Throughput Test

target: https://api.example.com/search
method: POST
body: '{"query":"test"}'
concurrency: 100
rate: 1000
duration: 1m
thresholds:
  - "http_req_duration:p99 < 1000"
  - "http_req_failed:rate < 0.05"
  - "http_requests:rate > 800"

Strict SLA Test

target: https://api.example.com/critical
method: GET
concurrency: 50
duration: 5m
thresholds:
  - "http_req_duration:p50 < 50"
  - "http_req_duration:p90 < 100"
  - "http_req_duration:p95 < 150"
  - "http_req_duration:p99 < 300"
  - "http_req_failed:rate < 0.001"
  - "http_requests:rate > 200"