Security Automation: From Manual to Continuous

July 17, 2017

Security teams are outnumbered. A handful of security engineers reviewing work from hundreds of developers can’t keep pace. Manual security reviews become bottlenecks. Developers ship without review. Vulnerabilities reach production.

The solution isn’t more security engineers—it’s security automation. Integrate security checks into development pipelines. Automate what can be automated. Reserve human expertise for what requires judgment.

This is DevSecOps: security as code, security in the pipeline, security as everyone’s responsibility.

The Case for Automation

Scale

Manual security review doesn’t scale. If your security team can review one deployment per hour and developers deploy twenty times per day, the math defeats you.

Automated checks scale with the pipeline: every commit, every pull request, every deployment can be checked without human involvement.

Speed

Manual review takes time. Developers wait for security sign-off. Features queue behind review backlog. Security becomes a reason things ship late.

Automated checks run in minutes. Fast feedback enables developers to fix issues immediately, not days later.

Consistency

Manual review quality varies. Tired reviewers miss things. Different reviewers have different focuses. Coverage is inconsistent.

Automated checks are consistent. Every check runs every time. Nothing is skipped because someone was busy.

Documentation

Manual review knowledge lives in heads. Why was this change approved? What was checked?

Automated checks are documented in configuration. The pipeline defines what security checks run. Results are logged.

Building the Security Pipeline

Pre-Commit

Before code reaches the repository, catch issues locally.

Secrets detection: Prevent credentials in code.

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.0.0
    hooks:
      - id: detect-secrets

Linting: Security-focused linting rules.

  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.0
    hooks:
      - id: bandit
        args: ['-r', 'src/']

Pre-commit checks are instant feedback. Developers fix issues before committing.

Pull Request Checks

When code is proposed for merge, run comprehensive checks.

Static Application Security Testing (SAST): Analyze source code for vulnerability patterns.

# CI pipeline
security-scan:
  script:
    - semgrep --config=auto src/
    - bandit -r src/
  allow_failure: false

SAST tools find patterns visible in source code: SQL injection, cross-site scripting, hardcoded credentials, and calls to known-insecure functions.

Dependency scanning: Check dependencies for known vulnerabilities.

dependency-check:
  script:
    - safety check -r requirements.txt
    - npm audit --audit-level=high

A large share of vulnerabilities enter through dependencies rather than first-party code. Automated scanning catches the known ones.

Infrastructure as Code scanning: Check Terraform, CloudFormation for misconfigurations.

iac-scan:
  script:
    - checkov -d terraform/
    - cfn_nag_scan --input-path cloudformation/

Misconfigured infrastructure is a common attack vector. Catch it before deployment.

Container scanning: Check container images for vulnerabilities.

container-scan:
  script:
    - trivy image our-app:latest

Container base images contain packages with vulnerabilities. Regular scanning identifies them.

Merge Requirements

Configure repositories to require security checks before merge.

# GitHub branch protection
required_status_checks:
  strict: true
  contexts:
    - "security-scan"
    - "dependency-check"
    - "container-scan"

Developers can’t merge code that fails security checks. Security gates are non-negotiable.

Post-Merge

After code merges, run additional checks.

Dynamic Application Security Testing (DAST): Test running applications for vulnerabilities.

dast-scan:
  script:
    - zap-baseline.py -t https://staging.example.com
  only:
    - main

DAST finds issues SAST can’t: server misconfiguration, broken authentication, missing security headers.

Integration tests: Security-focused integration tests.

def test_authentication_required(client):
    # client: test fixture wrapping the application's HTTP test client
    response = client.get('/api/private')
    assert response.status_code == 401

def test_rate_limiting(client):
    # Hammer the endpoint, then confirm the rate limiter answers with 429.
    for _ in range(100):
        client.get('/api/login')
    response = client.get('/api/login')
    assert response.status_code == 429
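
The same suite can assert security headers, complementing what DAST checks from the outside. A minimal sketch, assuming the same client fixture; the exact header set is application-specific:

def test_security_headers(client):
    # Illustrative expectations; adjust to the headers your application should send.
    response = client.get('/')
    assert response.headers.get('X-Content-Type-Options') == 'nosniff'
    assert response.headers.get('X-Frame-Options') in ('DENY', 'SAMEORIGIN')
    assert 'Strict-Transport-Security' in response.headers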

Production

Security monitoring in production.

Runtime protection: Web application firewalls, runtime application self-protection (RASP).

Security monitoring: Log analysis, anomaly detection, intrusion detection.

Vulnerability management: Continuous scanning of production systems.
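
Continuous scanning can reuse the same pipeline tooling on a schedule rather than only at deploy time. A minimal sketch, assuming GitLab-style scheduled pipelines; the registry path is illustrative:

production-vuln-scan:
  script:
    - trivy image registry.example.com/our-app:production
  only:
    - schedules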

Tool Selection

SAST Tools

Semgrep (multi-language, supports custom rules) and Bandit (Python) appear in the examples above. Pick tools that cover your languages and let you write rules for your own patterns.

Dependency Scanning

Safety covers Python requirements files; npm audit covers Node.js packages. Use whatever scanner understands your package manager and draws on an actively maintained vulnerability database.

Container Scanning

Trivy, used above, scans image layers for vulnerable OS packages and libraries.

DAST Tools

OWASP ZAP provides the zap-baseline.py scan shown above, which is fast enough for a pipeline; full active scans take longer and belong on a schedule.

IaC Scanning

Checkov covers Terraform; cfn_nag covers CloudFormation. Both run from the command line and drop directly into CI.

Making It Work

Start Simple

Don’t implement everything at once. Start with high-value, low-friction checks:

  1. Secrets detection (high value, low false positives)
  2. Dependency scanning (known vulnerabilities)
  3. Basic SAST rules (SQL injection, XSS)

Add more checks as the team adapts.
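
One way to begin is a single job that reuses the commands already shown. A sketch, assuming the pre-commit configuration above is committed to the repository and the pre-commit tool is installed in CI:

starter-security:
  script:
    - pre-commit run --all-files          # runs the secrets detection and Bandit hooks defined above
    - safety check -r requirements.txt    # known vulnerabilities in dependencies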

Manage False Positives

Security tools produce false positives. Unmanaged, they train developers to ignore warnings.

False positives are a calibration problem, not a reason to abandon automation: tune rules to your codebase, suppress individual findings with a recorded justification, and track false-positive rates per tool.
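
Most scanners support inline suppressions for exactly this calibration. Bandit, for example, honors # nosec comments; a short sketch with the justification recorded next to the suppression:

import subprocess

def current_branch():
    # Bandit B603/B607 is a false positive here: fixed argument list, no shell, trusted binary.
    return subprocess.check_output(["git", "rev-parse", "--abbrev-ref", "HEAD"])  # nosec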

Provide Context

When checks fail, provide actionable information: which rule fired, where in the code, why it matters, and how to fix it.

Generic “security check failed” teaches nothing. Specific guidance enables fixes.
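
One way to get there is to post-process scanner output into a message a developer can act on. A sketch, assuming Semgrep's --json output format; the job and wording are illustrative:

import json
import subprocess
import sys

# Run the scan and capture machine-readable results.
scan = subprocess.run(
    ["semgrep", "--config=auto", "--json", "src/"],
    stdout=subprocess.PIPE, universal_newlines=True,
)
findings = json.loads(scan.stdout).get("results", [])

for finding in findings:
    # Say what fired, where, and why it matters, not just "security check failed".
    print(f"{finding['path']}:{finding['start']['line']}  {finding['check_id']}")
    print(f"  {finding['extra']['message']}")

sys.exit(1 if findings else 0)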

Enable Self-Service

Developers shouldn’t wait for the security team to unblock them. Provide documentation for each check, tooling they can run locally, and a clear process for requesting exceptions.

The security team enables developers; it doesn’t gate them.

Measure and Improve

Track metrics: how often checks pass, false-positive rates, time from detection to fix, and how much time security jobs add to the pipeline.

Use the metrics to improve the program. Reduce friction while maintaining coverage.
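
Even a small script over an exported findings list answers the basic questions. A sketch over a hypothetical export with opened, closed, and false_positive fields:

from datetime import date
from statistics import mean

# Hypothetical export: one record per finding from the scanners above.
findings = [
    {"opened": date(2017, 7, 3), "closed": date(2017, 7, 5), "false_positive": False},
    {"opened": date(2017, 7, 4), "closed": date(2017, 7, 4), "false_positive": True},
    {"opened": date(2017, 7, 10), "closed": None, "false_positive": False},
]

fixed = [f for f in findings if f["closed"] and not f["false_positive"]]
false_positive_rate = sum(f["false_positive"] for f in findings) / len(findings)
mean_days_to_fix = mean((f["closed"] - f["opened"]).days for f in fixed)

print("false positive rate:", false_positive_rate)
print("mean days to fix:", mean_days_to_fix)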

Beyond Automation

Automation handles known patterns. Human expertise handles everything else.

Threat modeling: Understanding what could go wrong requires human thinking.

Design review: Evaluating architecture decisions for security implications.

Penetration testing: Creative attack simulation finds what automation misses.

Incident response: Handling actual security events.

Automation frees security engineers for high-value work. It doesn’t replace them.

Key Takeaways