Use Cases & Scenarios

Real-world examples of how greengate fits into different team workflows.


1. SaaS startup — post-deploy Lighthouse health check

Situation: You ship helpdeck-landing to production on every merge to main. You want to know immediately if a deploy breaks your Lighthouse scores.

How it works: Run Lighthouse after deploy so it tests the new live code.

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run build
      - run: ./scripts/deploy.sh   # your deploy script

  lighthouse:
    needs: deploy                  # waits for deploy to finish
    runs-on: ubuntu-latest
    steps:
      - name: Install greengate
        run: |
          curl -sL https://github.com/thinkgrid-labs/greengate/releases/latest/download/greengate-linux-amd64 \
            -o /usr/local/bin/greengate && chmod +x /usr/local/bin/greengate

      - name: Lighthouse post-deploy audit
        env:
          PAGESPEED_API_KEY: ${{ secrets.PAGESPEED_API_KEY }}
        run: |
          greengate lighthouse \
            --url https://yourapp.com \
            --min-performance 85 \
            --min-accessibility 90
```

What it catches: Image regressions, unoptimized bundles, missing meta tags, accidental noindex — after every production deploy.
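The `PAGESPEED_API_KEY` secret suggests the audit runs through Google's PageSpeed Insights API, which reports Lighthouse category scores as fractions between 0.0 and 1.0. A minimal sketch of the threshold check (the `check_scores` helper is illustrative, not greengate's code; only the response shape follows the public PSI v5 API):

```python
def check_scores(psi_response: dict, thresholds: dict) -> list[str]:
    """Return a failure message for every category scoring below its minimum.

    PSI v5 reports category scores as fractions (0.0-1.0); the CLI flags
    above use the familiar 0-100 scale, so convert before comparing.
    """
    categories = psi_response["lighthouseResult"]["categories"]
    failures = []
    for name, minimum in thresholds.items():
        score = round(categories[name]["score"] * 100)
        if score < minimum:
            failures.append(f"{name}: {score} < {minimum}")
    return failures


# Shape of a (heavily truncated) PSI v5 response:
sample = {"lighthouseResult": {"categories": {
    "performance": {"score": 0.82},
    "accessibility": {"score": 0.93},
}}}
print(check_scores(sample, {"performance": 85, "accessibility": 90}))
# A score of 82 fails the --min-performance 85 gate; 93 passes accessibility.
```

If any category misses its floor, greengate exits non-zero and the job fails, which is what turns a score into a gate.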


2. SaaS startup — pre-merge gate with staging environment

Situation: You have a staging server. You want to block merges if a PR degrades performance before it hits production.

How it works: Deploy to staging first, then run Lighthouse against the staging URL.

```yaml
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    outputs:
      url: ${{ steps.deploy.outputs.url }}
    steps:
      - uses: actions/checkout@v4
      - run: npm run build
      - name: Deploy to staging
        id: deploy
        run: |
          ./scripts/deploy-staging.sh
          echo "url=https://staging.yourapp.com" >> $GITHUB_OUTPUT
```

  lighthouse-gate:
    needs: deploy-staging
    runs-on: ubuntu-latest
    steps:
      - name: Install greengate
        run: |
          curl -sL https://github.com/thinkgrid-labs/greengate/releases/latest/download/greengate-linux-amd64 \
            -o /usr/local/bin/greengate && chmod +x /usr/local/bin/greengate

      - name: Lighthouse pre-merge gate
        env:
          PAGESPEED_API_KEY: ${{ secrets.PAGESPEED_API_KEY }}
        run: |
          greengate lighthouse \
            --url "${{ needs.deploy-staging.outputs.url }}" \
            --strategy mobile \
            --min-performance 80 \
            --min-accessibility 90

What it catches: Regressions in the PR's code before they merge to main.


3. Preventing secrets from being committed

Situation: A developer accidentally hardcodes an API key and pushes it. You want to catch this before it ever reaches the remote.

Two-layer approach — local + CI:

```bash
# Layer 1: catch it locally before git push
greengate install-hooks
```

This installs a pre-commit hook that scans staged files on every git commit. It fails the commit if a secret is detected.

```yaml
# Layer 2: catch it in CI as a safety net
- name: Secret scan
  run: greengate scan
```

What it catches: AWS keys, Stripe secrets, GitHub tokens, GCP service account JSON, .env values, high-entropy strings — 26 built-in patterns.
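Several of those formats are recognizable by shape alone. The sketch below shows the pattern-matching half of the idea with three well-known secret shapes; it is a toy subset for illustration, not greengate's 26-pattern ruleset:

```python
import re

# Three well-known secret shapes (a toy subset, not greengate's ruleset)
PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-pat": re.compile(r"\bghp_[0-9A-Za-z]{36}\b"),
    "stripe-live-key": re.compile(r"\bsk_live_[0-9A-Za-z]{24,}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any secret patterns that match text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan_text('AWS_KEY = "AKIAIOSFODNN7EXAMPLE"'))
# → ['aws-access-key-id']  (the classic AWS documentation example key)
```

Pattern matching catches well-formed keys; the entropy check covers secrets with no fixed prefix.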


4. Kubernetes team — manifest quality gate

Situation: Your team ships microservices with Kubernetes manifests. You want to block deployments that are missing resource limits, health probes, or use the latest image tag.

```yaml
- name: Lint Kubernetes manifests
  run: greengate lint --dir ./k8s
```

With .greengate.toml:

```toml
[lint]
target_dir = "./k8s"
```

```bash
greengate lint   # reads config automatically
```

What it catches: Missing `resources.limits`, missing `livenessProbe`/`readinessProbe`, `image: nginx:latest` (unpinned), containers running as root.
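In code form, those per-container checks look roughly like this (a hand-rolled sketch over a manifest already parsed from YAML into a dict, not greengate's implementation):

```python
def lint_container(container: dict) -> list[str]:
    """Flag the problems listed above in one container spec (parsed manifest dict)."""
    issues = []
    if "limits" not in container.get("resources", {}):
        issues.append("missing resources.limits")
    for probe in ("livenessProbe", "readinessProbe"):
        if probe not in container:
            issues.append(f"missing {probe}")
    image = container.get("image", "")
    if ":" not in image or image.endswith(":latest"):
        issues.append(f"unpinned image tag: {image!r}")
    if container.get("securityContext", {}).get("runAsNonRoot") is not True:
        issues.append("may run as root (runAsNonRoot not set)")
    return issues

# A bare-bones container spec trips every check:
print(lint_container({"name": "web", "image": "nginx:latest"}))
```

A spec with pinned image, limits, both probes, and `runAsNonRoot: true` returns an empty list, so the gate passes.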


5. Full security pipeline — Next.js / React app

Situation: You run a React frontend with a Node.js backend. You want secrets, SAST, dependency CVEs, and coverage all gated in one pipeline.

```yaml
- name: Secret & SAST scan
  run: greengate scan --annotate   # annotates PRs via GitHub Check Runs

- name: Dependency audit (OSV)
  run: greengate audit             # checks Cargo.lock / package-lock.json

- name: Coverage gate
  run: greengate coverage --file coverage/lcov.info --min 80

- name: React perf regression gate
  if: hashFiles('output/current.perf') != ''
  run: greengate reassure
```

What it catches across the pipeline:

| Step | Catches |
| --- | --- |
| `scan` | Hardcoded secrets, XSS, `eval()`, command injection in JS/TS |
| `audit` | Known CVEs in npm/Cargo dependencies |
| `coverage` | Test coverage silently dropping below threshold |
| `reassure` | React component render time regressing between PRs |
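The coverage gate reads an LCOV trace file, which is plain text: one `DA:<line>,<hit-count>` record per instrumented line. A minimal re-derivation of the overall percentage (illustrative; greengate's parser will be more thorough):

```python
def lcov_line_coverage(lcov_text: str) -> float:
    """Percent of instrumented lines hit at least once, across all SF records."""
    total = covered = 0
    for line in lcov_text.splitlines():
        if line.startswith("DA:"):
            _, hits = line[3:].split(",")[:2]   # DA:<line>,<hits>[,<checksum>]
            total += 1
            covered += int(hits) > 0
    return 100.0 * covered / total if total else 0.0

report = """SF:src/app.js
DA:1,2
DA:2,0
DA:3,1
DA:4,5
end_of_record
"""
print(lcov_line_coverage(report))   # 3 of 4 instrumented lines hit -> 75.0
```

With `--min 80`, a report like this one would fail the gate at 75%.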

6. Solo developer — minimal setup

Situation: You're shipping solo, no staging environment. You want the basics without overhead.

.greengate.toml:

```toml
[scan]
entropy = true
entropy_threshold = 4.5

[coverage]
file = "coverage/lcov.info"
min = 70.0
```
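The `entropy_threshold` is in bits per character of Shannon entropy, which is how secret scanners typically flag random key material: generated tokens pack many distinct characters into a short string, while prose repeats a small alphabet. The formula below is the standard one; whether greengate computes it exactly this way is an assumption:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, from the string's own character frequencies."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

print(shannon_entropy("the quick brown fox"))               # prose: well under 4.5
print(shannon_entropy("abcdefghijklmnopqrstuvwxyz012345"))  # 32 distinct chars: exactly 5.0
```

Raising the threshold above 4.5 reduces false positives on dense identifiers at the cost of missing shorter secrets.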

GitHub Actions:

```yaml
- name: Install greengate
  run: |
    curl -sL https://github.com/thinkgrid-labs/greengate/releases/latest/download/greengate-linux-amd64 \
      -o /usr/local/bin/greengate && chmod +x /usr/local/bin/greengate

- name: Scan
  run: greengate scan

- name: Audit
  run: greengate audit

- name: Coverage
  run: greengate coverage
```

Locally:

```bash
greengate install-hooks   # catch secrets before they leave your machine
```

7. Engineering team — PR review intelligence gate

Situation: PRs are going out with untested new code and no consistent estimate of review effort. You want instant feedback on every PR: exactly which newly added lines lack test coverage, and an estimated review time so reviewers can plan their load.

What greengate review outputs:

  • Complexity Score — estimated review time based on lines added/removed, files touched, and cyclomatic complexity of added code
  • New-code coverage gaps — cross-references the diff against your LCOV report and pinpoints exactly which added lines are not covered

GitHub Actions:

```yaml
- name: Install greengate
  run: |
    curl -sL https://github.com/thinkgrid-labs/greengate/releases/latest/download/greengate-linux-amd64 \
      -o /usr/local/bin/greengate && chmod +x /usr/local/bin/greengate

- name: PR Review (Complexity + Coverage Gaps)
  if: github.event_name == 'pull_request'
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    GITHUB_REPOSITORY: ${{ github.repository }}
    GITHUB_SHA: ${{ github.sha }}
  run: |
    greengate review \
      --base "${{ github.event.pull_request.base.sha }}" \
      --coverage-file coverage/lcov.info \
      --min-coverage 80 \
      --annotate
```

Example output:

```
╔══ PR Review ════════════════════════════════╗
  Complexity Score : 47  (Normal Review ~23 min)
  Files changed    : 5
  Lines added/del  : +120 / -34
  Cyclomatic nodes : 18
╚═════════════════════════════════════════════╝

New-Code Coverage: 73.3%  ✗ (target: 80%)

  src/engine.rs      12/15 added lines covered  (80.0%) ✓
  src/scanner.rs      6/11 added lines covered  (54.5%) ✗
    Uncovered lines: 88, 89, 92, 95, 101
```

With `--annotate`, results are posted as GitHub Check Run annotations directly on the diff lines, and a summary comment is added to the PR.

Without a coverage file: `greengate review` still outputs the Complexity Score and exits 0, which is useful when you just want review-time estimates without a coverage gate.
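The score's inputs are listed above (lines added and removed, files touched, cyclomatic complexity of added code), but greengate's actual weighting is not documented here. The sketch below shows the shape of such a heuristic; every weight in it is invented for illustration:

```python
def complexity_score(added: int, removed: int, files: int, cyclomatic: int) -> int:
    """Hypothetical review-effort heuristic; the weights are invented, not greengate's."""
    return (added + removed) // 5 + 2 * files + cyclomatic // 2

def estimated_minutes(score: int) -> int:
    """One plausible score-to-minutes mapping. score // 2 would reproduce the
    ~23 min shown for a score of 47 above, but the real mapping is undocumented."""
    return score // 2

# Using the figures from the sample output (+120 / -34, 5 files, 18 nodes):
score = complexity_score(added=120, removed=34, files=5, cyclomatic=18)
print(score, estimated_minutes(score))
```

The point is the structure, not the constants: churn, spread across files, and branching all push the estimate up, and a `complexity_budget` turns the estimate into a hard gate.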

.greengate.toml config:

```toml
[review]
min_new_code_coverage = 80    # fail if new-code coverage drops below this
complexity_budget = 0         # 0 = warn only; >0 = fail when score exceeds budget
```

What it catches:

  • New code with no tests: exact uncovered line numbers in the diff
  • PR too large to review safely: Complexity Score > 100 → "Large PR — consider splitting"
  • Reviewer overload: estimated review time in the PR comment

Key principle: Lighthouse needs a deployed URL

Lighthouse tests whatever is live at the URL you provide. This means:

  • Run it after your deploy step — not before
  • For a merge gate, deploy to staging first, then test the staging URL
  • For a health check, deploy to production, then test the production URL

The threshold you set (`--min-performance 80`) is your quality floor. If the deployed site scores below it, the pipeline fails.

Released under the MIT License.