plan files

Elizabeth W
2026-04-19 22:12:00 -06:00
parent 89b3586030
commit 963e020efa
14 changed files with 238 additions and 0 deletions
@@ -0,0 +1,19 @@
# Implementation Plan: Base ClusterWorkflowTemplate
## Objective
Create the foundational Argo `ClusterWorkflowTemplate` for the security pipeline. It must use semantic versioning (e.g., `amp-security-pipeline-v1.0.0`) so projects can pin to a stable version.
## Requirements
- Define a `ClusterWorkflowTemplate` resource.
- Name the template with a semver tag (e.g., `name: amp-security-pipeline-v1.0.0`).
- Define inputs/parameters:
- `working-dir` (default: `.`)
- `fail-on-cvss` (default: `7.0`)
- `repo-url` (required)
- `git-revision` (default: `main`)
- Define the DAG (Directed Acyclic Graph) structure that will orchestrate the phases (Clone -> Parallel Scanners -> Sinks/Enforcement).
## Agent Instructions
1. Create `helm/templates/clusterworkflowtemplate.yaml`.
2. Ensure the template is structured to accept the parameters and orchestrate downstream DAG tasks.
3. Keep the actual task implementations (like git clone or scanners) as empty stubs for now; they will be filled by subsequent steps.
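A minimal skeleton for this plan might look like the following. The entrypoint name `main` and the stub bodies are assumptions; only two DAG tasks are shown, with the rest following the same pattern:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  entrypoint: main            # entrypoint name is an assumption
  arguments:
    parameters:
      - name: repo-url        # required: intentionally no default
      - name: git-revision
        value: main
      - name: working-dir
        value: "."
      - name: fail-on-cvss
        value: "7.0"
  templates:
    - name: main
      dag:
        tasks:
          - name: clone-repo
            template: clone-repo
          - name: scan-trufflehog
            template: scan-trufflehog
            depends: clone-repo
          # ...remaining scanners, sink tasks, and enforce-policy follow the same pattern
    - name: clone-repo        # empty stub, filled in by a later step
      container:
        image: alpine/git
        command: [sh, -c, "true"]
    - name: scan-trufflehog   # empty stub
      container:
        image: alpine
        command: [sh, -c, "true"]
```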
@@ -0,0 +1,15 @@
# Implementation Plan: Shared PVC Workspace & Git Clone
## Objective
Implement a shared Persistent Volume Claim (PVC) strategy to ensure the repository is only cloned once and all parallel scanners can access the same codebase without re-downloading it.
## Requirements
- Use Argo Workflows `volumeClaimTemplates` to define a temporary PVC for the workflow duration.
- Create a `clone-repo` task in the DAG.
- The `clone-repo` task should use a standard git image (e.g., `alpine/git`) to clone the `repo-url` at `git-revision` into the shared PVC mounted at `/workspace`.
- Ensure all subsequent tasks will mount this PVC at `/workspace`.
## Agent Instructions
1. Modify the `ClusterWorkflowTemplate` to add the `volumeClaimTemplates`.
2. Add the `clone-repo` task template that executes `git clone`.
3. Configure the DAG so the parallel scanning steps depend on the successful completion of `clone-repo`.
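A sketch of the PVC and clone task, assuming the claim is named `workspace` (the storage size is a placeholder to tune per repository). Cloning then checking out keeps arbitrary revisions (tags, SHAs) working, unlike `git clone --branch`:

```yaml
spec:
  volumeClaimTemplates:
    - metadata:
        name: workspace
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 1Gi        # size is an assumption
  templates:
    - name: clone-repo
      container:
        image: alpine/git
        command: [sh, -ec]
        args:
          - |
            git clone "{{workflow.parameters.repo-url}}" /workspace
            cd /workspace
            git checkout "{{workflow.parameters.git-revision}}"
        volumeMounts:
          - name: workspace
            mountPath: /workspace
```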
@@ -0,0 +1,14 @@
# Implementation Plan: Infisical Secrets Injection InitContainer
## Objective
Ensure that Infisical secrets are injected as **Environment Variables** securely before any main container logic runs in the Argo Workflows steps.
## Requirements
- Use the Infisical Kubernetes operator approach.
- Add the necessary Infisical annotations (e.g., `secrets.infisical.com/auto-reload: "true"`) to the pod metadata templates.
- **Crucial:** Because Argo Workflows pods start quickly, inject an `initContainer` into every task that requires secrets. The initContainer runs a simple polling script (e.g., a loop that checks whether an expected environment variable exists) to hold off the main container until the Infisical mutating webhook has successfully injected the environment variables.
## Agent Instructions
1. Create a reusable snippet or template property for the `initContainer` wait logic.
2. Apply the required Infisical annotations to the `ClusterWorkflowTemplate`'s `podSpecPatch` or task metadata.
3. Document which steps will require which secrets (e.g., DefectDojo API keys, Socket.dev keys).
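The reusable wait logic could be sketched as below, following the plan's polling approach. The variable name `DD_API_KEY`, the image, and the timeout are all assumptions; each task would check for whichever variable it expects:

```yaml
initContainers:
  - name: wait-for-secrets
    image: busybox:1.36
    command: [sh, -c]
    args:
      - |
        # Poll for the expected injected variable (name is illustrative).
        # Bounded at ~60s so a missing webhook fails loudly instead of hanging.
        i=0
        until [ -n "$DD_API_KEY" ] || [ "$i" -ge 30 ]; do
          echo "waiting for injected secrets..."; sleep 2; i=$((i+1))
        done
        [ -n "$DD_API_KEY" ] || { echo "secrets never appeared" >&2; exit 1; }
```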
@@ -0,0 +1,17 @@
# Implementation Plan: TruffleHog Scanner
## Objective
Implement the TruffleHog secrets scanning step as a parallel task in the DAG.
## Requirements
- Define a task template named `scan-trufflehog`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Run TruffleHog against the `/workspace` directory.
- Configure TruffleHog to output its findings in JSON or SARIF format.
- Save the output to `/workspace/reports/trufflehog.json` (or `.sarif`).
- Ensure the task exits successfully (exit code 0) even if secrets are found, so the pipeline can proceed to the aggregation step (Phase 3). Use `continueOn` (`failed: true`) on the DAG task, or a wrapper script like `trufflehog ... || true`.
## Agent Instructions
1. Add the `scan-trufflehog` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
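A sketch of the task template, assuming the upstream TruffleHog image ships a shell:

```yaml
- name: scan-trufflehog
  container:
    image: trufflesecurity/trufflehog:latest
    command: [sh, -c]
    args:
      - |
        mkdir -p /workspace/reports
        # `|| true` keeps the exit code 0 so aggregation still runs
        trufflehog filesystem /workspace --json \
          > /workspace/reports/trufflehog.json || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```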
@@ -0,0 +1,17 @@
# Implementation Plan: Semgrep Scanner
## Objective
Implement the Semgrep SAST (Static Application Security Testing) scanning step as a parallel task in the DAG.
## Requirements
- Define a task template named `scan-semgrep`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Run Semgrep with standard or configurable rulesets against the `/workspace` directory.
- Output findings in SARIF format.
- Save the output to `/workspace/reports/semgrep.sarif`.
- Ensure the task exits successfully even if vulnerabilities are found, so Phase 3 aggregation can run (e.g., wrap in a script that returns 0).
## Agent Instructions
1. Add the `scan-semgrep` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
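A sketch of the task template; `--config auto` stands in for whatever ruleset the pipeline ultimately pins:

```yaml
- name: scan-semgrep
  container:
    image: returntocorp/semgrep:latest
    command: [sh, -c]
    args:
      - |
        mkdir -p /workspace/reports
        semgrep scan --config auto --sarif \
          --output /workspace/reports/semgrep.sarif /workspace || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```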
@@ -0,0 +1,17 @@
# Implementation Plan: KICS IaC Scanner
## Objective
Implement the KICS (Keeping Infrastructure as Code Secure) scanning step as a parallel task in the DAG.
## Requirements
- Define a task template named `scan-kics`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Run KICS against the `/workspace` directory (or the specific `working-dir` parameter).
- Output findings in SARIF and/or JSON format.
- Save the output to `/workspace/reports/kics.sarif`.
- Ensure the task exits successfully even if issues are found, to allow Phase 3 aggregation (e.g., wrap with `|| true`).
## Agent Instructions
1. Add the `scan-kics` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
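A sketch of the task template. KICS exits non-zero when issues are found, hence the `|| true`; the `working-dir` parameter is joined onto the workspace path:

```yaml
- name: scan-kics
  container:
    image: checkmarx/kics:latest
    command: [sh, -c]
    args:
      - |
        mkdir -p /workspace/reports
        kics scan -p "/workspace/{{workflow.parameters.working-dir}}" \
          -o /workspace/reports --report-formats sarif \
          --output-name kics || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```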
@@ -0,0 +1,19 @@
# Implementation Plan: Socket.dev Scanner
## Objective
Implement the Socket.dev supply chain security scanning step as a parallel task in the DAG.
## Requirements
- Define a task template named `scan-socketdev`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Expect the Socket.dev API key to be injected via Infisical as an environment variable (use the initContainer wait logic from Phase 1 Step 3).
- Run the Socket CLI against the dependency manifests in `/workspace`.
- Output findings in a standard format (JSON/SARIF).
- Save the output to `/workspace/reports/socketdev.json`.
- Ensure the task exits successfully (e.g. `|| true`) to allow Phase 3 aggregation.
## Agent Instructions
1. Add the `scan-socketdev` template to the `ClusterWorkflowTemplate`.
2. Configure the Infisical initContainer logic for this specific step to wait for the API key.
3. Wire it into the DAG alongside the other scanners.
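A rough sketch of the task template. The image, the exact CLI invocation, and the API-key variable name are all assumptions to verify against the current Socket CLI documentation:

```yaml
- name: scan-socketdev
  # SOCKET_SECURITY_API_KEY (name is an assumption) arrives via Infisical injection
  container:
    image: node:20-alpine      # assumption: Socket CLI is installed via npm
    command: [sh, -c]
    args:
      - |
        mkdir -p /workspace/reports
        # Invocation is illustrative; confirm flags against the Socket CLI docs
        npx @socketsecurity/cli scan create /workspace \
          > /workspace/reports/socketdev.json || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```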
@@ -0,0 +1,18 @@
# Implementation Plan: Syft & Grype Scanner
## Objective
Implement the SBOM generation (Syft) and vulnerability scanning (Grype) step as a parallel task in the DAG.
## Requirements
- Define a task template named `scan-syft-grype`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Step A: Run Syft against `/workspace` to generate an SBOM (SPDX/CycloneDX format) -> `/workspace/reports/sbom.json`.
- Step B: Run Grype against the generated SBOM (or the workspace directly) to find vulnerabilities.
- Output Grype findings in SARIF format.
- Save the Grype output to `/workspace/reports/grype.sarif`.
- Ensure the task exits successfully (`|| true`) to allow Phase 3 aggregation.
## Agent Instructions
1. Add the `scan-syft-grype` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
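A sketch of the two-step task, assuming an image that ships both CLIs (the upstream `syft` and `grype` images are separate, so a custom or scripted image may be needed):

```yaml
- name: scan-syft-grype
  container:
    image: anchore/grype:latest   # assumption: syft is also on this image
    command: [sh, -c]
    args:
      - |
        mkdir -p /workspace/reports
        # Step A: SBOM, Step B: scan the SBOM
        syft dir:/workspace -o spdx-json > /workspace/reports/sbom.json || true
        grype sbom:/workspace/reports/sbom.json -o sarif \
          > /workspace/reports/grype.sarif || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```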
@@ -0,0 +1,18 @@
# Implementation Plan: Pulumi Crossguard
## Objective
Implement the Pulumi Crossguard policy enforcement step as a parallel task in the DAG.
## Requirements
- Define a task template named `scan-crossguard`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Expect Pulumi credentials and cloud provider credentials (e.g., AWS/GCP) to be injected via Infisical as environment variables (using the initContainer logic).
- Run `pulumi preview --policy-pack <path>` inside the `/workspace`.
- Capture the output and convert/save it into a structured JSON/SARIF format at `/workspace/reports/crossguard.json`.
- Ensure the task exits successfully (`|| true`) to allow Phase 3 aggregation.
## Agent Instructions
1. Add the `scan-crossguard` template to the `ClusterWorkflowTemplate`.
2. Configure the Infisical initContainer to wait for Pulumi and Cloud credentials.
3. Wire it into the DAG alongside the other scanners.
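A sketch of the task template. The policy-pack path is a placeholder, and the step assumes a Pulumi stack is already selected (plus `PULUMI_ACCESS_TOKEN` and cloud credentials injected via Infisical):

```yaml
- name: scan-crossguard
  # PULUMI_ACCESS_TOKEN and cloud credentials arrive via Infisical (initContainer wait)
  container:
    image: pulumi/pulumi:latest
    workingDir: /workspace
    command: [sh, -c]
    args:
      - |
        mkdir -p /workspace/reports
        # ./policy is an assumed policy-pack path; --json gives structured output
        pulumi preview --policy-pack ./policy --json \
          > /workspace/reports/crossguard.json || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```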
@@ -0,0 +1,16 @@
# Implementation Plan: Long-Term Storage Upload
## Objective
Implement an aggregation task that uploads all generated reports from the PVC to long-term storage (e.g., S3/MinIO) for audit trails and historical review.
## Requirements
- Define a task template named `upload-storage`.
- Depend on the successful completion of **all** parallel scanner tasks (Phase 2).
- Mount the shared PVC at `/workspace`.
- Expect S3/MinIO credentials to be injected as environment variables via Infisical (with initContainer wait logic).
- Use a CLI (like `aws s3 cp` or `mc`) to sync the `/workspace/reports/` directory to a designated bucket, keyed by repository name, date, and commit hash.
## Agent Instructions
1. Add the `upload-storage` template to the `ClusterWorkflowTemplate`.
2. Configure the DAG dependencies so it waits for all scanners.
3. Configure the Infisical initContainer to wait for the storage credentials.
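A sketch of the upload task using the AWS CLI. `REPORTS_BUCKET`, `REPO_NAME`, and `COMMIT_SHA` are assumed variables standing in for however the pipeline supplies the bucket and key components:

```yaml
- name: upload-storage
  # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (or MinIO equivalents) via Infisical
  container:
    image: amazon/aws-cli:latest
    command: [sh, -ec]
    args:
      - |
        # Key layout follows the plan: repo / date / commit
        DEST="s3://${REPORTS_BUCKET}/${REPO_NAME}/$(date +%F)/${COMMIT_SHA}/"
        aws s3 cp --recursive /workspace/reports/ "$DEST"
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```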
@@ -0,0 +1,17 @@
# Implementation Plan: DefectDojo Upload
## Objective
Implement a task that pushes all SARIF/JSON reports from the PVC to DefectDojo via its API.
## Requirements
- Define a task template named `upload-defectdojo`.
- Depend on the completion of all parallel scanner tasks (Phase 2).
- Mount the shared PVC at `/workspace`.
- Expect DefectDojo API keys and URL to be injected as environment variables via Infisical (with initContainer wait logic).
- Iterate over the `/workspace/reports/` directory.
- For each file, make an API request to DefectDojo to import the scan results (mapping the file type to the correct DefectDojo parser, e.g., SARIF -> Generic SARIF).
## Agent Instructions
1. Add the `upload-defectdojo` template to the `ClusterWorkflowTemplate`.
2. Write the API upload script (Python, curl, or a dedicated CLI) in the task template.
3. Configure the Infisical initContainer to wait for the DefectDojo credentials.
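A curl-based sketch of the upload loop against DefectDojo's `import-scan` endpoint. The env var names (`DD_URL`, `DD_API_KEY`, `DD_ENGAGEMENT_ID`) are assumptions, and only the SARIF mapping is shown; other report types would need their own `scan_type` cases:

```yaml
- name: upload-defectdojo
  # DD_URL, DD_API_KEY, DD_ENGAGEMENT_ID (names are assumptions) via Infisical
  container:
    image: curlimages/curl:latest
    command: [sh, -ec]
    args:
      - |
        for f in /workspace/reports/*; do
          case "$f" in
            *.sarif) scan_type="SARIF" ;;   # DefectDojo's SARIF parser
            *)       continue ;;            # map other formats explicitly
          esac
          curl -sf -H "Authorization: Token ${DD_API_KEY}" \
            -F "scan_type=${scan_type}" \
            -F "engagement=${DD_ENGAGEMENT_ID}" \
            -F "file=@${f}" \
            "${DD_URL}/api/v2/import-scan/"
        done
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```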
@@ -0,0 +1,18 @@
# Implementation Plan: Policy Enforcement
## Objective
Implement the final task that parses the aggregated results and decides whether to Pass or Fail the Argo Workflow based on the `fail-on-cvss` input threshold.
## Requirements
- Define a task template named `enforce-policy`.
- Depend on the completion of the upload tasks (Phase 3 Steps 1 & 2).
- Mount the shared PVC at `/workspace`.
- Read the input parameter `fail-on-cvss` (e.g., `7.0`).
- Run a script (Python, jq, etc.) to parse all the reports in `/workspace/reports/`.
- If any vulnerability is found with a CVSS score >= the threshold, print an error summary and exit with a non-zero code (causing the Argo Workflow to fail).
- If no vulnerabilities exceed the threshold, print a success summary and exit with 0.
## Agent Instructions
1. Add the `enforce-policy` template to the `ClusterWorkflowTemplate`.
2. Write the parsing logic inside the task (e.g., extracting CVSS scores from SARIF and JSON formats).
3. Ensure this step acts as the final gatekeeper for the pipeline.
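The SARIF side of the parsing logic could be sketched as below. It reads the CVSS-like score from the `security-severity` rule property; the JSON reports from other tools would need their own extractors, so this is a starting point rather than the full gatekeeper:

```python
import json
import pathlib
import sys


def max_cvss(report_dir: str) -> float:
    """Return the highest security-severity score found across SARIF reports."""
    worst = 0.0
    for path in pathlib.Path(report_dir).glob("*.sarif"):
        sarif = json.loads(path.read_text())
        for run in sarif.get("runs", []):
            rules = run.get("tool", {}).get("driver", {}).get("rules", [])
            by_id = {rule.get("id"): rule for rule in rules}
            for result in run.get("results", []):
                rule = by_id.get(result.get("ruleId"), {})
                score = rule.get("properties", {}).get("security-severity")
                if score is not None:
                    worst = max(worst, float(score))
    return worst


def enforce(report_dir: str, threshold: float) -> int:
    """Return non-zero when any finding meets or exceeds the CVSS threshold."""
    worst = max_cvss(report_dir)
    if worst >= threshold:
        print(f"FAIL: worst severity {worst} >= threshold {threshold}")
        return 1
    print(f"PASS: worst severity {worst} < threshold {threshold}")
    return 0


if __name__ == "__main__":
    sys.exit(enforce("/workspace/reports", float(sys.argv[1])))
```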
@@ -0,0 +1,16 @@
# Implementation Plan: Renovate Bot Preset
## Objective
Create a centralized `renovate.json` (or `default.json`) preset in this repository that other projects can easily inherit to get standardized auto-merge and grouping behavior.
## Requirements
- Create a file at `renovate-preset/default.json` (or similar path).
- Configure auto-merge for patch and minor versions of dependencies.
- Enable grouping for monorepo packages (e.g., all `@babel/*` updates grouped into one PR).
- Configure the schedule (e.g., run on weekends or early mornings).
- Configure the severity levels for when notifications/PRs should block.
- Document how other repositories can `extend` this preset in their own `renovate.json` (e.g., `"extends": ["github>my-org/my-repo//renovate-preset"]`).
## Agent Instructions
1. Create the base Renovate configuration file.
2. Add a `README.md` to the `renovate-preset` directory explaining how to use it.
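A sketch of the preset satisfying these requirements (the schedule and grouping pattern are illustrative choices, not mandated values):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "schedule": ["every weekend"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    },
    {
      "matchPackagePatterns": ["^@babel/"],
      "groupName": "babel packages"
    }
  ]
}
```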
@@ -0,0 +1,17 @@
# Implementation Plan: Renovate Bot CronJob / ArgoCD App
## Objective
Create the Kubernetes manifests to deploy Renovate Bot as a cluster-level service (CronJob) via ArgoCD, configured to scan repositories and open PRs (which will trigger the Phase 1-3 pipeline).
## Requirements
- Create Kubernetes manifests for a CronJob that runs the Renovate Bot Docker image.
- Expect Git Provider credentials (GitHub/GitLab token) to be injected as environment variables via Infisical (using standard operator annotations).
- Configure the CronJob to run periodically (e.g., hourly).
- Package this as an ArgoCD Application or a Helm chart located in `helm/renovate-bot/`.
- The configuration should instruct Renovate to scan the designated repositories and respect the presets defined in Phase 4 Step 1.
## Agent Instructions
1. Create the `helm/renovate-bot` directory.
2. Add the `CronJob`, `ServiceAccount`, and necessary RBAC manifests.
3. Configure the Infisical annotations for secrets injection.
4. Provide an `Application` manifest for ArgoCD to deploy it easily.
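A sketch of the CronJob manifest. The repository list is illustrative, and `RENOVATE_TOKEN` is assumed to arrive via the Infisical annotations rather than being declared here:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: renovate-bot
spec:
  schedule: "0 * * * *"          # hourly
  concurrencyPolicy: Forbid      # avoid overlapping runs
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: renovate-bot
          restartPolicy: Never
          containers:
            - name: renovate
              image: renovate/renovate:latest
              # RENOVATE_TOKEN is expected via Infisical injection
              env:
                - name: RENOVATE_AUTODISCOVER
                  value: "false"
                - name: RENOVATE_REPOSITORIES
                  value: "my-org/app-one,my-org/app-two"  # illustrative list
```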