# Compare commits

11 commits: `2225bb2045...7587c285e7`

7587c285e7, d8ee53395a, 5e31efd464, ebd29176d0, aa907060a4, a9224a41c1, 35d8630bf2, df10609df5, 8c2c420bff, 963e020efa, 89b3586030
---

# for the pipeline

## languages

#### The tools we are using to write this in and deploy it

- helm
- pulumi
- argo workflows?

## pipeline

#### The actual steps in the pipeline

- pulumi
- pulumi crossguard
- socket.dev
- argo workflows
- semgrep
- trufflehog
- syft (do we still need this if socket.dev or semgrep can generate the SBOM?)
- grype
- renovate bot
- kics (Keeping Infrastructure as Code Secure)

## k8s

#### Things I assume I need installed in my k8s cluster

- infisical
- argo workflows
- defectdojo

## repository

#### Things to set on the repository

- branch protection

## local

#### Things to add to my chezmoi install so they are always available, but that should also be documented as user prerequisites

- eslint-plugin-security
- gitleaks
- socket cli

## Might be needed

#### Things we might need; I am unsure whether the tools above sufficiently cover these security concerns

- trivy

# For homelab

## optional things

#### Things that will eventually exist in my homelab, but that this pipeline probably does not need

- harbor container registry
- suse security (neuvector)
- nexus package caching
---

# Improvement Plan: Refactor Infisical Secrets to Native CRD

## Objective

The previous implementation used a Mutating Webhook (Infisical Agent Injector) and an `initContainer` polling loop to wait for secrets to be injected into the Argo Workflow pods. Best practices indicate this causes race conditions and ArgoCD "OutOfSync" issues. We need to refactor the pipeline to use the native `InfisicalSecret` CRD and standard Kubernetes `secretKeyRef` environment variables.

## Requirements

- **Remove Webhook Logic**: Strip out any Infisical annotations (e.g., `secrets.infisical.com/auto-reload`) from the Argo Workflows pod metadata.
- **Remove initContainer**: Delete the `initContainer` polling logic that was waiting for environment variables to populate.
- **Create InfisicalSecret CRD**: Create a new Helm template (e.g., `helm/templates/infisical-secret.yaml`) defining an `InfisicalSecret` resource. This resource should sync the required secrets (Socket.dev API key, Pulumi credentials, S3/MinIO credentials, DefectDojo API keys) into a standard Kubernetes `Secret` (e.g., named `amp-security-pipeline-secrets`).
- **Update Workflow Tasks**: Modify the `ClusterWorkflowTemplate` (and any other files where tasks are defined). Instead of expecting the webhook to inject the secrets directly, configure the task containers to pull their required environment variables using native Kubernetes syntax:

  ```yaml
  env:
    - name: SOCKET_DEV_API_KEY
      valueFrom:
        secretKeyRef:
          name: amp-security-pipeline-secrets
          key: SOCKET_DEV_API_KEY
  ```

## Agent Instructions

1. Find and open the implemented `ClusterWorkflowTemplate` and task definition YAML files in `helm/templates/`.
2. Find and remove all instances of the `initContainer` secret-waiting logic.
3. Find and remove all Infisical mutating webhook annotations from the workflow/pod templates.
4. Create a new file `helm/templates/infisical-secret.yaml` defining the `InfisicalSecret` CRD. Make sure it targets the necessary secrets for Socket.dev, Pulumi, Storage, and DefectDojo.
5. Update the `scan-socketdev`, `scan-crossguard`, `upload-storage`, and `upload-defectdojo` tasks in the workflow template to use native `valueFrom: secretKeyRef` for their required environment variables, referencing the new native Kubernetes Secret.
6. Verify the YAML is valid and clean.
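As a rough sketch of the `InfisicalSecret` resource described above — field names follow the Infisical Kubernetes operator docs and may differ between operator versions; the project/environment slugs and credential Secret name are placeholders:

```yaml
apiVersion: secrets.infisical.com/v1alpha1
kind: InfisicalSecret
metadata:
  name: amp-security-pipeline
spec:
  # Machine-identity auth; the referenced Secret holds the client ID/secret
  authentication:
    universalAuth:
      secretsScope:
        projectSlug: my-project      # placeholder
        envSlug: prod                # placeholder
        secretsPath: "/"
      credentialsRef:
        secretName: universal-auth-credentials   # placeholder
        secretNamespace: default
  # The operator syncs the scoped secrets into this native Secret,
  # which the workflow tasks then read via secretKeyRef
  managedSecretReference:
    secretName: amp-security-pipeline-secrets
    secretNamespace: default
```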
---

## 2. Part 1: Local Development & Repository Tooling

### 2.1 Secret Scanning: Gitleaks (Local)

* **What it does:** Fast, static regex matching for secrets.
* **Where it runs:** Local developer machine (via pre-commit hook).
* **Detailed Rationale:** Developers make human errors. Gitleaks runs in milliseconds and acts as a "spell-check for secrets." It prevents accidentally committing a `.env` file or hardcoded token before it ever enters the local Git history.
* **Trade-offs:** It relies on the developer actively using the pre-commit hook. If a commit is forced (`--no-verify`), the local check is bypassed.

### 2.2 Supply Chain Defense: Socket CLI (Local Wrapper)

* **What it does:** Intercepts package installation to check for malicious code, typosquatting, and hijacked packages.
* **Where it runs:** Local machine (aliased: `alias pnpm="socket pnpm"`).

[…]

* **Detailed Rationale:** Traditional CVE scanners check for accidental developer mistakes. Socket checks for active malice (install scripts that steal SSH keys, typosquatting, hijacked maintainer accounts). Because AI agents regularly pull in new dependencies to solve coding problems, Socket ensures neither the local machine nor the pipeline executes malicious code during dependency resolution.
* **Trade-offs:** API-dependent. To conserve free-tier API quotas, the pipeline step must be strictly configured to trigger *only* when lockfiles (`pnpm-lock.yaml`) change, requiring careful CI optimization.

**outdated, using pulumi crossguard**

### 2.5 Infrastructure Validation (IaC): Checkov

* **What it does:** Parses Kubernetes manifests, Terraform, and Dockerfiles to ensure they adhere to security best practices.
* **Detailed Rationale:** A homelab exposed to the internet cannot afford basic infrastructure misconfigurations, such as running containers as `root` or mapping sensitive host volumes. Checkov acts as an automated senior cloud architect, validating the AI's generated Kubernetes manifests before Argo CD syncs them.
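The Gitleaks pre-commit hook described in 2.1 can be wired up through the standard `pre-commit` framework; gitleaks ships a hook definition, so a minimal `.pre-commit-config.yaml` looks like this (pin `rev` to a current release tag):

```yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.2   # pin to a released tag
    hooks:
      - id: gitleaks
```

After `pre-commit install`, every commit runs the scan locally; this is exactly the layer that `--no-verify` bypasses, which is why the same scan is repeated in the pipeline.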
---

# Implementation Plan: Base ClusterWorkflowTemplate

## Objective

Create the foundational Argo `ClusterWorkflowTemplate` for the security pipeline. It must use semantic versioning (e.g., `amp-security-pipeline-v1.0.0`) so projects can pin to a stable version.

## Requirements

- Define a `ClusterWorkflowTemplate` resource.
- Name the template with a semver tag (e.g., `name: amp-security-pipeline-v1.0.0`).
- Define inputs/parameters:
  - `working-dir` (default: `.`)
  - `fail-on-cvss` (default: `7.0`)
  - `repo-url` (required)
  - `git-revision` (default: `main`)
- Define the DAG (Directed Acyclic Graph) structure that will orchestrate the phases (Clone -> Parallel Scanners -> Sinks/Enforcement).

## Agent Instructions

1. Create `helm/templates/clusterworkflowtemplate.yaml`.
2. Ensure the template is structured to accept the parameters and orchestrate downstream DAG tasks.
3. Keep the actual task implementations (such as git clone or the scanners) as empty stubs for now; they will be filled in by subsequent steps.
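A minimal skeleton of what this could look like — the task list is abbreviated, and the template names simply mirror the plans in this changeset:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: repo-url            # required, no default
      - name: git-revision
        value: main
      - name: working-dir
        value: "."
      - name: fail-on-cvss
        value: "7.0"
  templates:
    - name: main
      dag:
        tasks:
          - name: clone-repo
            template: clone-repo
          - name: scan-semgrep
            template: scan-semgrep
            depends: clone-repo
          # ...the other scanners follow the same pattern...
          - name: upload-storage
            template: upload-storage
            depends: scan-semgrep   # in full: all scanner tasks
          - name: enforce-policy
            template: enforce-policy
            depends: upload-storage
    # task templates are stubbed out here and filled in by later steps
```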
---

# Implementation Plan: Shared PVC Workspace & Git Clone

## Objective

Implement a shared Persistent Volume Claim (PVC) strategy so the repository is cloned only once and all parallel scanners can access the same codebase without re-downloading it.

## Requirements

- Use Argo Workflows `volumeClaimTemplates` to define a temporary PVC for the workflow duration.
- Create a `clone-repo` task in the DAG.
- The `clone-repo` task should use a standard git image (e.g., Alpine/Git) to clone the `repo-url` at `git-revision` into the shared PVC mounted at `/workspace`.
- Ensure all subsequent tasks mount this PVC at `/workspace`.

## Agent Instructions

1. Modify the `ClusterWorkflowTemplate` to add the `volumeClaimTemplates`.
2. Add the `clone-repo` task template that executes `git clone`.
3. Configure the DAG so the parallel scanning steps depend on the successful completion of `clone-repo`.
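A sketch of the two pieces, assuming the `alpine/git` image. Note that `git clone --branch` accepts branch and tag names but not arbitrary commit SHAs; pinning to a SHA would need a clone followed by `git checkout`:

```yaml
spec:
  volumeClaimTemplates:
    - metadata:
        name: workspace
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: clone-repo
      container:
        image: alpine/git:latest
        command: [sh, -c]
        args:
          - >-
            git clone --depth 1
            --branch "{{workflow.parameters.git-revision}}"
            "{{workflow.parameters.repo-url}}" /workspace
        volumeMounts:
          - name: workspace
            mountPath: /workspace
```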
---

# Implementation Plan: Infisical Secrets Injection InitContainer

## Objective

Ensure that Infisical secrets are injected as **environment variables** securely before any main container logic runs in the Argo Workflows steps.

## Requirements

- Use the Infisical Kubernetes operator approach.
- Add the necessary Infisical annotations (e.g., `secrets.infisical.com/auto-reload: "true"`) to the pod metadata templates.
- **Crucial:** Because Argo Workflows pods start quickly, inject an `initContainer` into tasks that require secrets. This initContainer should run a simple polling script (e.g., a loop checking whether a specific expected environment variable exists) to pause the pod's main container execution until the Infisical mutating webhook has successfully injected the environment variables.

## Agent Instructions

1. Create a reusable snippet or template property for the `initContainer` wait logic.
2. Apply the required Infisical annotations to the `ClusterWorkflowTemplate`'s `podSpecPatch` or task metadata.
3. Document which steps will require which secrets (e.g., DefectDojo API keys, Socket.dev keys).
---

# Implementation Plan: TruffleHog Scanner

## Objective

Implement the TruffleHog secrets scanning step as a parallel task in the DAG.

## Requirements

- Define a task template named `scan-trufflehog`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Run TruffleHog against the `/workspace` directory.
- Configure TruffleHog to output its findings in JSON or SARIF format.
- Save the output to `/workspace/reports/trufflehog.json` (or `.sarif`).
- Ensure the task exits successfully (exit code 0) even if secrets are found, so the pipeline can proceed to the aggregation step (Phase 3). Use `continueOn` or a wrapper script like `trufflehog ... || true`.

## Agent Instructions

1. Add the `scan-trufflehog` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
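One way the task body could look, assuming the official `trufflesecurity/trufflehog` image (and that it ships a shell; otherwise the wrapper moves into a script template):

```yaml
- name: scan-trufflehog
  container:
    image: trufflesecurity/trufflehog:latest
    command: [sh, -c]
    args:
      - >-
        mkdir -p /workspace/reports &&
        trufflehog filesystem /workspace --json
        > /workspace/reports/trufflehog.json || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```

The `|| true` is what keeps the task green even on findings; enforcement is deferred to the policy gate in Phase 3.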
---

# Implementation Plan: Semgrep Scanner

## Objective

Implement the Semgrep SAST (Static Application Security Testing) scanning step as a parallel task in the DAG.

## Requirements

- Define a task template named `scan-semgrep`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Run Semgrep with standard or configurable rulesets against the `/workspace` directory.
- Output findings in SARIF format.
- Save the output to `/workspace/reports/semgrep.sarif`.
- Ensure the task exits successfully even if vulnerabilities are found, so Phase 3 aggregation can run (e.g., wrap in a script that returns 0).

## Agent Instructions

1. Add the `scan-semgrep` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
3. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
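A sketch of the task using the official Semgrep image and its built-in SARIF output (`--config auto` is the registry-driven default ruleset and can be swapped for a pinned config):

```yaml
- name: scan-semgrep
  container:
    image: semgrep/semgrep:latest
    command: [sh, -c]
    args:
      - >-
        mkdir -p /workspace/reports &&
        semgrep scan --config auto --sarif
        --output /workspace/reports/semgrep.sarif /workspace || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```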
---

# Implementation Plan: KICS IaC Scanner

## Objective

Implement the KICS (Keeping Infrastructure as Code Secure) scanning step as a parallel task in the DAG.

## Requirements

- Define a task template named `scan-kics`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Run KICS against the `/workspace` directory (or the specific `working-dir` parameter).
- Output findings in SARIF and/or JSON format.
- Save the output to `/workspace/reports/kics.sarif`.
- Ensure the task exits successfully even if issues are found, to allow Phase 3 aggregation (e.g., wrap with `|| true`).

## Agent Instructions

1. Add the `scan-kics` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
3. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
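A sketch with the official `checkmarx/kics` image; KICS exits non-zero when it finds issues, hence the `|| true`, and the exact output-naming flags should be checked against the installed KICS version:

```yaml
- name: scan-kics
  container:
    image: checkmarx/kics:latest
    command: [sh, -c]
    args:
      - >-
        mkdir -p /workspace/reports &&
        kics scan -p /workspace -o /workspace/reports
        --report-formats sarif --output-name kics || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```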
---

# Implementation Plan: Socket.dev Scanner

## Objective

Implement the Socket.dev supply chain security scanning step as a parallel task in the DAG.

## Requirements

- Define a task template named `scan-socketdev`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Expect the Socket.dev API key to be injected via Infisical as an environment variable (use the initContainer wait logic from Phase 1 Step 3).
- Run the Socket CLI against the dependency manifests in `/workspace`.
- Output findings in a standard format (JSON/SARIF).
- Save the output to `/workspace/reports/socketdev.json`.
- Ensure the task exits successfully (e.g., `|| true`) to allow Phase 3 aggregation.

## Agent Instructions

1. Add the `scan-socketdev` template to the `ClusterWorkflowTemplate`.
2. Configure the Infisical initContainer logic for this specific step to wait for the API key.
3. Wire it into the DAG alongside the other scanners.
4. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
---

# Implementation Plan: Syft & Grype Scanner

## Objective

Implement the SBOM generation (Syft) and vulnerability scanning (Grype) step as a parallel task in the DAG.

## Requirements

- Define a task template named `scan-syft-grype`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Step A: Run Syft against `/workspace` to generate an SBOM (SPDX/CycloneDX format) -> `/workspace/reports/sbom.json`.
- Step B: Run Grype against the generated SBOM (or the workspace directly) to find vulnerabilities.
- Output Grype findings in SARIF format.
- Save the Grype output to `/workspace/reports/grype.sarif`.
- Ensure the task exits successfully (`|| true`) to allow Phase 3 aggregation.

## Agent Instructions

1. Add the `scan-syft-grype` template to the `ClusterWorkflowTemplate`.
2. Wire it into the DAG alongside the other scanners.
3. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
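One way to sketch the two steps, assuming the stock `anchore/syft` and `anchore/grype` images (in practice the DAG could chain these as two templates, or a single image could carry both CLIs to keep the one `scan-syft-grype` task named above):

```yaml
- name: generate-sbom
  container:
    image: anchore/syft:latest
    # syft supports writing a named format to a file via "-o format=path"
    args: ["dir:/workspace", "-o", "cyclonedx-json=/workspace/reports/sbom.json"]
    volumeMounts:
      - name: workspace
        mountPath: /workspace
- name: scan-grype
  container:
    image: anchore/grype:latest
    command: [sh, -c]
    args:
      - >-
        grype sbom:/workspace/reports/sbom.json -o sarif
        > /workspace/reports/grype.sarif || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```

Scanning the SBOM (`sbom:` source) rather than the directory keeps the two reports consistent with each other.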
---

# Implementation Plan: Pulumi Crossguard

## Objective

Implement the Pulumi Crossguard policy enforcement step as a parallel task in the DAG.

## Requirements

- Define a task template named `scan-crossguard`.
- Depend on the `clone-repo` task.
- Mount the shared PVC at `/workspace`.
- Expect Pulumi credentials and cloud provider credentials (e.g., AWS/GCP) to be injected via Infisical as environment variables (using the initContainer logic).
- Run `pulumi preview --policy-pack <path>` inside `/workspace`.
- Capture the output and convert/save it into a structured JSON/SARIF format at `/workspace/reports/crossguard.json`.
- Ensure the task exits successfully (`|| true`) to allow Phase 3 aggregation.

## Agent Instructions

1. Add the `scan-crossguard` template to the `ClusterWorkflowTemplate`.
2. Configure the Infisical initContainer to wait for Pulumi and Cloud credentials.
3. Wire it into the DAG alongside the other scanners.
4. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
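A sketch of the task body; the policy-pack path `./policy` is a placeholder, and the credential is shown here as a plain `secretKeyRef` (the Infisical injection described above would populate the same `PULUMI_ACCESS_TOKEN` variable):

```yaml
- name: scan-crossguard
  container:
    image: pulumi/pulumi:latest
    workingDir: /workspace
    env:
      - name: PULUMI_ACCESS_TOKEN
        valueFrom:
          secretKeyRef:
            name: amp-security-pipeline-secrets   # example Secret name
            key: PULUMI_ACCESS_TOKEN
    command: [sh, -c]
    args:
      - >-
        mkdir -p /workspace/reports &&
        pulumi preview --policy-pack ./policy --json
        > /workspace/reports/crossguard.json || true
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```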
---

# Implementation Plan: Long-Term Storage Upload

## Objective

Implement an aggregation task that uploads all generated reports from the PVC to long-term storage (e.g., S3/MinIO) for audit trails and historical review.

## Requirements

- Define a task template named `upload-storage`.
- Depend on the successful completion of **all** parallel scanner tasks (Phase 2).
- Mount the shared PVC at `/workspace`.
- Expect S3/MinIO credentials to be injected as environment variables via Infisical (with initContainer wait logic).
- Use a CLI (like `aws s3 cp` or `mc`) to sync the `/workspace/reports/` directory to a designated bucket, keyed by repository name, date, and commit hash.

## Agent Instructions

1. Add the `upload-storage` template to the `ClusterWorkflowTemplate`.
2. Configure the DAG dependencies so it waits for all scanners.
3. Configure the Infisical initContainer to wait for the storage credentials.
4. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
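A sketch using the MinIO client; `S3_ENDPOINT`, `S3_ACCESS_KEY`, `S3_SECRET_KEY`, `REPO_NAME`, and `COMMIT_SHA` are assumed environment variable names, not ones defined elsewhere in this changeset:

```yaml
- name: upload-storage
  container:
    image: minio/mc:latest
    command: [sh, -c]
    args:
      - >-
        mc alias set store "$S3_ENDPOINT" "$S3_ACCESS_KEY" "$S3_SECRET_KEY" &&
        mc mirror /workspace/reports
        "store/reports/$REPO_NAME/$(date +%F)/$COMMIT_SHA/"
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```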
---

# Implementation Plan: DefectDojo Upload

## Objective

Implement a task that pushes all SARIF/JSON reports from the PVC to DefectDojo via its API.

## Requirements

- Define a task template named `upload-defectdojo`.
- Depend on the completion of all parallel scanner tasks (Phase 2).
- Mount the shared PVC at `/workspace`.
- Expect DefectDojo API keys and URL to be injected as environment variables via Infisical (with initContainer wait logic).
- Iterate over the `/workspace/reports/` directory.
- For each file, make an API request to DefectDojo to import the scan results (mapping the file type to the correct DefectDojo parser, e.g., SARIF -> Generic SARIF).

## Agent Instructions

1. Add the `upload-defectdojo` template to the `ClusterWorkflowTemplate`.
2. Write the API upload script (Python, curl, or a dedicated CLI) in the task template.
3. Configure the Infisical initContainer to wait for the DefectDojo credentials.
4. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
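A curl-based sketch against DefectDojo's `import-scan` endpoint; `DEFECTDOJO_URL`, `DEFECTDOJO_API_KEY`, and `DEFECTDOJO_ENGAGEMENT_ID` are assumed variable names, and a full version would branch on file type to pick the right `scan_type` parser:

```yaml
- name: upload-defectdojo
  container:
    image: curlimages/curl:latest
    command: [sh, -c]
    args:
      - |
        for report in /workspace/reports/*.sarif; do
          curl -sf -X POST "$DEFECTDOJO_URL/api/v2/import-scan/" \
            -H "Authorization: Token $DEFECTDOJO_API_KEY" \
            -F "scan_type=SARIF" \
            -F "engagement=$DEFECTDOJO_ENGAGEMENT_ID" \
            -F "file=@$report"
        done
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```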
---

# Implementation Plan: Policy Enforcement

## Objective

Implement the final task that parses the aggregated results and decides whether to pass or fail the Argo Workflow based on the `fail-on-cvss` input threshold.

## Requirements

- Define a task template named `enforce-policy`.
- Depend on the completion of the upload tasks (Phase 3 Steps 1 & 2).
- Mount the shared PVC at `/workspace`.
- Read the input parameter `fail-on-cvss` (e.g., `7.0`).
- Run a script (Python, jq, etc.) to parse all the reports in `/workspace/reports/`.
- If any vulnerability is found with a CVSS score >= the threshold, print an error summary and exit with a non-zero code (causing the Argo Workflow to fail).
- If no vulnerabilities exceed the threshold, print a success summary and exit with 0.

## Agent Instructions

1. Add the `enforce-policy` template to the `ClusterWorkflowTemplate`.
2. Write the parsing logic inside the task (e.g., extracting CVSS scores from SARIF and JSON formats).
3. Ensure this step acts as the final gatekeeper for the pipeline.
4. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
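A sketch of the gatekeeper as an Argo `script` template. It reads the GitHub-style `security-severity` rule property that Grype emits in SARIF; other tools store scores differently, so a complete version needs per-tool extraction logic:

```yaml
- name: enforce-policy
  script:
    image: python:3.12-alpine
    command: [python]
    source: |
      import glob, json, sys

      THRESHOLD = float("{{workflow.parameters.fail-on-cvss}}")
      worst = 0.0
      for path in glob.glob("/workspace/reports/*.sarif"):
          with open(path) as f:
              sarif = json.load(f)
          for run in sarif.get("runs", []):
              for rule in run.get("tool", {}).get("driver", {}).get("rules", []):
                  # CVSS-like score; location varies by scanner
                  score = rule.get("properties", {}).get("security-severity")
                  if score is not None:
                      worst = max(worst, float(score))
      print(f"highest severity score: {worst} (threshold {THRESHOLD})")
      sys.exit(1 if worst >= THRESHOLD else 0)
    volumeMounts:
      - name: workspace
        mountPath: /workspace
```

A non-zero exit here fails the whole workflow, which is the desired gate behavior.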
---

# Implementation Plan: Renovate Bot Preset

## Objective

Create a centralized `renovate.json` (or `default.json`) preset in this repository that other projects can easily inherit to get standardized auto-merge and grouping behavior.

## Requirements

- Create a file at `renovate-preset/default.json` (or similar path).
- Configure auto-merge for patch and minor versions of dependencies.
- Enable grouping for monorepo packages (e.g., all `@babel/*` updates grouped into one PR).
- Configure the schedule (e.g., run on weekends or early mornings).
- Configure the severity levels for when notifications/PRs should block.
- Document how other repositories can `extend` this preset in their own `renovate.json` (e.g., `"extends": ["github>my-org/my-repo//renovate-preset"]`).

## Agent Instructions

1. Create the base Renovate configuration file.
2. Add a `README.md` to the `renovate-preset` directory explaining how to use it.
3. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your JSON configurations or manifests into separate, smaller files to avoid exhausting the context window.
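A minimal preset sketch covering the auto-merge and schedule requirements; note that `config:recommended` already ships grouping rules for well-known monorepos (including Babel), so explicit group rules are only needed for packages Renovate does not group by default:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ],
  "schedule": ["every weekend"]
}
```

Consumers would then inherit it with `"extends": ["github>my-org/my-repo//renovate-preset"]` as described above.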
---

# Implementation Plan: Renovate Bot CronJob / ArgoCD App

## Objective

Create the Kubernetes manifests to deploy Renovate Bot as a cluster-level service (CronJob) via ArgoCD, configured to scan repositories and open PRs (which will trigger the Phase 1-3 pipeline).

## Requirements

- Create Kubernetes manifests for a CronJob that runs the Renovate Bot Docker image.
- Expect Git provider credentials (GitHub/GitLab token) to be injected as environment variables via Infisical (using standard operator annotations).
- Configure the CronJob to run periodically (e.g., hourly).
- Package this as an ArgoCD Application or a Helm chart located in `helm/renovate-bot/`.
- The configuration should instruct Renovate to scan the designated repositories and respect the presets defined in Phase 4 Step 1.

## Agent Instructions

1. Create the `helm/renovate-bot` directory.
2. Add the `CronJob`, `ServiceAccount`, and necessary RBAC manifests.
3. Configure the Infisical annotations for secrets injection.
4. Provide an `Application` manifest for ArgoCD to deploy it easily.
5. **CRITICAL: File Splitting:** Do NOT put everything into one giant file. Split your YAML manifests or configurations into separate, smaller files (e.g., separate Helm template files, ConfigMaps, or helper scripts) to avoid exhausting the context window.
---

* **Tool:** `eslint` with `eslint-plugin-security` and `@typescript-eslint`.
* **Reasoning:** Linters are "dumb" but instantaneous. They will catch AI agents generating immediately dangerous syntax (like `eval()` or unsafe regex) before a commit is even made.

**outdated, using pulumi crossguard**

### Layer 2: Infrastructure as Code (IaC) Scanning

* **Tool:** Checkov (Open Source)
* **Reasoning:** Lightweight CLI tool to ensure the AI agents do not accidentally expose internal homelab ports to the internet or misconfigure container permissions.

[…]

| **Snyk Code** | Great UX, but lacks the ability to write custom rules. If the AI agent develops a specific bad habit unique to this codebase, Snyk cannot be easily tuned to block it. |
| **Checkmarx / Veracode** | Built for massive legacy enterprise compliance. Far too expensive, slow, and noisy for a modern, agile homelab setup. |

**outdated, using Harvester default registry**

## 5. Future Considerations / Phase 2

* **Build Caching:** If actual container build steps (`docker build`, `npm install`) become the bottleneck in Argo Workflows, evaluate adding open-source caching layers like **Kaniko** or **BuildKit** inside Argo pods before purchasing paid caching solutions.
* **Custom Semgrep Rules:** If the AI agent repeatedly makes domain-specific logic errors (e.g., misusing a specific custom Monad), write lightweight custom Semgrep YAML rules to permanently block those specific anti-patterns.
---

```yaml
apiVersion: v2
name: renovate-bot
description: Renovate Bot deployment for agentguard-ci
version: 0.1.0
appVersion: "37.0.0"
```
---

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: renovate-bot
spec:
  project: default
  source:
    repoURL: https://git.example.com/agentguard-ci.git
    targetRevision: main
    path: helm/renovate-bot
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
---

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: renovate-bot
rules:
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
```
---

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: renovate-bot
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: renovate-bot
subjects:
  - kind: ServiceAccount
    name: renovate-bot
    namespace: default
```
@@ -0,0 +1,12 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: renovate-bot-config
data:
  renovate.json: |
    {
      "extends": ["github>my-org/my-repo//renovate-preset"],
      "onboarding": false,
      "platform": "github",
      "repositories": {{ toJson .Values.repositories }}
    }
@@ -0,0 +1,40 @@
apiVersion: batch/v1
kind: CronJob
metadata:
  name: renovate-bot
spec:
  schedule: {{ .Values.schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: renovate-bot
          restartPolicy: Never
          containers:
            - name: renovate
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: RENOVATE_CONFIG_FILE
                  value: /etc/renovate/renovate.json
                - name: RENOVATE_REPOSITORIES
                  value: {{ join "," .Values.repositories | quote }}
                - name: GITHUB_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: renovate-bot
                      key: github-token
                - name: GITLAB_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: renovate-bot
                      key: gitlab-token
              args:
                - renovate
              volumeMounts:
                - name: config
                  mountPath: /etc/renovate
          volumes:
            - name: config
              configMap:
                name: renovate-bot-config
@@ -0,0 +1,6 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: renovate-bot
  annotations:
    secrets.infisical.com/auto-reload: "true"
@@ -0,0 +1,7 @@
image:
  repository: renovate/renovate
  tag: 37.0.0
  pullPolicy: IfNotPresent

schedule: "0 * * * *"
repositories: []
@@ -0,0 +1,186 @@
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  serviceAccountName: default
  entrypoint: security-pipeline
  volumeClaimTemplates:
    - metadata:
        name: workspace
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  arguments:
    parameters:
      - name: working-dir
        value: .
      - name: fail-on-cvss
        value: "7.0"
      - name: repo-url
      - name: git-revision
        value: main
  templates:
    - name: security-pipeline
      dag:
        tasks:
          - name: clone
            template: clone-repo
            arguments:
              parameters:
                - name: repo-url
                  value: "{{workflow.parameters.repo-url}}"
                - name: git-revision
                  value: "{{workflow.parameters.git-revision}}"
          - name: scanners
            dependencies:
              - clone
            template: parallel-scanners
            arguments:
              parameters:
                - name: working-dir
                  value: "{{workflow.parameters.working-dir}}"
                - name: fail-on-cvss
                  value: "{{workflow.parameters.fail-on-cvss}}"
          - name: upload-storage
            dependencies:
              - scan-trufflehog
              - scan-semgrep
              - scan-kics
              - scan-socketdev
              - scan-syft-grype
              - scan-crossguard
            template: upload-storage
          - name: upload-defectdojo
            dependencies:
              - scan-trufflehog
              - scan-semgrep
              - scan-kics
              - scan-socketdev
              - scan-syft-grype
              - scan-crossguard
            template: upload-defectdojo
          - name: enforce-policy
            dependencies:
              - upload-storage
              - upload-defectdojo
            template: enforce-policy
            arguments:
              parameters:
                - name: fail-on-cvss
                  value: "{{workflow.parameters.fail-on-cvss}}"
          - name: sinks-and-enforcement
            dependencies:
              - scanners
            template: sinks-and-enforcement
          - name: scan-trufflehog
            dependencies:
              - clone
            template: scan-trufflehog
            arguments:
              parameters:
                - name: working-dir
                  value: "{{workflow.parameters.working-dir}}"
          - name: scan-semgrep
            dependencies:
              - clone
            template: scan-semgrep
            arguments:
              parameters:
                - name: working-dir
                  value: "{{workflow.parameters.working-dir}}"
          - name: scan-kics
            dependencies:
              - clone
            template: scan-kics
            arguments:
              parameters:
                - name: working-dir
                  value: "{{workflow.parameters.working-dir}}"
          - name: scan-socketdev
            dependencies:
              - clone
            template: scan-socketdev
            arguments:
              parameters:
                - name: working-dir
                  value: "{{workflow.parameters.working-dir}}"
          - name: scan-syft-grype
            dependencies:
              - clone
            template: scan-syft-grype
            arguments:
              parameters:
                - name: working-dir
                  value: "{{workflow.parameters.working-dir}}"
          - name: scan-crossguard
            dependencies:
              - clone
            template: scan-crossguard
            arguments:
              parameters:
                - name: working-dir
                  value: "{{workflow.parameters.working-dir}}"
    - name: clone-repo
      inputs:
        parameters:
          - name: repo-url
          - name: git-revision
      container:
        image: alpine/git:2.45.2
        command:
          - sh
          - -c
        args:
          - git clone --branch "{{inputs.parameters.git-revision}}" --single-branch "{{inputs.parameters.repo-url}}" /workspace
        volumeMounts:
          - name: workspace
            mountPath: /workspace
    - name: parallel-scanners
      inputs:
        parameters:
          - name: working-dir
          - name: fail-on-cvss
      dag:
        tasks:
          - name: trufflehog
            template: scan-trufflehog
          - name: semgrep
            template: scan-semgrep
          - name: kics
            template: scan-kics
          - name: socketdev
            template: scan-socketdev
          - name: syft-grype
            template: scan-syft-grype
          - name: crossguard
            template: scan-crossguard
    - name: sinks-and-enforcement
      container:
        image: alpine:3.20
        command:
          - sh
          - -c
        args:
          - echo "stub: sinks and enforcement"
    - name: scan-trufflehog
      template: scan-trufflehog
    - name: scan-semgrep
      template: scan-semgrep
    - name: scan-kics
      template: scan-kics
    - name: scan-socketdev
      template: scan-socketdev
    - name: scan-syft-grype
      template: scan-syft-grype
    - name: scan-crossguard
      template: scan-crossguard
    - name: upload-storage
      template: upload-storage
    - name: upload-defectdojo
      template: upload-defectdojo
    - name: enforce-policy
      template: enforce-policy
@@ -0,0 +1,88 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: enforce-policy
      inputs:
        parameters:
          - name: fail-on-cvss
      container:
        image: python:3.12-alpine
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            python - <<'PY'
            import json
            import os
            import pathlib
            import sys

            threshold = float(os.environ["FAIL_ON_CVSS"])
            reports_dir = pathlib.Path("/workspace/reports")
            findings = []

            for report in sorted(reports_dir.iterdir()):
                if not report.is_file():
                    continue
                text = report.read_text(errors="ignore")
                if report.suffix == ".sarif":
                    try:
                        data = json.loads(text)
                    except json.JSONDecodeError:
                        continue
                    for run in data.get("runs", []):
                        for result in run.get("results", []):
                            sev = result.get("properties", {}).get("security-severity")
                            if sev is None:
                                continue
                            try:
                                score = float(sev)
                            except (TypeError, ValueError):
                                continue
                            if score >= threshold:
                                findings.append((report.name, score))
                elif report.suffix == ".json":
                    try:
                        data = json.loads(text)
                    except json.JSONDecodeError:
                        continue
                    if isinstance(data, dict):
                        for item in data.get("findings", data.get("vulnerabilities", [])):
                            score = item.get("cvss") or item.get("score")
                            if score is None:
                                continue
                            try:
                                score = float(score)
                            except (TypeError, ValueError):
                                continue
                            if score >= threshold:
                                findings.append((report.name, score))

            if findings:
                for name, score in findings:
                    print(f"{name}: CVSS {score} >= {threshold}", file=sys.stderr)
                raise SystemExit(1)

            print(f"No findings met or exceeded CVSS {threshold}")
            PY
        env:
          - name: FAIL_ON_CVSS
            value: "{{inputs.parameters.fail-on-cvss}}"
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
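The SARIF branch of the gate above can be exercised on its own outside the cluster; a minimal sketch (the sample report is invented for illustration, not a real scanner's output):

```python
import json

def sarif_scores_at_or_above(sarif_text: str, threshold: float):
    """Collect result-level 'security-severity' scores >= threshold,
    mirroring the filtering done by the enforce-policy step."""
    data = json.loads(sarif_text)
    hits = []
    for run in data.get("runs", []):
        for result in run.get("results", []):
            sev = result.get("properties", {}).get("security-severity")
            if sev is None:
                continue
            try:
                score = float(sev)
            except (TypeError, ValueError):
                continue
            if score >= threshold:
                hits.append(score)
    return hits

# Invented sample report: two findings, one above a 7.0 gate.
sample = json.dumps({
    "runs": [{"results": [
        {"properties": {"security-severity": "9.8"}},
        {"properties": {"security-severity": "4.3"}},
    ]}]
})
print(sarif_scores_at_or_above(sample, 7.0))  # [9.8]
```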
@@ -0,0 +1,37 @@
{{- if .Values.pipeline.enabled }}
apiVersion: infisical.com/v1alpha1
kind: InfisicalSecret
metadata:
  name: amp-security-pipeline-secrets
spec:
  secretName: amp-security-pipeline-secrets
  target:
    creationPolicy: Owner
  workspaceSlug: {{ .Values.infisical.workspaceSlug | quote }}
  projectSlug: {{ .Values.infisical.projectSlug | quote }}
  secrets:
    - secretKey: SOCKET_DEV_API_KEY
      remoteRef:
        key: SOCKET_DEV_API_KEY
    - secretKey: PULUMI_ACCESS_TOKEN
      remoteRef:
        key: PULUMI_ACCESS_TOKEN
    - secretKey: AWS_ACCESS_KEY_ID
      remoteRef:
        key: AWS_ACCESS_KEY_ID
    - secretKey: AWS_SECRET_ACCESS_KEY
      remoteRef:
        key: AWS_SECRET_ACCESS_KEY
    - secretKey: MINIO_ROOT_USER
      remoteRef:
        key: MINIO_ROOT_USER
    - secretKey: MINIO_ROOT_PASSWORD
      remoteRef:
        key: MINIO_ROOT_PASSWORD
    - secretKey: DEFECTDOJO_URL
      remoteRef:
        key: DEFECTDOJO_URL
    - secretKey: DEFECTDOJO_API_TOKEN
      remoteRef:
        key: DEFECTDOJO_API_TOKEN
{{- end }}
@@ -0,0 +1,39 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: scan-crossguard
      container:
        image: pulumi/pulumi:3.154.0
        env:
          - name: PULUMI_ACCESS_TOKEN
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: PULUMI_ACCESS_TOKEN
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: AWS_ACCESS_KEY_ID
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: AWS_SECRET_ACCESS_KEY
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            mkdir -p /workspace/reports
            cd /workspace
            pulumi preview --policy-pack ./policy-pack > /workspace/reports/crossguard.json 2>&1 || true
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,28 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: scan-kics
      container:
        image: checkmarx/kics:1.7.14
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            mkdir -p /workspace/reports
            kics scan -p /workspace -o /workspace/reports --report-formats sarif,json --output-name kics || true
            if [ -f /workspace/reports/kics.sarif ]; then
              exit 0
            fi
            if [ -f /workspace/reports/kics.json ]; then
              cp /workspace/reports/kics.json /workspace/reports/kics.sarif
            fi
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,22 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: scan-semgrep
      container:
        image: returntocorp/semgrep:1.85.0
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            mkdir -p /workspace/reports
            semgrep scan --config auto --sarif --output /workspace/reports/semgrep.sarif /workspace || true
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,28 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: scan-socketdev
      container:
        image: socketdev/socketcli:latest
        env:
          - name: SOCKET_DEV_API_KEY
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: SOCKET_DEV_API_KEY
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            mkdir -p /workspace/reports
            socketdev scan /workspace --format json --output /workspace/reports/socketdev.json || true
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,23 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: scan-syft-grype
      container:
        # NOTE: the anchore/syft image only ships syft; the grype call below
        # needs an image that bundles both tools, or a second step.
        image: anchore/syft:latest
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            mkdir -p /workspace/reports
            syft scan dir:/workspace -o cyclonedx-json=/workspace/reports/sbom.json || true
            grype sbom:/workspace/reports/sbom.json -o sarif=/workspace/reports/grype.sarif || true
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,19 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: scan-trufflehog
      container:
        image: alpine:3.20
        command:
          - sh
          - -c
        args:
          - mkdir -p /workspace/reports && echo "stub: trufflehog" > /workspace/reports/trufflehog.json
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,66 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: upload-defectdojo
      container:
        image: python:3.12-alpine
        env:
          - name: DEFECTDOJO_URL
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: DEFECTDOJO_URL
          - name: DEFECTDOJO_API_TOKEN
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: DEFECTDOJO_API_TOKEN
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            python - <<'PY'
            import json
            import os
            import pathlib
            import urllib.request

            base_url = os.environ["DEFECTDOJO_URL"].rstrip("/")
            api_token = os.environ["DEFECTDOJO_API_TOKEN"]
            product_name = os.environ.get("DEFECTDOJO_PRODUCT_NAME", "agentguard-ci")
            scan_map = {
                ".sarif": "SARIF",
                ".json": "Generic Findings Import",
            }
            reports_dir = pathlib.Path("/workspace/reports")
            for report in sorted(reports_dir.iterdir()):
                if not report.is_file():
                    continue
                scan_type = scan_map.get(report.suffix)
                if not scan_type:
                    continue
                # NOTE: DefectDojo's import-scan endpoint expects a
                # multipart/form-data request with the report file attached;
                # this JSON POST only sends metadata and needs extending.
                req = urllib.request.Request(
                    f"{base_url}/api/v2/import-scan/",
                    data=json.dumps({
                        "scan_type": scan_type,
                        "product_name": product_name,
                        "file_name": report.name,
                    }).encode(),
                    headers={
                        "Authorization": f"Token {api_token}",
                        "Content-Type": "application/json",
                    },
                    method="POST",
                )
                urllib.request.urlopen(req)
            PY
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,45 @@
{{- if .Values.pipeline.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: amp-security-pipeline-v1.0.0
spec:
  templates:
    - name: upload-storage
      container:
        image: amazon/aws-cli:2.15.40
        env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: AWS_ACCESS_KEY_ID
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: AWS_SECRET_ACCESS_KEY
          - name: MINIO_ROOT_USER
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: MINIO_ROOT_USER
          - name: MINIO_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: amp-security-pipeline-secrets
                key: MINIO_ROOT_PASSWORD
        command:
          - sh
          - -c
        args:
          - |
            set -eu
            repo_name="${REPO_NAME:-repo}"
            commit_sha="${GIT_COMMIT_SHA:-unknown}"
            report_date="$(date -u +%F)"
            # NOTE: when the target is MinIO rather than AWS S3, the CLI also
            # needs --endpoint-url pointing at the MinIO service.
            aws s3 sync /workspace/reports "s3://${REPORTS_BUCKET:-security-reports}/${repo_name}/${report_date}/${commit_sha}/"
        volumeMounts:
          - name: workspace
            mountPath: /workspace
{{- end }}
@@ -0,0 +1,22 @@
# Renovate Preset

This directory contains a shared Renovate preset that other repositories can extend.

## Usage

In another repository's `renovate.json`:

```json
{
  "extends": ["github>my-org/my-repo//renovate-preset"]
}
```

Adjust `my-org/my-repo` to point at this repository.

## Behavior

- Auto-merges patch and minor updates.
- Groups common monorepo package families into single PRs.
- Schedules Renovate runs on weekends before 6am UTC.
- Keeps security alerts from auto-merging.
@@ -0,0 +1,48 @@
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "timezone": "UTC",
  "schedule": ["before 6am on saturday", "before 6am on sunday"],
  "automerge": true,
  "automergeType": "pr",
  "automergeStrategy": "squash",
  "automergeSchedule": ["before 6am on saturday", "before 6am on sunday"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "minor"],
      "automerge": true
    },
    {
      "matchPackagePatterns": ["^@babel/"],
      "groupName": "babel packages"
    },
    {
      "matchPackagePatterns": ["^eslint"],
      "groupName": "eslint packages"
    },
    {
      "matchPackagePatterns": ["^jest"],
      "groupName": "jest packages"
    },
    {
      "matchPackagePatterns": ["^@types/"],
      "groupName": "types packages"
    },
    {
      "matchPackagePatterns": ["^react", "^react-dom"],
      "groupName": "react packages"
    },
    {
      "matchConfidence": ["high", "very-high"],
      "dependencyDashboardApproval": false
    },
    {
      "matchConfidence": ["low", "neutral"],
      "dependencyDashboardApproval": true
    }
  ],
  "vulnerabilityAlerts": {
    "labels": ["security"],
    "automerge": false
  }
}