CDP Pipeline Failures & Best Practices for DefectDojo Integration
When working with CDP pipelines, you’ll often encounter jobs that are expected to “fail” when they detect security issues. Understanding why certain jobs can be allowed to fail—and how to handle scan results in DefectDojo—helps you keep your pipeline efficient, your reports clean, and your compliance posture strong. This article walks through the rationale behind permissive‑failure settings for specific CDP jobs and offers guidance on what scan data should be sent to DefectDojo.
Why Some Jobs May Be Allowed to Fail
1. Jobs Designed to Surface Vulnerabilities
- `sast-with-vm` – Runs static application security testing (SAST) inside a virtual machine. A failure indicates that the scanner discovered one or more code‑level vulnerabilities.
- `sca-frontend` – Executes software composition analysis (SCA) on front‑end dependencies. A failing status means vulnerable libraries were found.
These jobs are intentional gatekeepers. Treating a failure as a hard pipeline break would stop the build even when the only issue is a newly discovered vulnerability that you may want to triage first.
2. Jobs That Should Remain Strict
- `sslscan` – Checks TLS configurations. A failure usually points to misconfigurations that could expose data in transit.
- `ansible-hardening` & `inspec` – Enforce hardening standards and compliance checks. Failures here often indicate non‑compliant infrastructure that must be remediated before proceeding.
Bottom line: Only allow failure on jobs whose primary purpose is to report findings, not to enforce mandatory compliance.
Configuring “Allow Failure” for Specific Jobs
1. Open the `.gitlab-ci.yml` (or equivalent) file in your repository.
2. Locate the job definitions for `sast-with-vm` and `sca-frontend`.
3. Add the `allow_failure: true` flag:
```yaml
sast-with-vm:
  stage: test
  script:
    - ./run-sast.sh
  allow_failure: true   # <-- permits the job to fail without breaking the pipeline

sca-frontend:
  stage: test
  script:
    - ./run-sca.sh
  allow_failure: true   # <-- same rationale as above
```
4. Commit and push the changes. The pipeline will now continue even if these jobs report vulnerabilities, while still publishing the findings for review.
Note: Keep `allow_failure` off for jobs like `sslscan`, `ansible-hardening`, and `inspec` to ensure that critical security misconfigurations halt the pipeline.
DefectDojo Integration: What to Send and What to Exclude
DefectDojo is a powerful vulnerability management platform, but it expects certain formats and scan types. Sending only relevant results avoids clutter and improves triage speed.
What to Send
| Scan Type | Reason for Inclusion |
|---|---|
| SAST results (`sast-with-vm`) | Provides line‑level code defects that developers can fix directly. |
| SCA results (`sca-frontend`) | Highlights vulnerable third‑party libraries; essential for dependency management. |
| Custom security scans (e.g., OWASP ZAP, Burp) | Adds dynamic testing data that complements static findings. |
What to Exclude
| Scan Type | Reason for Exclusion |
|---|---|
| `ansible-hardening` | Generates configuration‑hardening reports that DefectDojo does not natively parse. |
| `inspec` | Produces compliance‑check output (e.g., CIS benchmarks) that is better stored in a compliance dashboard than in a vulnerability tracker. |
| Non‑security artifacts (e.g., build logs, test coverage) | Irrelevant to vulnerability management and increase storage costs. |
How to Push Findings to DefectDojo
1. Export the scan results in a supported format (e.g., SARIF, JUnit XML, JSON).
2. Use the DefectDojo API or the built‑in CI integration:
```bash
curl -X POST "https://defectdojo.example.com/api/v2/import-scan/" \
  -H "Authorization: Token <YOUR_API_TOKEN>" \
  -F "scan_type=SAST" \
  -F "file=@sast-results.sarif" \
  -F "engagement=123" \
  -F "product_name=MyApp"
```
3. Verify the import in the DefectDojo UI and assign findings to the appropriate remediation sprint.
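The import endpoint replies with JSON that, in recent DefectDojo versions, includes the id of the newly created test. A minimal shell check can make the CI job fail loudly if the import did not register; the sample response below is illustrative, not a real server reply, and in CI you would capture curl's output instead:

```shell
# Illustrative response; in a real job: response=$(curl -sS --fail -X POST ...)
response='{"test": 456, "scan_type": "SARIF"}'

# Extract the numeric "test" id from the JSON with plain sed
test_id=$(printf '%s' "$response" | sed -n 's/.*"test": *\([0-9][0-9]*\).*/\1/p')

if [ -n "$test_id" ]; then
    echo "Import created test $test_id"
else
    echo "Import failed: no test id in response" >&2
    exit 1
fi
```

Verify the exact response shape against your DefectDojo instance; if `jq` is available on the runner, it is a more robust JSON parser than `sed`.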
Practical Example: End‑to‑End Pipeline Setup
Below is a simplified snippet that ties everything together:
```yaml
stages:
  - test
  - report
  - upload

sast-with-vm:
  stage: test
  script: ./run-sast.sh
  allow_failure: true
  artifacts:
    paths: [sast-results.sarif]

sca-frontend:
  stage: test
  script: ./run-sca.sh
  allow_failure: true
  artifacts:
    paths: [sca-results.json]

sslscan:
  stage: test
  script: ./run-sslscan.sh
  # No allow_failure – must pass

defectdojo-upload:
  stage: upload
  script:
    - ./upload-to-defectdojo.sh sast-results.sarif SAST
    - ./upload-to-defectdojo.sh sca-results.json SCA
  dependencies: [sast-with-vm, sca-frontend]
  only:
    - main
```
- `allow_failure: true` ensures the pipeline proceeds even when vulnerabilities are found.
- The final `defectdojo-upload` job sends only the relevant scans to DefectDojo.
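The `upload-to-defectdojo.sh` helper referenced in the pipeline is not shown in full; a minimal sketch might look like the following. The `DOJO_URL`, `DOJO_TOKEN`, and `ENGAGEMENT_ID` variables are assumed to be provided as CI/CD variables, and the scan‑type mapping is illustrative; check the `scan_type` labels your DefectDojo version accepts.

```shell
#!/bin/sh
# Sketch of upload-to-defectdojo.sh (assumed helper; adapt to your setup)
set -eu

# Translate the short names used in the pipeline into DefectDojo scan_type
# labels (illustrative mapping, not exhaustive)
dojo_scan_type() {
    case "$1" in
        SAST) echo "SARIF" ;;
        SCA)  echo "Dependency Check Scan" ;;
        *)    echo "$1" ;;
    esac
}

# Usage: upload_to_defectdojo <results-file> <short-scan-type>
upload_to_defectdojo() {
    results_file="$1"
    scan_type="$(dojo_scan_type "$2")"

    # --fail makes curl exit non-zero on HTTP errors, so the CI job fails loudly
    curl --fail -sS -X POST "${DOJO_URL:?}/api/v2/import-scan/" \
        -H "Authorization: Token ${DOJO_TOKEN:?}" \
        -F "scan_type=${scan_type}" \
        -F "file=@${results_file}" \
        -F "engagement=${ENGAGEMENT_ID:?}"
}
```

Calling `upload_to_defectdojo sast-results.sarif SAST` then mirrors the script lines in the `defectdojo-upload` job above.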
Tips & Common Questions
✅ Tips for a Smooth Integration
- Standardize output formats across all security tools (prefer SARIF or JSON).
- Tag each upload with the pipeline ID or commit SHA to maintain traceability.
- Run a dry‑run of the DefectDojo import script locally before committing to CI.
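For the traceability tip, GitLab's predefined `CI_COMMIT_SHA` and `CI_PIPELINE_ID` variables can be passed along with the import. The `commit_hash`, `build_id`, and `tags` fields are accepted by the `import-scan` endpoint in recent DefectDojo versions, but confirm against your instance:

```shell
# In GitLab CI these are predefined; fall back to sample values for local runs
CI_COMMIT_SHA="${CI_COMMIT_SHA:-abc123}"
CI_PIPELINE_ID="${CI_PIPELINE_ID:-42}"

# Build extra curl -F flags that tie an import back to the pipeline run
traceability_flags() {
    printf -- '-F commit_hash=%s -F build_id=%s -F tags=pipeline-%s' \
        "$CI_COMMIT_SHA" "$CI_PIPELINE_ID" "$CI_PIPELINE_ID"
}
```

Append `$(traceability_flags)` unquoted to the curl command so the flags split into separate arguments; the values contain no spaces, so word splitting is safe here.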
❓ Common Questions
| Question | Answer |
|---|---|
| Can I allow failure for `sslscan`? | Not recommended. TLS misconfigurations should block the pipeline until fixed. |
| What if DefectDojo rejects a scan? | Check the API response; most rejections are due to unsupported file types or missing required fields. |
| Should I send duplicate findings from multiple scans? | No. Consolidate duplicates in DefectDojo to avoid “noise” and ensure accurate metrics. |
| How do I handle false positives? | Mark them as “false positive” in DefectDojo; this status is respected in future imports. |
Bottom Line
Allowing failure for `sast-with-vm` and `sca-frontend` is intentional—these jobs are meant to surface vulnerabilities without halting the build. Conversely, keep strict enforcement on compliance‑oriented jobs. When integrating with DefectDojo, send only the scans it can parse (SAST, SCA, dynamic tests) and omit hardening or compliance outputs. Following these practices will keep your CDP pipelines lean, your security reporting accurate, and your remediation workflow efficient.