
Advanced Jenkins Tagging, Enterprise‑Grade Pipelines, and Failure Handling in GitHub Actions

Last updated on Jan 07, 2026

Learn how to pass Git tags into Jenkins pipelines, design a production‑ready DevSecOps pipeline in Jenkins, and gracefully handle failures in GitHub Actions.


Introduction

In modern CI/CD workflows, metadata such as Git tags often drives release decisions, while security testing (SCA, SAST, DAST) must be baked into the pipeline to meet compliance requirements. At the same time, teams need the ability to continue a workflow even when a step fails—especially in security scanning where you want reports even on failure. This article walks you through:

  1. Ensuring tag information is available inside a Jenkins pipeline.
  2. Building an “enterprise‑grade” Jenkins pipeline that stitches together SCA, SAST, and DAST stages.
  3. Using continue-on-error and conditional expressions to allow failures in GitHub Actions.

All examples are ready to copy‑paste into your own projects.


1. Passing Git Tag Information to a Jenkins Pipeline

Why It Matters

Tags mark release points, hot‑fixes, or any versioned artifact. When a pipeline runs on a tagged commit you often need that tag value to:

  • Publish a Docker image with the correct version tag.
  • Trigger downstream jobs that depend on the release identifier.
  • Store the tag in a vulnerability management platform (e.g., DefectDojo).

Prerequisites

Requirement | How to Set Up
Multibranch Pipeline in Jenkins | Create a Multibranch Pipeline job and point it at your GitLab repository. Jenkins will automatically discover branches and tags.
GitLab tags | In GitLab, create a tag on the commit you want to build: git tag -a v1.2.3 -m "Release 1.2.3", then git push origin v1.2.3.
Jenkinsfile that reads the tag | Use the built‑in env.TAG_NAME (set automatically for tag builds in a Multibranch Pipeline; some older setups expose env.GIT_TAG instead).

Step‑by‑Step Guide

  1. Enable tag detection

    multibranchPipelineJob('my-project') {
        branchSources {
            git {
                id('gitlab')
                remote('https://gitlab.com/your-repo.git')
                credentialsId('gitlab-creds')
                includes('*')   // match every branch; tags need the behavior below
            }
        }
        // Tag discovery also requires the "Discover tags" behavior
        // (Branch Sources -> Behaviors) on the branch source.
    }
    
  2. Read the tag inside Jenkinsfile

    pipeline {
        agent any
        environment {
            // When the build is triggered by a tag, TAG_NAME is populated.
            RELEASE_TAG = "${env.TAG_NAME ?: 'no-tag'}"
        }
        stages {
            stage('Show Tag') {
                steps {
                    echo "Running for tag: ${env.RELEASE_TAG}"
                }
            }
            // Example: use the tag to tag a Docker image
            stage('Build & Push Docker') {
                when { expression { return env.RELEASE_TAG != 'no-tag' } }
                steps {
                    sh """
                    docker build -t myapp:${RELEASE_TAG} .
                    docker push myapp:${RELEASE_TAG}
                    """
                }
            }
        }
    }
    
  3. Optional: Send the tag to DefectDojo

    stage('Report to DefectDojo') {
        steps {
            sh """
            curl -X POST https://defectdojo/api/v2/findings/ \
                 -H "Authorization: Token ${DOJO_TOKEN}" \
                 -d "title=Release ${RELEASE_TAG}" \
                 -d "tags=${RELEASE_TAG}"
            """
        }
    }
    

Helpful References

  • Jenkins blog: [Pipelines with Git tags] (May 2018) – https://www.jenkins.io/blog/2018/05/16/pipelines-with-git-tags/
  • Video walkthrough (starting at 1:06): https://youtu.be/HgiI-8VrxQE?t=66

If you need a visual guide, request the “CD Jenkins Part 2” support video from the staff.
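
If your controller does not populate TAG_NAME (or the job is not a Multibranch Pipeline), the tag can also be recovered from the checkout itself. The following is a minimal shell sketch, assuming the workspace is a git checkout with tags fetched and the build points at the exact tagged commit:

```shell
#!/bin/sh
# Fallback: derive the release tag when the TAG_NAME environment variable
# is not set. Prints the tag, or "no-tag" if HEAD is not tagged.
resolve_tag() {
    if [ -n "$TAG_NAME" ]; then
        echo "$TAG_NAME"
    else
        # --exact-match only succeeds if HEAD itself carries a tag
        git describe --tags --exact-match 2>/dev/null || echo "no-tag"
    fi
}
resolve_tag
```

A Jenkinsfile can call this via sh(returnStdout: true) and feed the result into RELEASE_TAG.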


2. Designing an Enterprise‑Grade DevSecOps Pipeline in Jenkins

What “Enterprise‑Grade” Means

  • End‑to‑end security coverage: combines Software Composition Analysis (SCA), Static Application Security Testing (SAST), and Dynamic Application Security Testing (DAST).
  • Reusable, modular stages that can be dropped into any project.
  • Integration with a central reporting hub (e.g., DefectDojo) for compliance dashboards.
  • Scalable – runs in parallel where possible, uses Docker containers for isolation.

Sample Jenkinsfile

pipeline {
    agent any
    options {
        timeout(time: 60, unit: 'MINUTES')
        timestamps()
    }
    environment {
        DOJO_URL   = 'https://defectdojo.example.com'
        DOJO_TOKEN = credentials('defectdojo-token')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        // ---------- SCA ----------
        stage('Software Composition Analysis') {
            parallel {
                stage('Dependency‑Check') {
                    agent { docker 'owasp/dependency-check' }
                    steps {
                        sh 'mkdir -p reports/sca && dependency-check.sh --project "$JOB_NAME" --scan . -f JSON -o reports/sca'
                        archiveArtifacts artifacts: 'reports/sca/**', fingerprint: true
                    }
                }
                stage('Syft SBOM') {
                    agent { docker 'anchore/syft' }
                    steps {
                        sh 'mkdir -p reports/sca && syft . -o json > reports/sca/syft.json'
                        archiveArtifacts artifacts: 'reports/sca/syft.json', fingerprint: true
                    }
                }
            }
        }

        // ---------- SAST ----------
        stage('Static Application Security Testing') {
            parallel {
                stage('Bandit (Python)') {
                    agent { docker 'hysnsec/bandit' }
                    steps {
                        sh 'mkdir -p reports/sast && bandit -r . -f json -o reports/sast/bandit.json || true'
                        archiveArtifacts artifacts: 'reports/sast/bandit.json', fingerprint: true
                    }
                }
                stage('SpotBugs (Java)') {
                    agent { docker 'spotbugs/spotbugs' }
                    steps {
                        sh 'mkdir -p reports/sast && spotbugs -textui -output reports/sast/spotbugs.xml . || true'
                        archiveArtifacts artifacts: 'reports/sast/spotbugs.xml', fingerprint: true
                    }
                }
            }
        }

        // ---------- DAST ----------
        stage('Dynamic Application Security Testing') {
            agent { docker 'owasp/zap2docker-stable' }
            steps {
                sh '''
                mkdir -p reports/dast
                zap-baseline.py -t http://my-app:8080 -r reports/dast/zap-report.html || true
                '''
                archiveArtifacts artifacts: 'reports/dast/**', fingerprint: true
            }
        }

        // ---------- Reporting ----------
        stage('Upload to DefectDojo') {
            steps {
                sh '''
                python upload_to_dojo.py \
                  --url $DOJO_URL \
                  --token $DOJO_TOKEN \
                  --engagement "CI/CD $BUILD_NUMBER" \
                  --scan-type "Jenkins" \
                  --file reports/**/*.json \
                  --file reports/**/*.xml \
                  --file reports/**/*.html
                '''
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}

Key Points

  • Parallel execution reduces total run time.
  • Each security tool runs inside a Docker container, ensuring consistent environments.
  • The || true pattern prevents a failing security scan from aborting the pipeline; results are still uploaded for visibility.
  • All artifacts are archived for later audit.
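
If a blanket || true feels too blunt, a variant worth considering is to capture the scanner's exit code and archive it, so a later quality-gate stage (or a reviewer) can still decide whether the build should ultimately fail. A minimal shell sketch, using bandit as a stand-in for any scanner:

```shell
#!/bin/sh
# Run a scanner without failing the stage, but record its exit code next to
# the report so the result stays visible in the archived artifacts.
run_scan() {
    mkdir -p reports/sast
    bandit -r . -f json -o reports/sast/bandit.json
    rc=$?                                     # capture before anything else runs
    echo "$rc" > reports/sast/bandit.exitcode
    return 0                                  # never abort the pipeline here
}
```

A later stage can read the .exitcode file and call error() if the gate should be enforced.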

How to Adapt the Template

  1. Swap tools – replace Bandit with Trivy for container scanning, or add a license‑compliance scanner.
  2. Add environment variables for credentials (use Jenkins Credentials Binding).
  3. Customize reporting – map the upload_to_dojo.py script to your own API client if you use a different platform.
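
If you would rather not maintain a Python client at all, the upload step can also be sketched directly with curl against DefectDojo's import-scan endpoint. The field names and the ENGAGEMENT_ID variable below are assumptions; check them against your DefectDojo version's API documentation:

```shell
#!/bin/sh
# Hypothetical curl-based alternative to upload_to_dojo.py. Uploads one
# report per call; loop over your report files to upload them all.
# DOJO_URL, DOJO_TOKEN, and ENGAGEMENT_ID must be set in the environment.
upload_report() {
    report="$1"      # path to the report file
    scan_type="$2"   # DefectDojo scan-type label, e.g. "Bandit Scan"
    curl -sf -X POST "$DOJO_URL/api/v2/import-scan/" \
        -H "Authorization: Token $DOJO_TOKEN" \
        -F "engagement=$ENGAGEMENT_ID" \
        -F "scan_type=$scan_type" \
        -F "file=@$report"
}
```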

3. Allowing Failures in GitHub Actions

Sometimes a security scan should never block the pipeline, but you still want the report. GitHub Actions provides two mechanisms:

Mechanism | Scope | Effect
continue-on-error: true | Job or individual step | The workflow continues even when the job/step exits non‑zero; the failure is still recorded as the step's outcome, but its conclusion is success, so dependent jobs run.
if: always() | Step | Guarantees the step runs regardless of previous failures (useful for uploading artifacts).

Example 1 – Whole Job Continues on Error

name: CI‑Security

on: [push, pull_request]

jobs:
  sast:
    runs-on: ubuntu-latest
    continue-on-error: true   # <-- the job never fails the workflow
    steps:
      - uses: actions/checkout@v4

      - name: Run Bandit
        run: |
          docker run --rm -v "$(pwd)":/src hysnsec/bandit \
            -r /src -f json -o /src/bandit-output.json

      # Upload the report even if Bandit fails
      - name: Upload Bandit Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: Bandit
          path: bandit-output.json

Example 2 – Only a Specific Step Continues

jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Bandit (may fail)
        id: bandit
        run: |
          docker run --rm -v "$(pwd)":/src hysnsec/bandit \
            -r /src -f json -o /src/bandit-output.json
        continue-on-error: true   # <-- step failure is tolerated, workflow continues

      - name: Upload Bandit Report
        if: always()               # Runs no matter what happened before
        uses: actions/upload-artifact@v4
        with:
          name: Bandit
          path: bandit-output.json

Best Practices

  • Prefer continue-on-error at the step level when only a single tool should be “non‑blocking”.
  • Use if: always() on artifact‑upload steps to guarantee results are stored.
  • Add a comment or badge in the workflow file to remind reviewers that a step is intentionally non‑fatal.
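
Building on the last point, one way to make a non-fatal step visible is a follow-up step that inspects the scan step's outcome (which, unlike its conclusion, is not masked by continue-on-error). This fragment assumes the scan step carries id: bandit, as in Example 2:

```yaml
      - name: Warn if Bandit found issues
        if: steps.bandit.outcome == 'failure'
        run: echo "::warning::Bandit reported findings - see the uploaded artifact."
```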

Common Questions & Tips

Question | Answer
How do I make Jenkins treat a tag as a branch? | In a Multibranch Pipeline, enable “Discover tags” under Branch Sources → Behaviors. Jenkins will then create a separate job for each tag.
Can I run the same pipeline on both Jenkins and GitHub Actions? | Yes – keep the core logic (e.g., Docker commands) in reusable scripts stored in the repo, then invoke them from either platform’s YAML or Jenkinsfile.
What if a security tool returns a non‑JSON output? | Convert it to JSON (or JUnit XML) before uploading to DefectDojo. Most tools provide a -f json flag; otherwise use a small wrapper script.
Is continue-on-error safe for production releases? | Use it only for diagnostic or reporting steps. Critical build steps (e.g., artifact publishing) should still fail the workflow if they encounter errors.
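
For the non‑JSON case above, the wrapper script can be as small as this sketch, which shells out to Python's json module to wrap raw tool output; the field names are illustrative, not a DefectDojo schema:

```shell
#!/bin/sh
# Wrap arbitrary plain-text tool output in a small JSON envelope so it can
# be archived or uploaded alongside the other machine-readable reports.
wrap_as_json() {
    tool="$1"   # tool name to embed in the envelope
    python3 -c 'import json, sys; print(json.dumps({"tool": sys.argv[1], "raw_output": sys.stdin.read()}))' "$tool"
}
```

Usage: some-scanner | wrap_as_json some-scanner > reports/some-scanner.json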

Conclusion

By correctly propagating Git tag information into Jenkins, you gain precise version control over releases. Building an enterprise‑grade pipeline that integrates SCA, SAST, and DAST ensures comprehensive security coverage while remaining modular and scalable. Finally, mastering failure‑allowance patterns in GitHub Actions lets you collect valuable security data without blocking downstream processes.

Apply these patterns to your own CI/CD environment, and you’ll have a robust, compliant, and transparent DevSecOps workflow ready for enterprise adoption.