Jenkins FAQ & Answers
41 expert Jenkins answers researched from official documentation. Every answer cites authoritative sources you can verify.
The Jenkins controller (formerly master) is the central server that manages the Jenkins ecosystem. It schedules jobs, stores configuration, serves the web UI, and coordinates build execution across agents. The controller maintains the build queue, dispatches jobs to available agents, monitors their status, and collects results. It runs as a Java application and can be deployed on-premises or in the cloud. Best practice: allocate a minimum of 4 GB RAM for production, enable regular backups, and avoid build execution on the controller itself to keep it lightweight and dedicated to coordination tasks.
Jenkins agents (nodes) are machines that execute build jobs delegated by the controller. Agents connect via JNLP, SSH, or cloud plugins. Each agent provides executors (slots for concurrent job execution) and can be labeled for job targeting (e.g., 'linux', 'docker', 'windows'). Agents can be permanent servers or dynamically provisioned cloud instances. They isolate build execution from the controller, enabling horizontal scaling, different OS environments, and resource optimization. Proper configuration requires sufficient disk space, appropriate CPU/memory allocation, and the required build tools installed.
Executors are threads on Jenkins agents that run build steps concurrently. Each agent can have multiple executors allowing parallel job execution. An agent with 4 executors can run 4 builds simultaneously. Best practice: Match executor count to CPU cores, leave resources for the OS, and consider memory requirements. Set executors per agent through Jenkins UI or programmatically. Jenkins automatically allocates available executors based on job queue priority and agent availability. Proper executor management prevents resource contention and ensures optimal build performance.
Agent labels are tags that categorize agents by capabilities, allowing precise job targeting. Labels specify OS (linux, windows), tools (docker, maven, nodejs), or environment (production, staging). Assign multiple labels per agent and use logical operators (&&, ||, !) for complex matching. In declarative pipelines: agent { label 'linux && docker' }. In scripted syntax: node('linux && maven') { ... }. Labels enable environment-specific builds, tool requirement matching, and resource optimization. Use consistent labeling conventions and descriptive names.
A Jenkinsfile is a text file containing Pipeline definitions stored in source control, implementing Pipeline as Code. It defines the entire build process using declarative or scripted Groovy syntax. Jenkinsfiles provide version control for build configurations, enabling code review, audit trails, and branch-specific pipelines. They serve as the single source of truth, eliminating UI-based configuration drift. Jenkinsfiles are automatically discovered by Multibranch Pipeline jobs. Benefits: reproducible builds, easy rollback, team collaboration, and infrastructure as code. Commit Jenkinsfiles to repository root.
Declarative Pipeline structure: pipeline { agent any; stages { stage('Build') { steps { sh 'make' } } stage('Test') { steps { sh 'npm test' } } } post { always { echo 'Cleanup' } } }. Required: pipeline block, agent specification, at least one stage with steps. Optional: environment, tools, options, parameters, triggers, post. Stages organize pipeline into logical phases (Build, Test, Deploy). Steps contain build commands. Post actions define cleanup and notifications. All code must be within pipeline block with stages containing steps.
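Laid out as a complete Jenkinsfile, the structure above reads as follows (a minimal sketch using the same make/npm commands from the answer; substitute your own build steps):

```groovy
pipeline {
    agent any                     // run on any available agent
    stages {
        stage('Build') {
            steps {
                sh 'make'         // build step
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'     // test step
            }
        }
    }
    post {
        always {
            echo 'Cleanup'        // runs regardless of build result
        }
    }
}
```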
Stages are logical groupings of steps representing distinct pipeline phases (Build, Test, Deploy). They provide visual separation in the Jenkins UI and enable stage-specific configuration. Syntax: stages { stage('Build') { steps { sh 'npm run build' } } stage('Test') { steps { sh 'npm test' } } }. Each stage requires at least one steps block. Stages support parallel execution, when conditions for selective execution, and post actions for stage-specific cleanup. Use descriptive names, granular division for visibility, and meaningful step organization. Stages can contain parallel sections and, since Declarative Pipeline 1.3, nested sequential stages.
Steps are individual commands executing specific tasks within pipeline stages. Common steps: sh for shell commands, bat for Windows batch, git for SCM operations, junit for test results, archiveArtifacts for build outputs. Usage: steps { sh 'npm install'; sh 'npm run test'; junit 'target/surefire-reports/*.xml' }. Steps can be nested within dir, timeout, retry, and withCredentials. Keep steps focused and atomic, use appropriate step types, and combine related steps logically. Steps provide actual build logic, making stages executable.
Agent configuration specifies where pipeline execution occurs. Declarative pipelines require agent at pipeline level with optional stage-level overrides. Options: agent any (any available agent), agent { label 'linux' } (specific label), agent { docker { image 'node:16' } } (Docker container), agent { kubernetes { label 'my-pod' } } (Kubernetes pod). Stage-level agents override pipeline agent: stage('Build') { agent { label 'maven' } steps { ... } }. Use descriptive labels, Docker agents for reproducible builds, and stage-specific agents for environment requirements.
Define environment variables in the environment block: environment { NODE_VERSION = '16'; BUILD_NUMBER = env.BUILD_NUMBER }. Access using shell syntax $VAR or Groovy syntax env.VAR. Jenkins provides built-in variables (BUILD_NUMBER, WORKSPACE, GIT_COMMIT). Dynamic values: script { env.DYNAMIC = sh(script: 'command', returnStdout: true).trim() }. Environment variables support pipeline-level and stage-level scope, with stage variables overriding pipeline variables. Use uppercase by convention, document variable purposes, and avoid storing sensitive data. Enable flexible, configurable pipeline behavior.
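Combining the patterns above into one sketch (BUILD_MODE and GIT_SHORT are illustrative names, not Jenkins built-ins):

```groovy
pipeline {
    agent any
    environment {
        NODE_VERSION = '16'                       // pipeline-level variable
    }
    stages {
        stage('Build') {
            environment {
                BUILD_MODE = 'release'            // stage-level scope overrides pipeline scope
            }
            steps {
                sh 'echo "Node $NODE_VERSION, mode $BUILD_MODE"'   // shell-side expansion
                script {
                    // capture a dynamic value from a shell command
                    env.GIT_SHORT = sh(script: 'git rev-parse --short HEAD',
                                       returnStdout: true).trim()
                }
                echo "Commit: ${env.GIT_SHORT}"   // Groovy-side access
            }
        }
    }
}
```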
When conditions control stage execution based on criteria. Syntax: when { branch 'main' } executes only on main branch. Available: branch (Git branch), environment (env variable), expression (Groovy), changelog (commit message), changeset (file changes), tag (Git tag). Combine with allOf, anyOf, not: when { allOf { branch 'main'; environment name: 'DEPLOY', value: 'true' } }. Use beforeAgent: true to check conditions before agent allocation. When conditions are placed inside stage blocks. Use specific branch conditions for deployment stages.
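A deployment stage combining these pieces might look like this (deploy.sh is a hypothetical script standing in for your deployment command):

```groovy
stage('Deploy') {
    when {
        beforeAgent true                           // evaluate before allocating an agent
        allOf {
            branch 'main'                          // only on the main branch
            environment name: 'DEPLOY', value: 'true'
        }
    }
    steps {
        sh './deploy.sh'                           // hypothetical deploy script
    }
}
```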
Parallel execution runs multiple stages simultaneously to reduce build time. In declarative pipelines the parallel block lives inside an enclosing stage: stage('Tests') { parallel { stage('Linux Test') { agent { label 'linux' } steps { sh 'npm test' } } stage('Windows Test') { agent { label 'windows' } steps { bat 'npm test' } } } }. Use failFast true on the enclosing stage (or failFast: true in scripted parallel) to stop all branches if one fails. Matrix execution generates combinations: matrix { axes { axis { name 'NODE_VERSION' values '14', '16', '18' } } stages { stage('Test') { steps { sh 'npm test' } } } }. Ensure sufficient agent executors and use parallel for independent operations like multi-platform testing.
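As a full Jenkinsfile sketch, the parallel test stages look like this (agent labels 'linux' and 'windows' are assumed to exist in your installation):

```groovy
pipeline {
    agent none                         // each parallel branch picks its own agent
    stages {
        stage('Tests') {
            failFast true              // abort remaining branches on first failure
            parallel {
                stage('Linux Test') {
                    agent { label 'linux' }
                    steps { sh 'npm test' }
                }
                stage('Windows Test') {
                    agent { label 'windows' }
                    steps { bat 'npm test' }
                }
            }
        }
    }
}
```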
Post actions define operations running at pipeline or stage completion regardless of status. Syntax: post { always { echo 'Cleanup' } success { echo 'Success!' } failure { echo 'Failed!' } }. Available: always, success, failure, unstable, aborted, changed, fixed, regression, unsuccessful, cleanup (runs last). Example: post { success { archiveArtifacts 'target/*.jar' } failure { mail to: 'team@example.com', subject: 'Build failed' } always { cleanWs() } }. Post actions execute after all stages complete, enabling notifications, artifact archiving, cleanup, and status reporting.
Store credentials in Manage Jenkins > Credentials > Add Credentials. Types: username/password, SSH keys, secret text, certificates, secret files. Access using: withCredentials([usernamePassword(credentialsId: 'git-creds', usernameVariable: 'USER', passwordVariable: 'PASS')]) { sh 'git push https://${USER}:${PASS}@github.com/repo.git' }. Or via environment: environment { GITHUB_TOKEN = credentials('github-token') }. Credentials are encrypted and scoped globally, per-folder, or per-pipeline. Never hardcode credentials, use descriptive IDs, implement least privilege, rotate regularly, and audit usage. Jenkins masks sensitive values in logs.
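Both access styles from the answer, shown together (credential IDs 'github-token' and 'git-creds', and the repository URL, are placeholders):

```groovy
pipeline {
    agent any
    environment {
        // for username/password kinds, credentials() also defines
        // GITHUB_TOKEN_USR and GITHUB_TOKEN_PSW automatically
        GITHUB_TOKEN = credentials('github-token')
    }
    stages {
        stage('Push') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'git-creds',
                                                  usernameVariable: 'USER',
                                                  passwordVariable: 'PASS')]) {
                    // single quotes: the shell expands the variables, so the
                    // secret is never interpolated into the Groovy string
                    sh 'git push https://$USER:$PASS@github.com/user/repo.git'
                }
            }
        }
    }
}
```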
Archive artifacts using: archiveArtifacts artifacts: 'target/*.jar', fingerprint: true. Use Ant-style patterns (**/*.jar, target/**/*). Fingerprinting tracks file usage across builds and enables dependency tracking. For large artifacts, use external repositories like Artifactory or Nexus. Use stash/unstash to transfer files between agents: stash includes: 'target/*.jar', name: 'build-output'. Best practices: archive only necessary files, use external storage for large artifacts, implement retention policies, and fingerprint for traceability. Artifacts are accessible through build pages, REST API, and Copy Artifact plugin.
Configure triggers to start builds automatically. Declarative syntax: triggers { cron('H 2 * * *') } for daily builds at 2 AM. For webhook triggers, configure Git/GitHub webhooks to Jenkins URL: https://jenkins.example.com/github-webhook/. Polling (less efficient): pollSCM('H/5 * * * *') checks every 5 minutes. Upstream triggers: triggers { upstream(upstreamProjects: 'dependency-job', threshold: 'SUCCESS') }. Multibranch pipelines automatically trigger on branch changes. Use webhooks over polling for efficiency, set appropriate cron schedules, and configure GitHub/GitLab integrations for PR-based builds.
Configure authentication in Manage Jenkins > Security. Options: Jenkins user database, LDAP (plugin version 733.vd3700c27b_043+), Active Directory, SAML 2.0 (4.583.585.v22ccc1139f55+ for replay attack protection), GitHub/GitLab OAuth. For authorization use Role-Based Strategy or Project-based Matrix Authorization. Matrix setup: grant Overall/Administer to administrators, Overall/Read to authenticated users, Job/Build to specific groups. LDAP config: server: 'ldap.company.com', rootDN: 'dc=company,dc=com', userSearchBase: 'ou=users', groupSearchBase: 'ou=groups'. Jenkins LTS 2.492.x requires Spring Security 6.3.4+ with Java 17/21 support. Use centralized authentication (SAML/LDAP) for enterprise, implement least privilege, audit permissions quarterly, and rotate service account credentials.
Enable HTTPS via reverse proxy (recommended) or Jenkins native HTTPS. Nginx config: server { listen 443 ssl; server_name jenkins.company.com; ssl_certificate /path/to/fullchain.pem; ssl_certificate_key /path/to/privkey.pem; ssl_protocols TLSv1.2 TLSv1.3; proxy_read_timeout 90s; location / { proxy_pass http://127.0.0.1:8080; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; } }. Critical: Jenkins must bind to localhost (127.0.0.1) only, never 0.0.0.0, to prevent unencrypted access on port 8080. Use Let's Encrypt certificates, set proxy_read_timeout to 90s (Jenkins recommendation), enable HTTP/2, implement rate limiting, and monitor certificate auto-renewal. Apache alternative: RequestHeader set X-Forwarded-Proto "https" and X-Forwarded-Port "443".
Secure agent-controller communication using JNLP4-connect protocol with TLS encryption and per-node secrets. Configure in Manage Jenkins > Security > Agent protocols. Enable only JNLP4-connect, disable legacy JNLP1/JNLP2/CLI1 (unencrypted). JNLP4 uses SSLEngine from Java Cryptography Architecture for TLS upgrade before secret exchange, with non-blocking I/O for performance. Modern alternative: WebSocket transport (recommended for Kubernetes) uses regular HTTP(S) port, no special network config needed. Agent security: (1) Run as non-root with minimal permissions. (2) Use outbound connections (agents → controller). (3) Network segmentation with firewalls. (4) Dedicated agent machines. (5) Monitor logs. For Kubernetes: WebSocket transport strongly recommended with pod security contexts and network policies. Update agents regularly, implement least privilege, and use inbound agents terminology (modern naming).
Enable the Groovy Sandbox (default for Jenkins Pipeline) to restrict untrusted scripts to whitelisted operations. All method calls, object construction, and field access are checked against approved operations. Configure in Manage Jenkins > In-process Script Approval. When the sandbox blocks unsafe methods, administrators approve signatures individually: approve only read-only methods (getters), never state-changing operations (setters, execute). Parameter validation: parameters { string(name: 'BRANCH', defaultValue: 'main', description: 'Branch to build', trim: true) }. Validate inputs: script { if (!(params.BRANCH ==~ /^[a-zA-Z0-9_\/-]+$/)) { error 'Invalid branch name' } } (==~ requires a full match, and the outer parentheses keep ! from binding to params.BRANCH alone). Prevent command injection using withCredentials binding: withCredentials([string(credentialsId: 'api-key', variable: 'KEY')]) { sh "curl -H 'Authorization: Bearer $KEY' https://api.example.com" }. Keep the sandbox enabled, review the approval queue regularly, validate all external inputs, and use the Script Security plugin for enforcement.
Use the Workspace Cleanup plugin or built-in steps. Clean the workspace: cleanWs(). Selective cleanup: cleanWs patterns: [[pattern: 'target/**', type: 'INCLUDE'], [pattern: '.git/**', type: 'EXCLUDE']]. Custom workspace: ws('/custom/workspace') { sh 'make build' }. Best practice: place cleanWs() in the post cleanup block: post { cleanup { cleanWs() } }. This ensures cleanup runs even if the build fails. Monitor disk usage with df -h and set retention policies. Use cleanWs(disableDeferredWipeout: true, deleteDirs: true) for synchronous deletion. Proper workspace management prevents disk space issues.
Set timeouts to prevent hanging builds: steps { timeout(time: 30, unit: 'MINUTES') { sh 'long-running-command' } }. Timeout can be applied at pipeline, stage, or step level. Available units include 'SECONDS', 'MINUTES', 'HOURS'. Example at pipeline level: options { timeout(time: 1, unit: 'HOURS') }. At stage level: stage('Build') { options { timeout(time: 30, unit: 'MINUTES') } steps { ... } }. Set appropriate timeouts based on expected execution time. Use shorter timeouts for fast operations and longer ones for integration tests or deployments.
Use retry for flaky operations: retry(3) { sh 'npm test' }. Retry wraps a block of steps and re-executes them if they fail. Useful for network-dependent operations like API calls, external service connections, or flaky tests. Example with timeout: timeout(time: 5, unit: 'MINUTES') { retry(3) { sh 'curl https://api.example.com/health' } }. Retry count should be reasonable (3-5 attempts) to avoid excessive build times. Can be combined with sleep between retries in scripted syntax: retry(3) { try { sh 'command' } catch (err) { sleep 10; throw err } }.
Use catchError to handle errors gracefully: catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') { sh 'risky-command' }. This allows pipeline to continue despite stage failures. In scripted pipelines use try-catch: try { sh 'command' } catch (Exception e) { currentBuild.result = 'UNSTABLE'; echo "Error: ${e}" }. Set failFast: true in parallel stages to stop all branches on first failure. Use meaningful error messages and log exception details. Best practice: Combine error handling with notifications to alert team of issues while allowing pipeline to complete cleanup actions.
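Both error-handling styles from the answer in one sketch ('risky-command' and 'command' are hypothetical stand-ins for steps that may fail):

```groovy
pipeline {
    agent any
    stages {
        stage('Risky') {
            steps {
                // mark only this stage FAILURE while the build stays SUCCESS
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh 'risky-command'        // hypothetical command that may fail
                }
            }
        }
        stage('Scripted style') {
            steps {
                script {
                    try {
                        sh 'command'          // hypothetical command
                    } catch (Exception e) {
                        currentBuild.result = 'UNSTABLE'
                        echo "Error: ${e}"    // log details for debugging
                    }
                }
            }
        }
    }
}
```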
Install the Email Extension plugin from Manage Jenkins > Plugins. Configure SMTP in Manage Jenkins > System > Extended E-mail Notification. Gmail example: SMTP server smtp.gmail.com, port 465 (SSL), enable "Use SMTP authentication", enter the Gmail address and an app-specific password (not the regular password). Test with the "Test configuration" button. Pipeline usage: post { failure { emailext subject: 'Build Failed: ${JOB_NAME} - ${BUILD_NUMBER}', body: '${BUILD_URL}\n\nConsole: ${BUILD_LOG}\n\nChanges: ${CHANGES}', to: 'team@example.com' } success { emailext subject: 'Build Success: ${JOB_NAME}', body: '${BUILD_URL}', to: 'team@example.com' } }. In single-quoted strings the plugin's token macro expands ${JOB_NAME}, ${BUILD_URL}, ${BUILD_LOG}, ${CHANGES}, and ${GIT_COMMIT}; env.-prefixed forms only expand in double-quoted Groovy strings. Send on specific conditions (success, failure, unstable, changed) to avoid alert fatigue. For Gmail, create an app password in Google Account security settings.
Install Slack Notification plugin from Manage Jenkins > Plugins. Slack setup: search "Incoming Webhooks" app in Slack workspace, install, create webhook for target channel, copy webhook URL. Jenkins setup: Manage Jenkins > Manage Credentials > Global > Add Credentials, select "Secret text" kind, paste webhook URL in Secret field, assign credential ID (e.g., 'slack-webhook'). Configure in Manage Jenkins > System > Slack, enter workspace name and select credential. Pipeline usage: slackSend channel: '#builds', color: 'danger', message: "Build Started - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)". Colors: 'good' (green), 'warning' (yellow), 'danger' (red). Post block: post { failure { slackSend channel: '#alerts', color: 'danger', message: 'Build failed!' } }. Security: Never expose webhook token directly, always use Jenkins Credentials. Plugin supports attachments and blocks for rich messages.
Debug using: (1) Console output for detailed logs and error traces. (2) The Blue Ocean UI for a visual stage breakdown with intuitive failure identification. (3) Pipeline Replay to modify and rerun a pipeline without committing changes - the Replay button appears after a run completes, allows editing the Jenkinsfile and Shared Library code, and can be invoked multiple times for parallel testing. (4) The Declarative Linter to validate syntax: Check Pipeline Syntax in the job configuration. (5) Debug output: echo "DEBUG: Variable = ${env.VAR}" and sh 'set -x' for shell debugging. (6) The Snippet Generator to verify step syntax. (7) Blue Ocean stage restart: click the failed stage node, then the Restart link. Common issues: syntax errors (cannot Replay), credential ID mismatches, unavailable agents, and script approval blocks. Best practices: add echo statements for variable inspection, test in feature branches, save Replay changes to a file before running, and keep Jenkinsfiles in version control to track changes and roll back.
Create structure: vars/ for global functions, src/ for Groovy classes, resources/ for non-Groovy files. Configure in Manage Jenkins > System > Global Pipeline Libraries. Use in Jenkinsfile: @Library('shared-library@v1.0') _. Example function vars/buildMaven.groovy: def call(String pomFile = 'pom.xml') { sh "mvn -f ${pomFile} clean package" }. Usage: buildMaven('app/pom.xml'). For classes: src/com/company/BuildHelper.groovy then import. Version using Git tags/branches. Best practices: version libraries, document functions, use descriptive names, test independently. Enables code reuse and standardization.
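The vars/buildMaven.groovy example from the answer, as a complete file (the library name 'shared-library' and tag v1.0 come from the Jenkinsfile annotation above):

```groovy
// vars/buildMaven.groovy in the shared library repository.
// The file name becomes the step name; call() is invoked as buildMaven(...).
def call(String pomFile = 'pom.xml') {
    // double quotes so Groovy interpolates the pomFile argument
    sh "mvn -f ${pomFile} clean package"
}
```

In the consuming Jenkinsfile, `@Library('shared-library@v1.0') _` at the top makes `buildMaven('app/pom.xml')` available as a step anywhere steps are allowed.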
Use Docker agents for reproducible builds: agent { docker { image 'maven:3.8.4-openjdk-11' } }. For custom Dockerfile: agent { dockerfile { filename 'Dockerfile.build' } }. Private registry: agent { docker { image 'private/node:16' registryUrl 'https://registry.company.com' registryCredentialsId 'docker-creds' } }. The Dockerfile agent builds a new image from Dockerfile in repository. Use specific image versions (not 'latest') for reproducibility. Best practices: Clean up containers after builds, use .dockerignore files, specify resource limits, and use official images from trusted sources. Requires Docker Pipeline plugin.
Build Docker images using: steps { script { def image = docker.build("my-app:${env.BUILD_NUMBER}") } }. Push to a registry: steps { script { def image = docker.build("my-app:${env.BUILD_NUMBER}"); image.push(); image.push('latest') } }. Use a private registry: docker.withRegistry('https://registry.company.com', 'docker-creds') { def image = docker.build("my-app:${env.BUILD_NUMBER}"); image.push() }. Note the double quotes: Groovy only interpolates ${env.BUILD_NUMBER} in double-quoted strings. Best practices: use specific tags (not just 'latest'), scan images for vulnerabilities, use multi-stage builds for smaller images, and clean up intermediate images. Tag with BUILD_NUMBER for traceability.
Install the Kubernetes plugin and configure a cloud in Manage Jenkins > Manage Nodes and Clouds > Configure Clouds > Add Kubernetes. Enter the Kubernetes URL and Jenkins URL. Pod template configuration methods: (1) YAML-based (recommended):

agent {
  kubernetes {
    yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8.4-openjdk-17
    command: ['cat']
    tty: true
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "1Gi"
        cpu: "1"
'''
  }
}

Execute in a specific container: container('maven') { sh 'mvn clean package' }. (2) The podTemplate step for ephemeral pods (suited to new users). Default: the plugin uses the jenkins/inbound-agent Docker image if no container template is specified. The plugin allocates Jenkins agents in Kubernetes pods, starts them for builds, and stops them after completion. Configure namespace, service account, resource requests/limits, and pod security contexts. Best practices: use resource limits, specific image versions, and pod templates for reusability.
Matrix builds test multiple configurations simultaneously: matrix { axes { axis { name 'NODE_VERSION' values '14', '16', '18' } axis { name 'OS' values 'linux', 'windows' } } stages { stage('Test') { steps { sh 'npm test' } } } }. This generates 6 combinations (3 versions × 2 OSes). Access values: ${NODE_VERSION}, ${OS}. Use excludes to skip combinations: excludes { exclude { axis { name 'NODE_VERSION' values '14' } axis { name 'OS' values 'windows' } } }. Matrix builds provide comprehensive testing across platforms, versions, and configurations.
Define parameters: parameters { string(name: 'BRANCH', defaultValue: 'main', description: 'Branch to build'); choice(name: 'ENVIRONMENT', choices: ['dev', 'staging', 'prod'], description: 'Deploy environment'); booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run tests') }. Access parameters: ${params.BRANCH}, ${params.ENVIRONMENT}, ${params.RUN_TESTS}. Available types: string, text, booleanParam, choice, password. Parameters enable build customization without code changes. First build uses defaults; subsequent builds prompt for values. Use descriptive names and descriptions. Validate parameter values to prevent security issues.
Use input for manual approval: input message: 'Deploy to production?', ok: 'Deploy'. With parameters: input message: 'Enter version', parameters: [string(name: 'VERSION', description: 'Release version')]. Capture input: def userInput = input message: 'Proceed?', parameters: [choice(name: 'ENVIRONMENT', choices: ['dev', 'prod'])]. Use submitter to restrict who can approve: input message: 'Deploy?', submitter: 'admin,release-team'. Input pauses pipeline execution until user responds. Best practice: Use timeout with input to prevent indefinite waits: timeout(time: 1, unit: 'HOURS') { input message: 'Deploy?' }.
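The timeout, submitter, and parameter patterns above combine into one approval stage (a sketch; the submitter list and choices are placeholders):

```groovy
stage('Approve') {
    steps {
        timeout(time: 1, unit: 'HOURS') {          // abort if nobody responds in time
            script {
                def userInput = input(
                    message: 'Deploy to production?',
                    ok: 'Deploy',
                    submitter: 'admin,release-team',               // who may approve
                    parameters: [choice(name: 'ENVIRONMENT',
                                        choices: ['dev', 'prod'])]
                )
                // with a single parameter, input returns its value directly;
                // with multiple parameters it returns a map keyed by name
                echo "Deploying to ${userInput}"
            }
        }
    }
}
```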
Use the ws step to allocate a custom workspace and prevent interference: ws('/custom/workspace/path') { checkout scm; sh 'make build' }. In declarative pipelines: agent { node { label 'linux'; customWorkspace '/custom/path' } } (customWorkspace must sit inside a node block). Useful for builds requiring specific paths or sharing a workspace between jobs. Custom workspaces persist across builds unless explicitly cleaned. Dynamic paths: ws("/workspace/${env.JOB_NAME}/${env.BUILD_NUMBER}") { ... }. For advanced disk management, use the External Workspace Manager plugin with a disk allocation strategy: exwsAllocate(diskPoolId: 'pool1', path: '/custom/path', estimatedWorkspaceSize: 1024) allocates a workspace on a disk with sufficient space. Pattern: ${physicalPathOnDisk}/${JOB_NAME}/${BUILD_NUMBER}. Best practice: clean in a post block post { always { cleanWs() } } or use exwsCleanup. Custom workspaces are not auto-cleaned, so implement a cleanup strategy to prevent disk space exhaustion. Always use absolute paths.
Use stash to save files for use on a different agent: stash includes: 'target/*.jar', name: 'build-output'. Unstash on another agent: unstash 'build-output'. Stashes are stored on the controller and are designed for small files; use them for transferring build artifacts between agents without external storage. Example: stage('Build') { agent { label 'linux' } steps { sh 'mvn package'; stash includes: 'target/*.jar', name: 'app' } } stage('Test') { agent { label 'windows' } steps { unstash 'app'; bat 'test.bat' } }. For large files use an external artifact repository instead.
Use checkout scm for automatic checkout: steps { checkout scm }. For specific branch: checkout([$class: 'GitSCM', branches: [[name: '*/main']], userRemoteConfigs: [[url: 'https://github.com/user/repo.git']]]). With credentials: checkout([$class: 'GitSCM', branches: [[name: '*/main']], userRemoteConfigs: [[url: 'https://github.com/user/repo.git', credentialsId: 'git-creds']]]). Skip default checkout: options { skipDefaultCheckout() } then checkout manually. For sparse checkout add extensions. Git plugin automatically configures environment variables: GIT_COMMIT, GIT_BRANCH, GIT_URL. Use snippet generator to create complex checkout configurations.
Create a Multibranch Pipeline job in Jenkins. Configure the source: GitHub (GitHub Branch Source plugin), GitLab (GitLab Branch Source plugin), or Bitbucket, with repository URL and credentials. Set a branch discovery strategy (all branches, only PRs, regex patterns). Jenkins automatically discovers branches with Jenkinsfiles and creates sub-jobs. Webhook setup: GitLab webhook URL https://JENKINS_URL/project/PROJECT_NAME or https://USERID:APITOKEN@JENKINS_URL/project/YOUR_JOB (with auth). GitHub webhook: install the Multibranch Scan Webhook Trigger plugin, URL http://JENKINS_URL/multibranch-webhook-trigger/invoke?token=yourtoken. Important: no 'Trigger' setting is needed in the Multibranch config; webhooks POSTed to the project URL trigger branch indexing. The GitLab plugin listens for Push Hooks only (Merge Request hooks are ignored). Each branch gets its own build history; when a branch is deleted, its job is automatically removed. Use organization folders for multiple repositories. Webhook authentication prevents unauthorized triggering.
Install the Blue Ocean plugin from Manage Jenkins > Plugins (note: new development has ceased, but it still provides valuable visualization). Access via Jenkins sidebar > Blue Ocean. Features: (1) A visual pipeline editor for creating Jenkinsfiles without coding, guiding users through an intuitive pipeline creation process. (2) Sophisticated visualization of CD pipelines with fast, intuitive comprehension of status. (3) Pipeline run details with stage visualization, integrated logs, and failure identification. (4) A personalized dashboard showing your pipelines. (5) Native branch and PR integration with GitHub/Bitbucket. Blue Ocean automatically detects Multibranch Pipelines. Visual stage breakdown, pause/resume pipeline, rerun failed stages. URL: https://jenkins.example.com/blue/. Status: no new functionality is being added, but existing features remain valuable for teams wanting better pipeline visualization. The community is now focusing on alternative visualization plugins for similar functionality.
Back up critical files: JENKINS_HOME directory (includes jobs, builds, plugins, configuration). Exclude: builds/, workspace/, caches/. Use ThinBackup plugin (currently maintained, recommended by Jenkins) for scheduled backups. ThinBackup backs up global and job-specific configurations only (not archives/workspace), supports JCasC since version 2.0. Configure backup location outside JENKINS_HOME. Backup frequency: daily for active instances. Critical files: config.xml (job configs), credentials.xml (encrypted), plugins/, jobs/*/config.xml. Important: ThinBackup backs up credentials.xml but not server keys - fresh server cannot decrypt credentials, requires separate secrets backup for migration. Use Configuration as Code (JCasC) plugin to declaratively apply configuration from jenkins.yaml, enabling version control for Jenkins config. For cloud: use automated snapshots (AWS EBS, Azure Disks). Disaster recovery: store backups off-site, test restore quarterly, document procedures. Retention: 30 days recommended.
Install the Configuration as Code plugin. Create jenkins.yaml:

jenkins:
  systemMessage: 'Configured by JCasC'
  numExecutors: 2
credentials:
  system:
    domainCredentials:
      - credentials:
          - string:
              id: 'api-key'
              secret: '${API_KEY}'

Load the config via Manage Jenkins > Configuration as Code > Apply new configuration. Environment variables: ${VAR_NAME}. Benefits: version control for Jenkins configuration, reproducible Jenkins instances, disaster recovery. Export the current config with Download Configuration. Store jenkins.yaml in source control. Use with Docker: mount jenkins.yaml as a volume. Reload configuration without a restart.