CI/CD using Jenkins


1. What is Jenkins

Jenkins is an open-source automation tool written in Java, with plugins built for continuous integration. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes into the project and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies. It is a server-based system that runs in servlet containers such as Apache Tomcat.

Jenkins achieves continuous integration with the help of plugins, which allow the integration of the various DevOps stages. If you want to integrate a particular tool, you need to install the plugin for that tool: for example Git, Maven 2 project, Amazon EC2, or HTML Publisher.
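As a rough illustration of how plugins surface inside a pipeline, the sketch below uses the `git` step contributed by the Git plugin and shells out to Maven. The repository URL is a placeholder, and the presence of Maven on the agent is an assumption for this example.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // 'git' is a step contributed by the Git plugin;
                // the URL here is a placeholder for this sketch
                git 'https://github.com/example/demo-app.git'
            }
        }
        stage('Build') {
            steps {
                // Assumes Maven is installed on the agent
                sh 'mvn -B clean package'
            }
        }
    }
}
```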

  • Features of Jenkins

    • It is an open-source tool with great community support.

    • It is easy to install.

    • It has 1000+ plugins to ease your work. If a plugin does not exist, you can code it and share it with the community.

    • It is free of cost.

    • It is built with Java and hence platform-independent: it is available for all major operating systems, such as macOS, Windows, and Linux.

    • Jenkins also supports cloud-based architecture so that we can deploy Jenkins in cloud-based platforms.

    • Jenkins reduces the chance of errors, as the complete workflow is automated without manual intervention.

2. Uses of Jenkins

  • Deploying code into production

    If all of the tests developed for a feature or release branch are green, Jenkins (or another CI system) may automatically publish the code to a staging or production environment; this is often referred to as continuous deployment. Changes can also be seen before they are merged, for example in a dynamic staging environment. Once merged, the build is promoted to a central staging system, a pre-production system, or even a production environment.

  • Enabling task automation

    Another instance in which one may use Jenkins is to automate workflows and tasks. If a developer is working on several environments, they will need to install or upgrade an item on each of them. If the installation or update requires more than 100 steps to complete, it will be error-prone to do it manually. Instead, you can write down all the steps needed to complete the activity in Jenkins. It will take less time, and you can complete the installation or update without difficulty.

  • Reducing the time it takes to review a code

    Jenkins is a CI system that can communicate with other DevOps tools and notify users when a merge request is ready to merge. This is typically the case when all tests have passed and all other conditions have been satisfied. Furthermore, the merge request may indicate the difference in code coverage. Code coverage is determined by the number of lines of code in a component and how many of them are executed by tests. By surfacing this information automatically, Jenkins significantly cuts the time it takes to review a merge request and supports a transparent development process among team members.

  • Driving continuous integration

    Before a change to the software can be released, it must go through a series of complex processes. The Jenkins pipeline enables the interconnection of many events and tasks in a sequence to drive continuous integration. It has a collection of plugins that make integrating and implementing continuous integration and delivery pipelines a breeze. A Jenkins pipeline’s main feature is that each assignment or job relies on another task or job.

    Continuous delivery pipelines, on the other hand, have distinct stages: test, build, release, deploy, and so on. These stages are inextricably linked to one another, and a CD pipeline is the series of events that moves a build through them.

  • Increasing code coverage

    Jenkins and other CI servers can verify code against the test suite and report coverage; as tests are added, code coverage improves.

  • Enhancing coding efficiency

    Jenkins dramatically improves the efficiency of the development process. For example, a command-line script can be turned into a GUI button click by wrapping the script in a Jenkins job. Jenkins jobs can also be parameterized to allow for customization or user input. Hundreds of lines of code can be saved as a result.

  • Simplifying audits

    When Jenkins tasks run, they collect console output from the stdout and stderr streams. This makes troubleshooting with Jenkins extremely straightforward.
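The parameterized tasks mentioned under "Enhancing coding efficiency" can be sketched roughly as below; the parameter name and the environment values are made up for illustration.

```groovy
pipeline {
    agent any
    parameters {
        // Hypothetical parameter; rendered as a form field when the job is run
        choice(name: 'TARGET_ENV',
               choices: ['staging', 'production'],
               description: 'Environment to deploy to')
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying to ${params.TARGET_ENV}"
            }
        }
    }
}
```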

3. Overall Architecture

Jenkins follows a Master-Slave architecture to manage distributed builds. In this architecture, the master and the slaves communicate over the TCP/IP protocol.

  • Jenkins Master/Server

  • Jenkins Slave/Node/Agent Nodes/Build Server

  • Jenkins Web Interface

    • Jenkins Master/Server

      The main server of Jenkins is the Jenkins master. It is a web dashboard, powered by a WAR file, that by default runs on port 8080. From the dashboard we can configure jobs/projects, but the builds take place on the nodes/slaves. By default, one node (slave) is configured and running on the Jenkins server. We can add more nodes by their IP address, username, and password using the SSH, JNLP, or WebStart methods.

    • The server's job or master's job is to handle:

      • Scheduling build jobs.

      • Dispatching builds to the nodes/slaves for the actual execution.

      • Monitoring the nodes/slaves (possibly taking them online and offline as required).

      • Recording and presenting the build results.

      • A Master/Server instance of Jenkins can also execute build jobs directly.

    • Jenkins Slave

      The Jenkins slave executes the build jobs dispatched by the master. We can configure a project to always run on a particular slave machine, or a particular type of slave machine, or simply let Jenkins pick the next available slave/node.

      Since Jenkins is developed in Java, it is platform-independent; thus Jenkins masters/servers and slaves/nodes can be configured on any server, including Linux, Windows, and macOS.

    • Jenkins Agent Nodes

      Next are Jenkins Agents. They are the worker nodes that actually perform all the steps mentioned in a job. They are assigned to a job when you create it and they run the actual tasks.

      When you trigger a Jenkins task from the master without any agent attached, the master node itself acts as the agent. We can attach any number of Jenkins agents to a master, using a combination of Windows, Linux, and even container servers as build agents. Depending on the application, you can also limit execution to specific agents.
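As a sketch of how execution can be limited to specific agents, a declarative pipeline can request an agent by label. The label `linux` below is an assumption; it must match a label you have actually configured on one of your nodes.

```groovy
pipeline {
    // Run the whole pipeline only on agents carrying the (assumed) label 'linux'
    agent { label 'linux' }
    stages {
        stage('Build') {
            steps {
                echo 'This runs on an agent labelled linux, not on the master'
            }
        }
    }
}
```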

4. History of Jenkins

Kohsuke Kawaguchi, a Java developer working at Sun Microsystems, was tired of building the code and fixing errors repetitively. In 2004, he created an automation server called Hudson that automated build and test tasks.

In 2011, Oracle, which had acquired Sun Microsystems, had a dispute with Hudson's open-source community, so the community forked Hudson and renamed the fork Jenkins.

Both Hudson and Jenkins continued to operate independently. But in a short span of time, Jenkins attracted a lot of contributors and projects, while Hudson remained with only around 32 projects. Over time Jenkins became more popular, and Hudson is no longer maintained.

5. What is CI (Continuous Integration)

Continuous Integration is the practice of integrating code changes from multiple developers into a single project many times a day. The software is built and tested immediately after every code commit. If the tests pass, the build becomes a candidate for deployment, and if the deployment succeeds, the code is pushed to production. This commit, build, test, and deploy cycle is continuous, hence the name continuous integration/deployment.
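The commit, build, and test cycle described above might be sketched in a Jenkinsfile like this; the polling schedule and the stage contents are placeholders for illustration.

```groovy
pipeline {
    agent any
    // Poll the SCM for new commits roughly every five minutes (cron-style syntax)
    triggers { pollSCM('H/5 * * * *') }
    stages {
        stage('Build') {
            steps { echo 'Compiling the latest commit...' }
        }
        stage('Test') {
            steps { echo 'Running the test suite against the build...' }
        }
    }
}
```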

6. What is CD ( Continuous Delivery)

In order to have an effective continuous delivery process, it’s important that CI is already built into your development pipeline. The goal of continuous delivery is to have a codebase that is always ready for deployment to a production environment.

In continuous delivery, every stage—from the merger of code changes to the delivery of production-ready builds—involves test automation and code release automation. At the end of that process, the operations team is able to deploy an app to production quickly and easily.
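One way to picture continuous delivery is a pipeline that automates every stage up to a production-ready build, then leaves the final push behind a manual approval. The stage names and messages below are illustrative.

```groovy
pipeline {
    agent any
    stages {
        stage('Build')   { steps { echo 'Building...' } }
        stage('Test')    { steps { echo 'Running automated tests...' } }
        stage('Release') { steps { echo 'Producing a production-ready build...' } }
        stage('Deploy') {
            steps {
                // The manual gate is what makes this delivery rather than deployment
                input 'Deploy this build to production?'
                echo 'Deploying...'
            }
        }
    }
}
```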

7. Difference between CI and CD

Continuous integration (CI) is a practice in which developers frequently merge small code changes into a shared codebase, where they are checked automatically. Due to the scale of requirements and the number of steps involved, this process is automated to ensure that teams can build, test, and package their applications in a reliable and repeatable way. CI helps streamline code changes, freeing up developers' time to make changes and contribute to improved software.

Continuous delivery (CD) is the automated delivery of completed code to environments like testing and development. CD provides an automated and consistent way for code to be delivered to these environments.

8. Continuous deployment

The final stage of a mature CI/CD pipeline is continuous deployment. As an extension of continuous delivery, which automates the release of a production-ready build to a code repository, continuous deployment automates releasing an app to production. Because there is no manual gate at the stage of the pipeline before production, continuous deployment relies heavily on well-designed test automation.

In practice, continuous deployment means that a developer’s change to a cloud application could go live within minutes of writing it (assuming it passes automated testing). This makes it much easier to continuously receive and incorporate user feedback. Taken together, all of these connected CI/CD practices make deployment of an application less risky, whereby it’s easier to release changes to apps in small pieces, rather than all at once. There’s also a lot of upfront investment, though, since automated tests will need to be written to accommodate a variety of testing and release stages in the CI/CD pipeline.

9. CI/CD tools

CI/CD tools can help a team automate their development, deployment, and testing. Some tools specifically handle the integration (CI) side, some manage development and deployment (CD), while others specialize in continuous testing or related functions.

  • Jenkins: Jenkins is an open-source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery.

  • Tekton: Tekton is a powerful yet flexible Kubernetes-native open-source framework for creating continuous integration and delivery (CI/CD) systems.

  • Spinnaker: a CD platform built for multi-cloud environments.

  • GoCD: a CI/CD server with an emphasis on modeling and visualization.

  • Concourse: an open-source CI/CD tool that describes itself as a "continuous thing-doer".

  • Screwdriver: a build platform designed for CD.

  • Azure DevOps: Azure DevOps provides a variety of CI/CD tools, like Git repo management, testing, reporting, and more. It provides support for Azure, Kubernetes, and VM-based resources.

  • AWS CodePipeline: AWS CodePipeline is a continuous delivery service that allows you to automate release pipelines. It easily integrates with third-party services like GitHub.

  • Cloud Build from Google Cloud Platform (GCP): Cloud Build from GCP is a serverless CI/CD platform that allows you to build software across all languages, such as Java and Go, deploy across multiple environments, and access cloud-hosted CI/CD workflows within your own private network.

The major public cloud providers all offer CI/CD solutions, along with GitLab, CircleCI, Travis CI, Atlassian Bamboo, and many others.

Additionally, any tool that’s foundational to DevOps is likely to be part of a CI/CD process. Tools for configuration automation (such as Ansible, Chef, and Puppet), container runtimes (such as Docker, rkt, and cri-o), and container orchestration (Kubernetes) aren’t strictly CI/CD tools, but they’ll show up in many CI/CD workflows.

10. Types of pipeline structure

A Jenkins Pipeline is a collection of jobs or events that brings the software from version control into the hands of the end users by using automation tools. It is used to incorporate continuous delivery in our software development workflow.
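Before looking at the two styles, here is about the smallest possible declarative Jenkinsfile, just to fix the shape in mind:

```groovy
pipeline {
    agent any            // run on any available node
    stages {
        stage('Hello') {
            steps {
                echo 'Hello from a minimal pipeline'
            }
        }
    }
}
```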

  • Scripted pipeline: Sequential execution, using Groovy expressions for flow control

  • Declarative pipeline: It uses a framework to control execution (pre-defined commands to ease pipeline creation)

  • Creating your first Jenkins pipelines

  • Step 1:

    Log into Jenkins and select ‘New item’ from the dashboard.

  • Step 2:

    Next, enter a name for your pipeline and select ‘pipeline’ project. Click on ‘ok’ to proceed.

  • Step 3:

    Scroll down to the pipeline section and choose whether you want a declarative pipeline or a scripted one.

  • Step 4:

    If you want a scripted pipeline, choose ‘pipeline script’ and start typing your code.

  • Step 5:

    If you want a declarative pipeline, select ‘pipeline script from SCM’ and choose your SCM.

  • Step 6:

    The script path holds the name of the Jenkinsfile that will be fetched from your SCM and run. Finally, click on ‘apply’ and ‘save’. You have successfully created your first Jenkins pipeline.

  • Declarative Pipeline

    Since this is a declarative pipeline, I’m writing the code locally in a file named ‘Jenkinsfile’ and then pushing this file into my global git repository. While executing the ‘Declarative pipeline’ demo, this file will be accessed from my git repository. The following is a simple demonstration of building a pipeline to run multiple stages, each performing a specific task.

    • Stage one executes a simple echo command which is specified within the ‘steps’ block.

    • Stage two executes an input directive. This directive allows you to prompt for user input in a stage. It displays a message and waits for the user's input. If the input is approved, the stage triggers further deployments.

    • Stage three runs a ‘when’ directive with a ‘not’ tag. This directive allows you to execute a step depending on the conditions defined within the ‘when’ block. If the conditions are met, the corresponding stage is executed. It must be defined at the stage level.

        pipeline {
            agent any
            stages {
                stage('One') {
                    steps {
                        echo 'Hi, this is Aditya'
                    }
                }
                stage('Two') {
                    steps {
                        input('Do you want to proceed?')
                    }
                }
                stage('Three') {
                    when {
                        not {
                            branch "master"
                        }
                    }
                    steps {
                        echo "Hello"
                    }
                }
                stage('Four') {
                    parallel {
                        stage('Unit Test') {
                            steps {
                                echo "Running the unit test..."
                            }
                        }
                        stage('Integration test') {
                            agent {
                                docker {
                                    reuseNode true
                                    image 'ubuntu'
                                }
                            }
                            steps {
                                echo "Running the integration test..."
                            }
                        }
                    }
                }
            }
        }
      
    • Stage four runs a parallel directive. This directive allows you to run nested stages in parallel. Here, I’m running two nested stages in parallel, namely, ‘Unit test’ and ‘Integration test’. Within the integration test stage, I’m defining a stage-specific docker agent. This docker agent will execute the ‘Integration test’ stage.

    • Within the docker agent block are two options. The reuseNode option is a Boolean: when true, the container runs on the node allocated for the agent specified at the top level of the pipeline. In this case the top-level agent is ‘any’, so the container is executed on whichever available node the pipeline is running on. By default this Boolean is false.

    • There are some restrictions while using the parallel directive:

      • A stage can either have a parallel or steps block, but not both

      • Within a parallel directive you cannot nest another parallel directive

      • If a stage has a parallel directive then you cannot define ‘agent’ or ‘tool’ directives

    • The following screenshot is the result of the pipeline: it waits for the user input, and on clicking ‘proceed’ the execution resumes.

  • Scripted Pipeline

    Let's execute a simple scripted pipeline by running the following script.

      node {
          for (i = 0; i < 2; i++) {
              stage("Stage #" + i)
              print 'Hello, world!'
              if (i == 0) {
                  git 'https://github.com/ADITYADAS1999/gitnew.git'
                  echo 'Running on Stage #0'
              } else {
                  build 'Declarative pipeline'
                  echo 'Running on Stage #1'
              }
          }
      }
    

    In the above code I have defined a ‘node’ block within which I’m running the following:

    • A ‘for’ loop that creates two stages, namely Stage #0 and Stage #1. Once each stage is created, it prints the ‘Hello, world!’ message

    • Next, I’m defining a simple ‘if-else’ statement. If the value of ‘i’ equals zero, stage #0 executes the git and echo commands: the ‘git’ command clones the specified git repository, and the echo command simply displays the specified message

    • The else branch executes when ‘i’ is not equal to zero, so stage #1 runs the commands within the else block. The ‘build’ command runs the specified job, in this case the ‘Declarative pipeline’ job we created earlier in the demo. Once that job completes, the echo command runs

Now that I’ve explained the code, let's run the pipeline. The following screenshot is the result of the Scripted pipeline.

  • Shows the results of Stage #0

  • Shows the logs of Stage #1 and starts building the ‘Declarative pipeline’

  • Execution of the ‘Declarative pipeline’ job.

  • Results.

11. Get Involved & Contribute to Jenkins Community

📌 GitHub

📌 Twitter

📌 Linkedin

📌 community

12. Resources

🚩 https://www.jenkins.io/

🚩 https://www.jenkins.io/doc/book/

🚩 https://www.jenkins.io/download/

🚩 https://youtu.be/f4idgaq2VqA

🚩 https://youtu.be/h8gbk7coq5g

🚩 https://www.jenkins.io/doc/tutorials/

🚩 https://plugins.jenkins.io/

That's all for this blog; I hope you learned something new. Feel free to share your thoughts and feedback. Thanks for reading.

Connect with me on socials 👀

Twitter 🖱

LinkedIn 🖱

Github 🖱
