
If you have been active on social media recently, you have probably noticed a significant amount of discussion surrounding ChatGPT. The advanced language generation model developed by OpenAI has sparked a lot of interest among tech enthusiasts, developers, and AI experts, thanks to its ability to comprehend and generate human-like text, which makes it a valuable tool for many applications.

Some experts believe that ChatGPT could revolutionize how we interact with computers and machines, making them more natural and intuitive. This has led to much discussion and excitement about AI-driven technology on social media. In this blog, we will examine how the use of ChatGPT can streamline the DevOps process.

What is ChatGPT?

ChatGPT is a cutting-edge AI model developed by OpenAI that can understand and generate human language in a conversational context. As a variant of the GPT-3 model, it uses a neural network-based language generation technology to produce text that closely mimics natural human language. With its ability to understand and respond to human language, ChatGPT has the potential to revolutionize the way we interact with technology and make it more intuitive and natural.

To understand the concept in simple terms, imagine speaking with someone who knows everything there is to know and who can understand and respond to you as naturally as another person. That's what ChatGPT does, except it's a computer program, not a human being. It understands and generates text that reads like natural human language, and it responds in a way that mimics human conversation. You can think of it as a computerized conversation partner, such as a chatbot or virtual assistant.

What is DevOps?

DevOps is a holistic approach that combines practices, methodologies, and tools to improve collaboration, communication, and integration between development and operations teams.

The development team writes the code, while the operations team manages and maintains the company's systems. DevOps aims to enable faster and more efficient delivery of software and applications by automating and streamlining processes and fostering a culture of collaboration and communication.

These practices include continuous integration, continuous deployment, and infrastructure as code, among others. By implementing DevOps, organizations can improve the speed and quality of software delivery, reduce downtime, and improve the overall performance of their systems.

Now, let’s look at some typical DevOps use cases and find out how ChatGPT can help.

Problem Statement #1: Automation

As discussed above, automation is at the core of DevOps. As an illustration, let's automate a task using ChatGPT. In this example, we want an automation process that restarts a service hosted on multiple virtual machines (VMs) using a Bash shell script.

My Prompt:

“We have multiple VMs hosted on GCP compute engine where some service named application.service runs on all the machines. The VM IPs are stored in a file. I need a shell script that connects to each VM using a username and its private key. The script should then restart application.service on all VMs.”

ChatGPT’s Response:

Here is a basic shell script that should accomplish what you're describing:

#!/bin/bash
# Read in the IP addresses of the VM instances
while read -r ip; do
  # Connect to the VM over SSH; -n stops ssh from consuming the
  # loop's stdin, which would otherwise end the loop after one host
  ssh -n -i /path/to/private_key -o StrictHostKeyChecking=no \
    username@"$ip" "sudo systemctl restart application.service"
done < path/to/ip_file

The script reads the IP addresses of the VM instances from a file (specified by path/to/ip_file), and for each IP, it connects to the VM via SSH using the private key (/path/to/private_key) and the username you specified. It then executes sudo systemctl restart application.service on the remote machine.

Also, make sure to set the correct permissions on the private key to protect it from unauthorized access. It should be readable and writable only by the user running the script, and not accessible to other users.

You can adjust the SSH connection flags to suit the requirements of your infrastructure.
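As a quick sketch of the surrounding setup (the paths and addresses below are hypothetical), the ip_file holds one address per line, exactly as the script's while-read loop expects, and the private key should be locked down to its owner before the script runs:

```shell
# A hypothetical ip_file: one VM IP address per line, as the
# script's while-read loop expects.
cat > /tmp/ip_file <<'EOF'
10.0.0.11
10.0.0.12
EOF

# Restrict the key so only its owner can read or write it; ssh
# refuses to use private keys with looser permissions.
touch /tmp/demo_key
chmod 600 /tmp/demo_key
stat -c '%a' /tmp/demo_key   # prints 600 on GNU/Linux
```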

Problem Statement #2: Framing Single-Line Commands/Queries

DevOps engineers use the command-line interface (CLI) in different situations and contexts for the many tasks at hand, but finding the correct command for your use case can be daunting. Let's see if ChatGPT can make this quicker.

My Prompt:

“I need a command to list all the instances in GCP and only pull the instances starting with ‘prod’ and ending with the keyword ‘service.’”

ChatGPT’s Response:

You can use the gcloud compute instances list command along with the --filter flag to filter the instances by name and list only the instances that have names that start with "prod" and end with "service".

gcloud compute instances list --filter="name ~ '^prod.*service$'"
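The key to this filter is the regular expression ^prod.*service$: the ^ and $ anchors require the name to begin with "prod" and end with "service". You can sanity-check the pattern locally with grep against some hypothetical instance names before running it against your project:

```shell
# Hypothetical instance names; only those starting with "prod" and
# ending with "service" should match the anchored pattern.
printf '%s\n' prod-api-service prod-db staging-web-service prod-web-service \
  | grep -E '^prod.*service$'
# → prod-api-service
# → prod-web-service
```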

My Prompt:

“OpenShift command to patch the HPA of a particular namespace.”

ChatGPT’s Response:

You can use the oc patch command to update the Horizontal Pod Autoscaler (HPA) in OpenShift. The command should be run in the context of the namespace where the HPA is located.

Here is an example of how you can use the oc patch command to update the HPA:

oc patch hpa [HPA_NAME] -p='{"spec":{"minReplicas": [NEW_MIN_REPLICA], "maxReplicas": [NEW_MAX_REPLICA]}}' -n [NAMESPACE]  
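As a quick local sanity check (the values below are hypothetical stand-ins for the placeholders), you can assemble and validate the patch payload before running it against a cluster:

```shell
# Hypothetical values standing in for [HPA_NAME], [NAMESPACE],
# [NEW_MIN_REPLICA], and [NEW_MAX_REPLICA] above.
HPA_NAME="web-hpa"
NAMESPACE="prod-web"
NEW_MIN=2
NEW_MAX=10

# Build the JSON patch and validate it locally; a typo here would
# otherwise only surface as an API error from the cluster.
PATCH="{\"spec\":{\"minReplicas\": ${NEW_MIN}, \"maxReplicas\": ${NEW_MAX}}}"
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch JSON is valid"

# Print the full command rather than running it, since it needs a live cluster:
echo oc patch hpa "$HPA_NAME" -p="$PATCH" -n "$NAMESPACE"
```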

This way, you can frame the commands or queries for multiple scenarios to make your work faster.

Problem Statement #3: Developing CI/CD Pipeline Code

Building a CI/CD pipeline is a typical DevOps engineer implementation task. Depending upon the complexity involved, it might take hours to a couple of days to get this built. Let's ask ChatGPT to construct a simple CI/CD pipeline with the associated code.

Image 1 shows a pictorial representation of the simple build pipeline we wish to create.

My Prompt:

“I need Jenkins declarative pipeline code to clean up the app workspace, check out the code from GitHub, and build it using Maven.”

ChatGPT’s Response:

Here's an example of a Jenkins Declarative Pipeline that performs a workspace cleanup, checks out code from a GitHub repository, and builds the code using Maven.

pipeline {
    agent any
    stages {
        stage('Cleanup') {
            steps {
                dir("app") {
                    deleteDir()
                }
            }
        }
        stage('Checkout') {
            steps {
                // The git step has no 'dir' parameter; wrap it in a
                // dir() block to check out into the app directory.
                dir('app') {
                    git branch: 'master',
                        url: 'https://github.com/[OWNER]/[REPO].git'
                }
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean install -f app/pom.xml'
            }
        }
    }
}

This pipeline defines a single pipeline block with three stages: 'Cleanup', 'Checkout', and 'Build'. Make sure that Maven and the JDK are properly installed and configured in your Jenkins environment before running this pipeline. You can also pass additional command-line arguments to Maven to configure the build if needed.

You can make this even more conversational by asking for additional customizations on the given code. Here is how the dialog might flow.

My Prompt:

“In the above code, use the Jenkins agent with the label SLAVE01, and for tools, use Maven. The build should be discarded in 15 days and the number of builds maintained should be only 10. Pass the environment variables such as ‘app_name’ as ‘test-app’ and ‘env’ as ‘test.’ Clean up the workspace using ‘cleanWs()’ and print a message saying that the workspace is cleaned. Update the GitHub URL to ‘https://github.com/SrikanthNani/spring-framework-petclinic.git.’ In Maven, only install and skip the test. In a separate stage, print all the global variables.”

ChatGPT’s Response:

Here's an example of a Jenkins Declarative Pipeline that performs a workspace cleanup, checks out code from a GitHub repository, builds the code using Maven, discards old builds while keeping at most 10, sets the environment variables, and prints the global variables:

pipeline {
    agent {
        label 'SLAVE01'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '10',
            artifactNumToKeepStr: '10', daysToKeepStr: '15'))
        timestamps()
    }
    tools {
        maven 'M3'
    }
    environment {
        app_name = "test-app"
        env = "test"
    }
    stages {
        stage('Cleanup') {
            steps {
                echo 'Cleaning workspace...'
                cleanWs()
            }
        }
        stage('Checkout') {
            steps {
                // Check out into the app directory; the git step itself
                // has no 'dir' parameter, so use a dir() block.
                dir('app') {
                    git branch: 'master',
                        url: 'https://github.com/SrikanthNani/spring-framework-petclinic.git'
                }
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean install -DskipTests -f app/pom.xml'
            }
        }
        stage('Print Global Variables') {
            steps {
                script {
                    echo "Environment variables: ${env}"
                }
            }
        }
    }
}

In this pipeline, the agent block specifies that the pipeline should run on a Jenkins agent with the label 'SLAVE01', and the tools block specifies maven 'M3'. The options block discards old builds and keeps only 10 of them using the buildDiscarder() function. The environment block sets the app_name and env environment variables. The cleanWs() method cleans the workspace, and a message is printed to indicate this. In the Checkout stage, the git step checks out the code from the GitHub repository specified in the url into the app directory. In the Build stage, the command mvn clean install -DskipTests -f app/pom.xml builds the code using Maven while skipping the tests. Finally, the Print Global Variables stage prints the global variables using a script block and echo "Environment variables: ${env}".

Keep in mind that this pipeline is just an example and you may need to adjust it to suit the specific needs of your build.

You could go on to create a pipeline job in Jenkins and use the above declarative pipeline script. Additionally, make sure all the required configuration is done in Jenkins, such as setting up Maven under “Global Tool Configuration.”

Another possibility is having ChatGPT produce the Terraform code needed to provision a virtual machine on any cloud platform, or Ansible code to automate configuration management. As I said earlier, the possibilities are endless.

Problem Statement #4: Asking for a Tool Recommendation to Solve a Specific Problem

Assume we need to create a backup of a Kubernetes cluster. While various tools on the market may meet this requirement, it can take time to determine the optimal option. In this scenario, we can ask ChatGPT to recommend the best tool in this area and to evaluate recommended best practices for the task.

My Prompt:

“What is the best tool to back up a Kubernetes cluster?”

ChatGPT’s Response:

Choosing the best tool for taking backups of a Kubernetes cluster ultimately depends on the specific needs and requirements of your organization. That being said, Velero is considered to be one of the best tools available for backing up Kubernetes clusters.

Velero is an open-source backup and disaster recovery tool that provides an easy way to backup and restore your Kubernetes cluster, its associated volumes and applications. Velero's features include support for various cloud providers, various volume plugins and the ability to do application-consistent backups. It can also handle Kubernetes namespaces, resources, and individual resources.

Velero allows you to back up the entire Kubernetes cluster or specific namespaces, or even individual objects like a single pod or a specific PVC. Velero also integrates with various cloud providers, and supports various volume plugins, making it a versatile and flexible option for backing up your cluster.

Additionally, Velero supports scheduling and retention policies, and has the ability to run backups and restores on-demand, or on a schedule.

Restic, KubeVault, ClusterControl, and ArgoCD are some of the other popular tools that can be considered for backing up Kubernetes clusters. Ultimately, the best tool for you will depend on your specific requirements and the resources available. It is worth researching and testing a few options to find the best fit for your use case.

What’s Coming Down the Road?

It’s worth noting that ChatGPT is still in its early phase. The current version is built on GPT-3, which was trained on 175 billion parameters using data up to the year 2021, and it is not connected to the internet. GPT-4 is expected to launch soon; some reports claim it is being trained on as many as 100 trillion parameters and could generate a 60,000-word book from a single prompt, though OpenAI has not confirmed these figures. If they hold, it would be a game-changer that could reshape many of the technologies and frameworks we currently use.

Image 2 contrasts the amount of data reportedly used to train GPT-3 vs. GPT-4.

How Do You Feel About It: Interested or Scared?

As one Twitter user stated, “In the short term, AI won't replace humans. But humans that use AI will replace humans that don't use AI”. However, this is not the end of the story. Let’s ask ChatGPT why AI will not replace humans:

My Prompt:

“Please explain in two sentences why AI will not replace humans.”

ChatGPT’s Response:

AI will not replace humans because it does not possess human qualities such as creativity, empathy, and emotional intelligence. Additionally, AI is still not capable of performing many tasks that require common sense and human experience.