Member post by Siva Gurunathan, CTO at Ozone Cloud Inc

Challenges in New Technology Migrations

The market today offers many technological advancements, but the main hindrance to adopting them is the cumbersome process of switching from an existing technology or framework to a new one. This is where teams and organizations fall behind the tech curve. In the DevOps landscape, CI/CD Pipelines are the backbone of everything, and how you work with them largely dictates your DevOps efficiency.

CI/CD Pipelines, like other major technologies and frameworks before them, have gone through several evolutions; the latest is making them reusable in an attempt to standardize CI/CD. However, this approach has several drawbacks:

1. The Pipelines are not owned by the end-user enterprises; they belong to DevOps platforms that are not open source

2. Migrating to these platforms and Pipelines is a huge task

3. They may not be as efficient as depicted

Speaking of new advancements in technology, most readers might be familiar with Tekton for CI/CD Pipelines: an open-source framework that originated at Google and is now a graduated CDF project. It redefines how Pipelines are built, through reusable Tasks that are open source and community-vetted. Enterprises can author Pipelines from these Tasks and reuse them across any Tekton-compatible platform, thus preventing vendor lock-in.

While this mitigates the first challenge of Pipeline ownership and migrations, a familiar problem still plagues DevOps teams: transitioning from existing legacy technologies and frameworks to newer, far more capable ones like Tekton.

What if there were a way to seamlessly automate that transition from legacy Pipelines to cutting-edge, reusable Tekton Pipelines, without the toil? That is precisely what this article explores: automating the conversion of existing Pipelines' YAML scripts into Tekton YAML scripts.

Let us consider the three most popular CI/CD platforms for this use case: Jenkins, GitLab, and Azure DevOps. Before we proceed to the proposed approach (detailed in the second half of this article), let us first look at the limitations of Jenkins, GitLab, and Azure Pipelines and why one should consider converting:

Common Limitations of Jenkins, Azure, and GitLab Pipelines:

Resource Intensive: Legacy pipelines can be resource-intensive, especially for larger organizations or projects. Managing server resources, optimizing configurations, and ensuring sufficient hardware can be a challenge.

Complex Setup: Setting up and configuring Jenkins, GitLab, and Azure DevOps can be complex, particularly for self-hosted instances. This complexity may require experienced administrators and substantial time investment.

Plugin and Integration Compatibility: All three platforms may face compatibility issues with certain plugins or integrations. 

Scaling Challenges: Scaling in organizations, especially those dealing with many microservices, necessitates a shift from scaling pipelines individually to a template-based approach. Failing to do so leads to a chaotic sprawl of pipelines, undermining standards and organizational processes.

Costs: While Jenkins is free and open source and GitLab offers a free tier, some advanced features or capabilities are only available in higher-tier paid plans. This can become a significant cost consideration at scale.

Customization Complexity: Customizing workflows and integrating custom scripts on these platforms can be complex, as it often requires advanced scripting knowledge.

User Interface Complexity: Though these platforms have feature-rich interfaces, new users can find it overwhelming to navigate them and locate specific settings or options.

Limited Native Integration & Plugins: The native integrations of Jenkins, GitLab, and Azure may not cover the full spectrum of tools and services a team may need. Additional integration efforts or third-party tools may be required to bridge gaps in the development workflow.

As the list above shows, many of these limitations are shared across the platforms: scalability challenges, resource intensity, limited integrations, and so on. They all boil down to one common denominator: complex Pipeline management at scale.

This is where a framework like Tekton can help tackle all these challenges at once, simply by easing Pipeline management: its Kubernetes-native Pipelines are reusable and open source, built from community-vetted Tasks.

Tekton Pipelines: Advancing CI/CD with Kubernetes-Native Automation

Tekton, an open-source project, is rapidly gaining traction as a game-changer in the world of CI/CD Pipelines. Its Kubernetes-native design is a standout feature, seamlessly integrating with Kubernetes clusters to harness its scalability and resource efficiency. 

Furthermore, Tekton’s cloud-native roots ensure it can execute tasks in lightweight, ephemeral containers, optimizing resource utilization. The ability to declare CI/CD Pipelines as code, coupled with GitOps practices, enhances automation and reproducibility. 

How are Tekton Pipelines Different?

Kubernetes-Native Automation: Tekton Pipelines are deeply integrated with Kubernetes, utilizing Custom Resources and Operators to define and execute CI/CD workflows. 

Declarative and Modular: These Pipelines are defined declaratively using YAML manifests, promoting version control, collaboration, and code reviews. The modular design of tasks and pipelines enables teams to create reusable components for building, testing, and deploying applications.

Containerization and Isolation: Each task in a Tekton pipeline runs within a dedicated container, ensuring isolation and reproducibility of the build and deployment environment. 

Event-Driven Architecture: Tekton pipelines can be triggered by various events, such as code commits, pull requests, or external events, providing a responsive and event-driven CI/CD process.

Scalability and Efficiency: Tekton pipelines leverage Kubernetes’ inherent scalability, enabling efficient execution of tasks across a dynamic pool of worker nodes. 

Visibility and Observability: They offer built-in monitoring and logging, allowing teams to gain insights into pipeline execution, track progress, and diagnose issues. 

Integration and Extensibility: Tekton pipelines can be easily extended with custom tasks and integrations, enabling teams to connect with various tools, services, and cloud platforms. 

Cloud-Native Best Practices: By embracing Tekton pipelines, organizations align with cloud-native best practices, promoting the use of Kubernetes and containerization for streamlined application delivery.
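To make the declarative, modular design above concrete, here is a minimal sketch of a reusable Tekton Task and a Pipeline that references it (the names, image, and parameters are illustrative assumptions, not from any real project):

```yaml
# A reusable Task: one containerized step, parameterized so it can be shared.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: say-hello
spec:
  params:
    - name: target
      type: string
      default: "world"
  steps:
    - name: greet
      image: alpine:3.19
      script: |
        echo "Hello, $(params.target)!"
---
# A Pipeline that reuses the Task; further tasks can be chained with runAfter.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  params:
    - name: target
      type: string
  tasks:
    - name: greet-task
      taskRef:
        name: say-hello
      params:
        - name: target
          value: $(params.target)
```

Applying these manifests to a cluster and creating a PipelineRun executes each step in its own ephemeral container, which is what gives Tekton its isolation and resource-efficiency properties.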

Why Transition from Jenkins, GitLab, and Azure Pipelines to Tekton Pipelines?

Migrating from Jenkins, GitLab, or Azure Pipelines to Tekton Pipelines can be a strategic choice, especially for organizations deeply entrenched in Kubernetes and cloud-native methodologies. 

Optimize Kubernetes-Centric Workflows: Projects tightly integrated with Kubernetes may find Tekton Pipelines more aligned with their ecosystem. 

Accelerate Cloud-Native DevOps: Organizations aiming to embrace a cloud-native approach and leverage Kubernetes infrastructure can accelerate their adoption with Tekton.

Performance and Scalability: Tekton pipelines run on clusters that can dynamically be created and destroyed as per demands, thus taking resource optimizations to the next level. 

Cloud-Native Best Practices: Best practices such as containerization, declarative configuration, and infrastructure-as-code are a natural fit for Tekton, thanks to its Kubernetes-native design.

Proposed Workflow for Automating Pipeline Conversions at Scale: 

Diagram showing Automated Pipeline Conversion

The overarching objective of this workflow is to facilitate a seamless conversion process using Large Language Models (LLMs) without delving into the intricacies of fine-tuning, relying instead on the benefits of prompt engineering. Central to this approach is the use of chat context and the strategic assignment of roles to establish context for the model. In addition, sample pipelines are supplied in context to show the model what the corresponding YAML on each platform looks like for any given pipeline.

Prompts and instructions tailored to this use case (generating contextual pipeline YAML and converting Jenkins or GitLab pipelines to Tekton pipelines) are what make this workflow possible. We maintain a context and a contextual mapping of real-world examples to the most appropriate prompt designs in order to produce the most accurate responses. Assigning roles within this context adds weight and further refines the responses received.
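As an illustration of how the chat context, roles, and in-context sample pipelines might be assembled, consider the following sketch (the message layout and pipeline snippets are assumptions for illustration, not a prescribed schema):

```yaml
# Illustrative chat-context layout for one conversion request.
messages:
  - role: system
    content: >
      You are a CI/CD migration assistant. Convert GitLab CI YAML into
      equivalent Tekton Task and Pipeline manifests. Output valid YAML only.
  # Few-shot pair: a sample source pipeline and its vetted Tekton equivalent.
  - role: user
    content: |
      build-job:
        stage: build
        script:
          - make build
  - role: assistant
    content: |
      apiVersion: tekton.dev/v1
      kind: Task
      metadata:
        name: build-job
      spec:
        steps:
          - name: build
            image: gcc:13
            script: make build
  # The pipeline the user actually wants converted goes last.
  - role: user
    content: "<the pipeline YAML to convert>"
```

The few-shot pair plays the role of the "sample pipelines" described above: it anchors the model's output format without any fine-tuning.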

However, before proceeding to details and examples, let us set the stage for what exactly generative AI means and its role in our use case:

Generative AI and LLM for Automating YAML Script Conversions

Generative AI models, such as LLMs, create content like text by learning patterns from large datasets. Large Language Models (LLMs), with billions of parameters, excel at generating human-like text by understanding language structure. They have broad applications, including natural language understanding, summarization, translation, and content generation, promising automation and human-like interactions with AI.

Adapting the Model to Solve Real-world Use Cases 

Fine-tuning and prompt engineering are two common methods for adapting pre-trained large language models (LLMs) like GPT-3 and BERT to specific tasks or for generating contextually relevant responses. Both play crucial roles: fine-tuning helps the model comprehend complex use cases, while prompt engineering supplies detailed requirements, extracting the best results from the model.

Why Prompt Engineering?

Prompt engineering stands out as a highly advantageous approach in specific use cases where precision, control, and customization over AI model behavior are paramount. This method empowers developers and users to meticulously design input prompts, effectively guiding AI models to produce contextually relevant and desired responses.

Prompt engineering is both an AI engineering technique for refining large language models (LLMs) with specific prompts and recommended outputs and the term for the process of refining input to various generative AI services to generate text or images. 

The process of Prompt Engineering to fit our use case includes:

  1. Defining the goal: The first step in AI prompt engineering is setting a clear objective.
  2. Crafting the initial prompt: With the goal in mind, it's time to draft an initial prompt. This could take the form of a question, a command, or even a scenario, depending on the goal.
  3. Testing the prompt: The initial prompt is then fed to the language model, and the response is collected.
  4. Analyzing the response: The response is evaluated against the goal to identify gaps, errors, or ambiguities.
  5. Refining the prompt: With the insights gathered from testing and analysis, the prompt is revised. This could involve making it more specific, adding more context, or changing the phrasing.
  6. Iterating the process: The testing, analyzing, and refining steps are repeated until the prompt consistently guides the model toward generating the desired response.
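Applied to our conversion use case, a single refinement iteration might look like this sketch (both prompts are illustrative, not from a real prompt library):

```yaml
# Illustrative before/after of one prompt-refinement iteration.
initial_prompt: >
  Convert this Jenkins pipeline to Tekton.
refined_prompt: >
  Convert the following Jenkins declarative pipeline into Tekton manifests.
  Emit one Task per stage and a Pipeline that orders the Tasks with runAfter.
  Use apiVersion tekton.dev/v1 and output only valid YAML, with no commentary.
```

The refined version narrows the output space (one Task per stage, a fixed API version, YAML only), which is typically what moves the model from plausible output to consistently usable output.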

To conclude, here is an example of typical YAML from Jenkins, GitLab, and Azure DevOps, and how the same pipeline looks when converted to Tekton YAML. This goes a long way toward helping teams finally make the jump from their existing pipelines to cutting-edge, reusable pipeline frameworks:

code example

Jenkins YAML

code example

GitLab YAML

code example

Azure YAML

code example

Converted Tekton YAML
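Since the original code figures are not reproduced here, the following is a minimal hand-written sketch of the kind of conversion the workflow produces: a simple GitLab CI job and a plausible Tekton equivalent (job names and images are illustrative assumptions):

```yaml
# GitLab CI: a single test job.
test-job:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest
```

```yaml
# Converted Tekton equivalent: the job becomes a containerized Task step,
# wrapped in a Pipeline so it can be reused and chained with other Tasks.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: test-job
spec:
  steps:
    - name: test
      image: python:3.12
      script: |
        pip install -r requirements.txt
        pytest
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: converted-pipeline
spec:
  tasks:
    - name: test-job
      taskRef:
        name: test-job
```

Note how the mapping is largely mechanical (job to Task, script lines to a step script), which is exactly what makes the conversion a good fit for an LLM guided by few-shot examples.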