AI Debugger Demo

Yes, you read that right: no more headaches when deploying apps! We’ll handle the debugging, allowing you to fix deployment issues on the fly and redeploy right away.

Simplifying App Debugging

Debugging deployment errors in modern applications presents significant challenges due to the complexity of environments, dependency management, and system-specific configurations. To address these issues, we developed an AI-powered debugging agent capable of automating error identification and resolution processes. This post details the technical journey of creating such an agent, including the challenges encountered and insights gained during development.

First of all, what is an AI Agent?

An AI agent is a piece of software that uses artificial intelligence techniques to perform tasks autonomously or assist humans in complex operations. These agents are designed to perceive their environment, make decisions, and take actions to achieve specific goals. They can range from simple rule-based systems to sophisticated implementations using machine learning and natural language processing. In our debugging context, the AI agent can analyze error logs, understand code structures, and propose solutions to accelerate the troubleshooting process.

The Challenge

Our goal was to create an AI agent capable of:

  • Understanding deployment errors
  • Analyzing logs and configuration files
  • Proposing and implementing fixes
  • Redeploying the application

This undertaking demanded more than just parsing error messages. It required a deep understanding of diverse deployment environments, application architectures, and their potential errors. Balancing these capabilities with operational efficiency presented a significant challenge: we needed an agent that was both powerful and cost-effective, ensuring maximum value for our users.


Our Approach

We built our solution using LangChain, a framework for developing applications powered by language models. LangChain provided a flexible architecture for creating an AI agent with a customized toolkit.

Our agent’s capabilities are rooted in a suite of custom tools we developed, enabling it to interact with the deployment environment. These tools handle tasks such as log retrieval, build script analysis, environment inspection, and dependency management. To guide the AI in understanding the debugging context, our infrastructure, and deployment steps, we implemented careful prompt engineering. This results in a systematic workflow that progresses from initial context analysis to final solution delivery.

Challenges and Solutions

In developing our AI-powered debugging agent, we encountered and addressed several key challenges:

Ease of Use

  • Challenge: Providing a one-click solution that takes users from error detection to a deployable fix with minimal effort.
  • Solution: We implemented a “Debug with AI” button that triggers a comprehensive process. This single action initiates the AI agent’s analysis, guides it through complex debugging steps, and presents users with a ready-to-deploy solution, all without requiring technical intervention from the user.

Intelligent Tool Selection

  • Challenge: Ensuring the AI chooses the most appropriate tool for each debugging step.
  • Solution: We implemented a context-aware prompting system that considers previously used tools and suggests likely next steps. This approach significantly improved the agent’s capabilities, particularly for smaller language models.
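
As a simplified illustration of this idea (not our production prompt), the history of tool calls can be folded into the system prompt before each step; build_step_prompt below is a hypothetical helper:

def build_step_prompt(tools_used: list[str]) -> str:
    # Hypothetical sketch: remind the model which tools it has already
    # called and nudge it toward a sensible next step
    prompt = "You are debugging a failed deployment.\n"
    if not tools_used:
        prompt += "Start by calling get_logs to inspect the build output.\n"
    else:
        prompt += f"Tools already used: {', '.join(tools_used)}.\n"
        prompt += "Do not repeat a tool unless its output may have changed.\n"
    return prompt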

Security and Privacy

  • Challenge: Ensuring the AI agent operates effectively within existing security constraints.
  • Solution: Fortunately, our current infrastructure already implements robust security measures, exposing only necessary elements while concealing private information. We leveraged these existing safeguards, allowing the AI agent to access required resources like log files, build configurations, and dependency lists without compromising sensitive data.

By addressing these challenges, we’ve created a powerful yet user-friendly AI debugging assistant that balances effectiveness, ease of use, and security.


A Peek Under the Hood

To empower our AI agent with the capability to tackle complex debugging scenarios, we developed a suite of specialized tools. These tools serve as the agent’s interface with the deployment environment, allowing it to perform targeted actions and analyses. Let’s walk through a practical example that illustrates how we created one of these tools and how you can apply similar principles to build your own AI agent.

Imagine a scenario where a user is working with a Python package that requires additional tooling, such as GCC (the GNU Compiler Collection), to compile underlying C libraries. In a basic environment with only Python installed, attempting to use this package would fail due to the missing GCC dependency.

The Solution: Customizing the Build Environment

To resolve this issue, we need to enable our AI agent to modify the Dockerfile used to create the build environment. This allows the agent to add necessary instructions for installing additional dependencies like GCC on demand.

Step 1: Define the Core Function

First, we create a function that allows the agent to rewrite the Dockerfile:

def _write_dockerfile(content: str) -> str:
    # Persist the new content; report failures back to the agent as text
    try:
        write_content_to_file(content=content, file="dockerfile")
    except OSError as exc:
        return f"Failed to write Dockerfile: {exc}"
    return "Content written to Dockerfile. Please inform the user that changes have been made to the Dockerfile"

This function writes the provided content to the Dockerfile and returns a confirmation message or an error if the operation fails.

Step 2: Create an Input Schema

Next, we define a Pydantic model to specify the structure of the input our function expects. This schema helps the AI understand what information it needs to provide when using the tool:

# Input schema
class DockerFileContent(BaseModel):
    content: str = Field(
        ...,
        description="The new content of the Dockerfile",
    )
    class Config:
        schema_extra = {
            "example": {
                "content": 'FROM python:3.9\nWORKDIR /app\nCOPY . /app\nRUN pip install -r requirements.txt\nCMD ["python", "app.py"]'
            }
        }

This Pydantic model ensures that the input to our function is properly structured and includes an example for clarity.
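
Because the agent talks to the model through OpenAI’s function-calling interface, this Pydantic model is ultimately serialized to JSON Schema. To inspect exactly what the model sees, you can print it (using the Pydantic v1 API that matches the Config.schema_extra style above):

import json

# Print the JSON Schema generated from the Pydantic model,
# including the example supplied via schema_extra
print(json.dumps(DockerFileContent.schema(), indent=2))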

Step 3: Create the StructuredTool

Now, we use StructuredTool.from_function to wrap our function and its input schema into a tool that the AI agent can use:

# Define the tool
write_dockerfile_tool = StructuredTool.from_function(
    name="write_dockerfile",
    func=_write_dockerfile,
    args_schema=DockerFileContent,
    description="Create or update the Dockerfile with new content",
)
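
Before handing the tool to an agent, you can exercise it directly. In the GCC scenario above, a direct invocation might look like the following (the Dockerfile content is purely illustrative):

# Hypothetical direct invocation of the tool, outside the agent
write_dockerfile_tool.run({
    "content": (
        "FROM python:3.9\n"
        "RUN apt-get update && apt-get install -y gcc\n"
        "WORKDIR /app\n"
        "COPY . /app\n"
        "RUN pip install -r requirements.txt\n"
        'CMD ["python", "app.py"]'
    )
})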

This modular architecture allows us to easily expand the AI agent’s capabilities by adding new tools to its arsenal. We can introduce more sophisticated Dockerfile manipulations, implement advanced log analysis, or integrate with various parts of our product. The flexibility of this approach enables us to tailor the AI agent’s toolkit to specific deployment environments and debugging scenarios, ensuring that it has the right set of tools for any given task.
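
For example, the get_logs tool that appears in the next step can follow the same pattern. Here is a minimal sketch, assuming build logs land in a local build.log file; in our infrastructure they are fetched from the build service itself:

def _get_logs() -> str:
    # Minimal sketch: return the tail of the latest build log so the
    # agent's context window is not flooded with output
    try:
        with open("build.log") as f:
            return f.read()[-4000:]
    except FileNotFoundError:
        return "No build logs found."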

Step 4: Bringing It All Together to Build Our AI Agent

With our tools in place, it’s time to create the brain of our operation – the AI agent:

# _write_dockerfile, _get_logs, and DockerFileContent are defined above;
# openai_api_key and openai_model come from your own configuration
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder
from langchain.schema import SystemMessage
from langchain.tools import StructuredTool
from langchain_openai import ChatOpenAI
from pydantic import SecretStr

# Base prompt, with a placeholder so past messages are injected on each turn
prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=SystemMessage("You are a helpful debugger, assist the user in debugging their deployment"),
    extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
)

# Simple in-memory chat history
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# The list of tools
tools = [
    StructuredTool.from_function(
        name="write_dockerfile",
        func=_write_dockerfile,
        args_schema=DockerFileContent,
        description="Create or update the Dockerfile with new content",
    ),
    StructuredTool.from_function(
        name="get_logs",
        func=_get_logs,
        description="Get the logs of the build",
    ),
]

# Choose your model
llm = ChatOpenAI(api_key=SecretStr(openai_api_key), model=openai_model)

# Create the agent and wire in the chat history
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, memory=memory)

while True:
    # Take user input and let the agent respond, using tools as needed
    user_input = input("How can I help you debug your deployment? ")
    response = executor.run(user_input)
    print(response)

With this setup, we’ve successfully created our first AI debugging assistant. It can understand user queries, select appropriate tools, and provide initial solutions for deployment issues. However, this is just the beginning. Our next steps involve rigorous testing in real-world scenarios, iterating on and improving our tools based on actual usage, refining the agent’s decision-making processes, and expanding its capabilities to handle more complex debugging tasks.

Early Observations and Potential Impact

As we begin to deploy our AI debugging agent in real-world scenarios, we’re observing promising results that hint at its great potential. While we’re still in the early stages of implementation, several key benefits are already emerging:

  • Accelerated Problem Resolution: Our AI agent is demonstrating the ability to quickly identify and propose fixes for common deployment issues, leading to higher user satisfaction.

  • Enhanced User Empowerment: By returning both the solution (like a diff of the modified requirements.txt, sketched after this list) and a breakdown of the steps taken to fix the error, we achieve multiple benefits:

    • Users gain deeper understanding of deployment processes, empowering them to handle similar issues independently in the future.
    • Clear visibility into changes builds trust and allows for easy auditing.
    • Advanced users can review and customize solutions to better fit their specific needs.

  • Scalability of Expertise: By encapsulating debugging knowledge in an AI system, we’re effectively scaling our expertise. This allows us to provide high-quality debugging assistance to a larger number of users simultaneously.
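
As a rough illustration of the kind of diff we return, here is a minimal sketch using Python’s difflib (the requirements.txt contents are invented for the example):

import difflib

# Illustrative before/after contents of requirements.txt
old = ["flask==2.0.1\n", "pandas\n"]
new = ["flask==2.0.1\n", "pandas\n", "numpy==1.26.0\n"]

# Produce a unified diff the user can review and audit
diff = difflib.unified_diff(old, new, fromfile="requirements.txt", tofile="requirements.txt")
print("".join(diff))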

These early observations are encouraging, suggesting that our AI debugging assistant has the potential to significantly streamline deployment processes and improve overall user satisfaction. As we continue to gather data and refine our system, we anticipate uncovering even more benefits and use cases.

Looking Ahead: Our Roadmap

We’re committed to evolving our AI debugging agent to better serve your needs:

  • Broader Error Coverage: Expanding our toolkit to address a wider range of deployment scenarios and common errors.
  • Clearer Explanations: Enhancing our AI’s ability to provide more detailed, actionable insights into your deployment issues.
  • Your Input Matters: We’ll be actively seeking your feedback to ensure our tool’s development aligns with real-world developer needs.

Our goal is to make your debugging process smoother, faster, and more efficient.