E2B Code Interpreting for AI Apps: A Comprehensive Guide

In the rapidly evolving landscape of artificial intelligence, the ability to interpret and execute code seamlessly within AI applications is a game-changer. E2B's Code Interpreter SDK offers developers a powerful tool to integrate code interpreting capabilities into their AI apps, enabling functionalities such as AI code execution, data analysis, AI tutoring, and enhanced reasoning modules for large language models (LLMs).

Quickstart with E2B Code Interpreter SDK

1. Install the SDK

To get started, you need to install the Code Interpreter SDK. This SDK provides the tools necessary to execute AI-generated code securely within a sandbox environment.

For JavaScript/TypeScript:

npm i @e2b/code-interpreter

For Python:

pip install e2b_code_interpreter

2. Execute Code within the Sandbox

Once the SDK is installed, you can begin executing code within the E2B sandbox. Here’s a simple example in JavaScript:

import { CodeInterpreter } from '@e2b/code-interpreter';

const sandbox = await CodeInterpreter.create();
await sandbox.notebook.execCell('x = 1');
const execution = await sandbox.notebook.execCell('x+=1; x');
console.log(execution.text); // outputs 2
await sandbox.close();

And in Python:

from e2b_code_interpreter import CodeInterpreter

with CodeInterpreter() as sandbox:
    sandbox.notebook.exec_cell("x = 1")
    execution = sandbox.notebook.exec_cell("x+=1; x")
    print(execution.text) # outputs 2

3. Hello World Guide

To dive deeper, E2B offers JavaScript and Python Hello World guides. These guides help you connect the code interpreter to LLMs, enabling you to prompt the LLM to generate Python code and execute it within the E2B sandbox.

Step-by-Step Guide to Using E2B Code Interpreter

Install Code Interpreter SDK

The Code Interpreter SDK allows you to run AI-generated code in a secure, small VM known as the E2B sandbox. This sandbox hosts a Jupyter server that you control via the SDK’s notebook.execCell() method. To begin, you’ll need an E2B API key, which you can obtain from the E2B dashboard.

Preparing the LLM

E2B supports various LLMs; for this guide, we’ll use Anthropic's Claude 3 Opus, but E2B is compatible with any LLM that supports tool use. If your chosen model doesn’t support tool use, you can ask the LLM to respond with Markdown or XML and parse the output manually.
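For that manual-parsing path, a minimal sketch of pulling a fenced code block out of a Markdown reply might look like this (the regex assumes the model wraps its code in triple backticks, optionally tagged `python`):

```python
import re

def extract_code_block(markdown: str):
    """Return the contents of the first fenced code block, or None if absent."""
    # Matches ```python ... ``` as well as untagged ``` ... ``` fences.
    match = re.search(r"```(?:python)?\n(.*?)```", markdown, re.DOTALL)
    return match.group(1).strip() if match else None
```

You would call this on the model's raw text before handing the result to the sandbox for execution.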

Here’s an example of setting up the model in TypeScript:

import Anthropic from '@anthropic-ai/sdk';
import { CodeInterpreter } from '@e2b/code-interpreter';

const client = new Anthropic({ apiKey: 'your-anthropic-api-key' });
const systemPrompt = 'You are a helpful assistant capable of executing Python code.';
const tools = {
    // Runs a snippet in a fresh E2B sandbox and returns the text output.
    execute_python: async (code: string) => {
        const sandbox = await CodeInterpreter.create();
        const result = await sandbox.notebook.execCell(code);
        await sandbox.close();
        return result.text;
    },
};

Preparing Code Interpreting

Next, create a function to interpret the code generated by the LLM. This function will use the CodeInterpreter object from the SDK to execute code within the sandbox.

In TypeScript:

import { CodeInterpreter } from '@e2b/code-interpreter';

export async function codeInterpret(code: string) {
    const sandbox = await CodeInterpreter.create();
    const result = await sandbox.notebook.execCell(code);
    await sandbox.close();
    return result.text;
}

In Python:

from e2b_code_interpreter import CodeInterpreter

def code_interpret(code: str):
    with CodeInterpreter() as sandbox:
        result = sandbox.notebook.exec_cell(code)
        return result.text

Calling the LLM and Parsing the Response

Create a function to interact with the LLM and use the tools you’ve set up. This function will prompt the LLM to generate code and then interpret that code using the codeInterpret function.

In TypeScript:

async function chat(prompt: string) {
    const response = await client.messages.create({
        model: 'claude-3-opus-20240229',
        max_tokens: 1000,
        system: systemPrompt,
        messages: [{ role: 'user', content: prompt }],
    });
    // Parse the generated code out of the model's reply before executing it.
    const code = extractCodeFromResponse(response.content[0].text);
    return await codeInterpret(code);
}

In Python:

import anthropic

client = anthropic.Anthropic(api_key="your-anthropic-api-key")

def chat(prompt: str):
    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1000,
        system=system_prompt,
        messages=[{"role": "user", "content": prompt}],
    )
    # Parse the generated code out of the model's reply before executing it.
    code = extract_code_from_response(response.content[0].text)
    return code_interpret(code)

Running the Program

Now, you can run your program to see the results. Here’s an example of how to use the chat function to visualize a distribution of men's heights and print the median.

In TypeScript:

(async () => {
    const result = await chat("Visualize a distribution of men's heights and print the median.");
    console.log(result);
})();

In Python:

if __name__ == "__main__":
    result = chat("Visualize a distribution of men's heights and print the median.")
    print(result)

Saving the Generated Chart

If the executed code produces visual output, such as a matplotlib chart, the sandbox returns it as a base64-encoded PNG in the execution results. Update your function to decode and save these outputs to a file.

In TypeScript:

import fs from 'fs';

// `data` is the base64-encoded PNG string from the execution results.
function saveChart(data: string, filename: string) {
    fs.writeFileSync(filename, Buffer.from(data, 'base64'));
}

// In your main function
const chartData = await chat("Generate a chart visualizing the distribution of men's heights.");
saveChart(chartData, 'chart.png');

In Python:

import base64

def save_chart(data: str, filename: str):
    # `data` is the base64-encoded PNG string from the execution results.
    with open(filename, 'wb') as file:
        file.write(base64.b64decode(data))

if __name__ == "__main__":
    chart_data = chat("Generate a chart visualizing the distribution of men's heights.")
    save_chart(chart_data, 'chart.png')

Supported LLMs and AI Frameworks

One of the standout features of E2B is its agnostic approach to LLMs and AI frameworks. This means you can use any LLM model and integrate it with any AI framework. The primary requirement is that the model supports tool use, allowing the E2B code interpreter to function as a tool within your AI ecosystem.
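To make "tool use" concrete, here is a sketch of a tool definition in the shape expected by Anthropic's Messages API; the `name` and `description` strings are illustrative, and any JSON-Schema input would work:

```python
# A minimal tool definition for an execute_python tool. Passing a list of
# such definitions to the model lets it request code execution as a tool call.
execute_python_tool = {
    "name": "execute_python",
    "description": "Run Python code in a secure E2B sandbox and return the output.",
    "input_schema": {
        "type": "object",
        "properties": {
            "code": {
                "type": "string",
                "description": "The Python code to execute.",
            }
        },
        "required": ["code"],
    },
}
```

When the model responds with a tool-use block naming `execute_python`, your app runs the supplied `code` in the sandbox and feeds the result back as a tool result message.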

Examples and Use Cases

E2B’s flexibility enables a wide range of applications. Here are a few examples:

  • AI Code Execution: Automate the execution of AI-generated code, streamlining workflows and enhancing productivity.
  • Data Analysis with AI: Use AI to generate and execute data analysis scripts, providing insights and visualizations.
  • AI Tutors: Develop AI tutors that can not only generate educational content but also execute code to demonstrate concepts.
  • Reasoning Modules for LLMs: Enhance LLMs with reasoning capabilities by integrating code execution, enabling more complex and accurate responses.

For detailed examples, check out the E2B Cookbook, which contains a variety of use cases and implementations.

Conclusion

E2B's Code Interpreter SDK is a powerful tool for developers looking to enhance their AI applications with code interpreting capabilities. By leveraging the secure E2B sandbox, developers can safely execute AI-generated code, integrate with various LLMs and AI frameworks, and build innovative applications that perform complex tasks with ease.

Whether you're developing AI-driven data analysis tools, creating interactive AI tutors, or enhancing the reasoning abilities of your LLMs, E2B provides the robust infrastructure and flexibility you need to bring your ideas to life. With comprehensive documentation, a supportive community, and a commitment to open-source development, E2B is poised to be a vital component in the toolkit of any AI developer.

For more information and to get started, visit the E2B Code Interpreter SDK GitHub repository and explore the E2B documentation. Happy coding!
