# Code Generator SDK — Client Developer Documentation

## Introduction
The Code Generator Sandbox SDK is a built-in Python framework for dynamically generating, loading, and executing Python code using LLM backends. It supports both local and remote LLMs, including:

- OpenAI (`gpt-4`, `gpt-4o`, etc.)
- AIOS gRPC-based inference blocks
- Custom backends via the `CodeGeneratorAPI` interface
It provides:

- A unified backend interface for any LLM provider
- Dynamic code generation from natural language task descriptions
- Automatic dependency extraction (`requirements.txt`)
- Safe runtime instantiation of generated Python code
- Pluggable templates and descriptors for controlling output style
- Lifecycle management for generated code instances
Typical workflow:

1. Initialize a generator backend (OpenAI, AIOS, custom).
2. Register the backend in the `CodeGenerators` manager.
3. Generate code from a task description (with optional descriptors/templates).
4. Instantiate and execute the generated class or function.
## Import and Setup

```python
from core import CodeGeneratorSandbox
from core.default import OpenAILLM, AIOSLLM
from utils.code_gen_api import CodeGenerators
```
## Using the OpenAI Backend

```python
openai_backend = OpenAILLM(
    api_key="YOUR_OPENAI_KEY",
    model="gpt-4o",
    temperature=0.7,
    max_tokens=1000
)

sandbox = CodeGeneratorSandbox(
    global_settings={},
    global_parameters={},
    global_state={},
    generator_api=openai_backend
)

inst = sandbox.generate_and_instantiate(
    task_description="Create a method add(a, b, c) that returns their sum"
)
print(inst.function_instance.add(10, 20, 30))
```
## Using the AIOS gRPC Backend

```python
aios_backend = AIOSLLM(
    server_address="localhost:50051",
    block_id="block-codegen",
    instance_id="instance-123",
    session_id="session-abc",
    seq_no=1,
    frame_ptr=b"",
    query_parameters=b"",
    is_frame=False
)

sandbox = CodeGeneratorSandbox(
    global_settings={},
    global_parameters={},
    global_state={},
    generator_api=aios_backend
)

inst = sandbox.generate_and_instantiate(
    task_description="Define a function greet(name) that returns 'Hello, {name}!'"
)
print(inst.function_instance.greet("Alice"))
```
## Custom Descriptors and Templates

Descriptors help guide the LLM output:

```python
descriptors = {
    "structure": "include docstrings and typing hints",
    "style": "return raw Python code only"
}

sandbox.generate_and_instantiate(
    task_description="Create a class Calculator with a multiply method",
    descriptors=descriptors
)
```
Custom templates can replace the default structure:

```python
template = """
class MyGeneratedClass:
    def run(self, x, y):
        return x + y
"""

sandbox = CodeGeneratorSandbox(
    global_settings={},
    global_parameters={},
    global_state={},
    template=template,
    generator_api=openai_backend
)
```
## Pre-Processing Hooks

Backends support a pre-processor function to transform task descriptions:

```python
def custom_pre_processor(task, descriptors):
    return f"[TASK] {task}\n[DESCRIPTORS] {descriptors}"

openai_backend.set_pre_processor(custom_pre_processor)
```
## Generated Output Format

`generate_code(...)` returns a mapping of file names to file contents:

```python
{
    "main.py": "<generated Python code>",
    "requirements.txt": "<auto-extracted dependencies>"
}
```
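As a minimal sketch of how this mapping might be consumed (the helper below is hypothetical, not part of the SDK), the returned files can be written to a scratch directory before inspection or execution:

```python
import pathlib
import tempfile


def write_generated_files(files: dict) -> pathlib.Path:
    """Persist a {filename: content} mapping to a fresh temporary directory."""
    out_dir = pathlib.Path(tempfile.mkdtemp(prefix="codegen_"))
    for name, content in files.items():
        (out_dir / name).write_text(content)
    return out_dir


# Stand-in result shaped like the SDK's output:
files = {
    "main.py": "def add(a, b):\n    return a + b\n",
    "requirements.txt": "",
}
out = write_generated_files(files)
print(sorted(p.name for p in out.iterdir()))  # → ['main.py', 'requirements.txt']
```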
## Executing a Generated Function

```python
sandbox.execute_function(
    parameters={"a": 5, "b": 10},
    input_data={"trigger": True},
    context={}
)
```
## Cleaning Up

```python
sandbox.cleanup()  # Remove temporary files
```
## Registering Multiple Generators (CodeGenerators API)

```python
code_gen_api = CodeGenerators({}, {}, {})
code_gen_api.register_code_generator("openai", openai_backend)
code_gen_api.register_code_generator("aios", aios_backend)

inst = code_gen_api.generate_and_instantiate(
    name="openai",
    task_description="Create a function subtract(a, b)"
)
print(inst.function_instance.subtract(10, 4))
```
## Writing a Custom Backend

Implement the `CodeGeneratorAPI` interface:

```python
from typing import Dict, Tuple

from core.generator import CodeGeneratorAPI


class MyRESTLLM(CodeGeneratorAPI):
    def generate(self, task_description: str, descriptors: Dict[str, str]) -> str:
        # Call your REST API and return raw Python code as a string
        ...

    def check_execute(self, query: dict) -> Tuple[bool, dict]:
        # Report whether this backend can currently serve the query
        return True, {}

    def get_current_estimates(self) -> Tuple[bool, dict]:
        # Report the backend's current estimate metadata
        return True, {}
```

Register it:

```python
code_gen_api.register_code_generator("custom", MyRESTLLM())
```
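The dynamic-instantiation step a backend feeds into can be illustrated without the SDK. The sketch below is a standalone approximation (not the SDK's actual loader) of how a raw code string, as returned by a backend's `generate`, can be turned into a live object with `exec` in an isolated namespace:

```python
def instantiate_from_source(source: str, class_name: str):
    """Execute generated source in a fresh namespace and return an instance."""
    namespace: dict = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace[class_name]()


# A code string shaped like typical backend output:
generated = """
class Calculator:
    def multiply(self, a, b):
        return a * b
"""

calc = instantiate_from_source(generated, "Calculator")
print(calc.multiply(6, 7))  # → 42
```

Executing into a fresh namespace keeps generated definitions from leaking into (or reading) the caller's globals, which is one reason the SDK mediates instantiation rather than having clients `exec` generated code directly.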