Stop Writing Prompts Inside Your Code

A YAML-based configuration management tool for separating code from prompts in LLM applications.

May 27, 2025

If you're building LLM applications, you've probably found yourself hardcoding prompts directly into your Python/TS files, leaving them cluttered, hard to maintain, and, let's be honest, a bit ugly.

There are several benefits to separating code from config/prompts:

  • Version control: Track prompt changes separately from code changes
  • Clean separation of concerns: Code handles logic, YAML handles config/prompts
  • Prompts are language agnostic: Port your config to any language
  • Environment-specific configs: Different config files for dev/staging/prod (see the sketch after this list)
  • Easy experimentation: Swap model configs or prompts with one line change
  • A/B testing: Compare different combinations easily
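
For the environment-specific point, a minimal sketch of what this can look like (the APP_ENV variable and the config.dev.yaml-style file names are illustrative, not part of konfigure):

import os
import konfigure

# Pick a config file based on the current environment (names are illustrative)
env = os.environ.get("APP_ENV", "dev")         # dev, staging, or prod
config = konfigure.load(f"config.{env}.yaml")  # e.g. config.dev.yaml, config.prod.yaml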

The Problem with Embedded Prompts

Consider this typical pattern:

def analyze_sentiment(text):
    system_prompt = f"""
    You are an expert sentiment analyzer. 
    Analyze the sentiment of the following text: {text}
    Respond with only: positive, negative, or neutral
    ...
    10 more lines of prompt
    ...
    """
    response = llm.chat([
        {"role": "system", "content": system_prompt},
        {"role": "assistant", "content": "Now explain your reasoning"}
    ])
    return response

This approach has several issues:

  • No separation of concerns: Business logic and prompt content are tightly coupled
  • Hard to iterate: Every prompt change requires code changes and redeployment
  • No version control for prompts: Prompt changes get lost in code commits
  • Difficult collaboration: Non-developers can't easily modify prompts

Konfigure: A Configuration-Driven LLM Agent Development Tool

Konfigure is a YAML-based configuration management tool designed specifically for LLM applications. It solves the prompt management problem by treating prompts as configuration, not code.

Here's what makes konfigure special: every string in your YAML automatically becomes a Jinja2 template. This means you can use variables, conditionals, and filters without any extra setup.

A Practical Example

Let's create a config.yaml file for a customer support chatbot:

Note: the strings in this YAML use Jinja2 template syntax.

model:
  name: "gpt-4"
  temperature: 0.7
  max_tokens: 500

prompts:
  system: |
    You are a helpful customer support agent for {{ company_name }}.
    Your tone should be {{ tone | default("professional and friendly") }}.
    
    {% if escalation_available %}
    If you cannot resolve the issue, offer to escalate to a human agent.
    {% endif %}
  
  user_greeting: |
    Hello! I'm here to help with {{ issue_type }}.
    {% if customer_tier == "premium" %}
    As a premium customer, you'll receive priority support.
    {% endif %}
    
    How can I assist you today?

customer_data:
  company_name: TechCorp
  escalation_available: true
  default_tone: friendly and professional

Now your Python code becomes clean and focused:

import konfigure

# Load configuration
config = konfigure.load('config.yaml')

def handle_customer_query(customer_tier, issue_type):
    # Render templates with dynamic data
    system_prompt = config.prompts.system.render(
        company_name=config.customer_data.company_name,
        tone=config.customer_data.default_tone,
        escalation_available=config.customer_data.escalation_available
    )
    
    greeting = config.prompts.user_greeting.render(
        customer_tier=customer_tier,
        issue_type=issue_type
    )
    
    # Use with your LLM
    response = llm.chat([
        {"role": "system", "content": system_prompt},
        {"role": "assistant", "content": greeting}
    ])
    
    return response
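
For instance, with the customer_data values above (and escalation enabled), config.prompts.system renders to roughly the following; exact blank lines depend on Jinja2's whitespace settings:

You are a helpful customer support agent for TechCorp.
Your tone should be friendly and professional.

If you cannot resolve the issue, offer to escalate to a human agent.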

Advanced Uses: Multi-Agent Configurations with YAML Aliases

Konfigure really shines when building complex multi-agent systems. Using YAML aliases, you can create reusable configurations and mix and match components:

# Define reusable model configurations
gpt_4_config: &gpt_4_config
  name: "gpt-4"
  temperature: 0.7
  max_tokens: 1000

claude_config: &claude_config
  name: "claude-3-sonnet"
  temperature: 0.5
  max_tokens: 2000

# Define reusable prompt templates
analytical_prompt: &prompt_analytical |
  You are a data analyst. Analyze the following information objectively:
  {{ data }}
  
  Provide insights in this format:
  - Key findings: {{ findings }}
  - Recommendations: {{ recommendations }}

creative_prompt: &prompt_creative |
  You are a creative writer. Transform this data into an engaging narrative:
  {{ data }}
  
  Make it {{ tone | default("professional yet engaging") }}.

# Configure different agents using aliases
agents:
  data_analyst:
    model: *gpt_4_config
    prompt: *prompt_analytical
    role: "analytical"
    
  content_writer:
    model: *claude_config
    prompt: *prompt_creative
    role: "creative"
    
  senior_analyst:
    model: *gpt_4_config  # Same model as data_analyst
    prompt: *prompt_creative  # But different prompt
    role: "senior"
    temperature_override: 0.9  # Override specific settings
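
Because aliases are expanded at parse time, each agent entry ends up with its own complete model config and prompt template. Here is a minimal sketch of consuming this file, assuming the same attribute access and .render() API as the earlier example (the agents.yaml file name and sample data are illustrative):

import konfigure

# Illustrative file name for the multi-agent YAML above
config = konfigure.load('agents.yaml')

analyst = config.agents.data_analyst
writer = config.agents.content_writer

# Aliases are resolved by the YAML parser, so each agent carries full model settings
print(analyst.model.name, analyst.model.temperature)   # gpt-4 0.7
print(writer.model.name, writer.model.max_tokens)      # claude-3-sonnet 2000

# Each prompt string is a Jinja2 template and renders like any other
analyst_prompt = analyst.prompt.render(
    data="Q3 revenue by region ...",
    findings="3-5 bullets",            # illustrative format hints
    recommendations="2-3 action items",
)
writer_prompt = writer.prompt.render(data="Q3 revenue by region ...")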

Python and TypeScript Support

To install konfigure for Python:

pip install konfigure

Konfigure is also available for TypeScript with the same developer experience:

npm install konfigure

Check out the full documentation and examples at github.com/sunnybak/konfigure.