Structured JSON Output

This project integrates local Ollama models through the agent-swarm-kit framework for structured data extraction and AI-driven processing.

Demonstrates capabilities:

  • Ollama AI model integration
  • Structured data extraction
  • JSON-based response formatting
  • Validation and error handling
  • Asynchronous processing with history tracking

Key features:

  • Ollama Integration: Access to local Ollama models (e.g., gemma3:4b)
  • Structured Data Extraction: Extracts formatted data from unstructured input
  • JSON Repair: Ensures valid JSON output using jsonrepair
  • Validation: Custom rules for data integrity (e.g., positive age)
  • History Tracking: Maintains conversation context for processing

Tech stack:

  • Runtime: Bun
  • Language: TypeScript
  • AI Framework: agent-swarm-kit
  • AI Provider: Ollama
  • Protocols: HTTP (Ollama API)
```
src/
├── repl.ts       # REPL interface for testing
└── lib/
    └── swarm.ts  # Swarm and completion configuration
```
```sh
# Install dependencies
bun install

# Start REPL
bun run src/repl.ts
```

Create a .env file:

```env
OLLAMA_HOST=http://127.0.0.1:11434
OLLAMA_MODEL=gemma3:4b
KEEP_ALIVE=24h
```
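The variables above can be resolved with fallbacks matching the defaults shown. A minimal sketch (the helper name is illustrative, not part of agent-swarm-kit):

```typescript
// Hypothetical helper: resolve Ollama settings from an env record,
// falling back to the defaults documented above.
interface OllamaConfig {
  host: string;
  model: string;
  keepAlive: string;
}

function resolveOllamaConfig(
  env: Record<string, string | undefined>
): OllamaConfig {
  return {
    host: env.OLLAMA_HOST ?? "http://127.0.0.1:11434",
    model: env.OLLAMA_MODEL ?? "gemma3:4b",
    keepAlive: env.KEEP_ALIVE ?? "24h",
  };
}

// In the app this would be called with process.env (or Bun.env).
const config = resolveOllamaConfig(process.env);
```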
  1. Install Ollama locally
  2. Pull the desired model (e.g., ollama pull gemma3:4b)
  3. Ensure Ollama server is running on http://127.0.0.1:11434
  • Gemma3:4b: Lightweight, efficient model for structured tasks
  • Other Models: Any model supported by Ollama (e.g., LLaMA, Mistral)
Example prompts:

  • "Extract name and age from: John Doe is 30 years old"
  • "Parse: Alice Smith, age 25, lives in New York"
  • "Extract data from: Bob is a 40-year-old engineer"
Example output:

```json
{
  "name": "John Doe",
  "age": 30
}
```
Validation example:

  • Input: "Mike is -5 years old"
  • Output: Error: "Age must be positive"
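The positive-age rule can be pictured as a plain validation function (names here are illustrative; the actual rule lives in the completion's validations in src/lib/swarm.ts):

```typescript
interface Person {
  name: string;
  age: number;
}

// Illustrative validation mirroring the "Age must be positive" error above.
function validatePerson(data: Person): Person {
  if (!data.name || data.name.trim() === "") {
    throw new Error("Name must not be empty");
  }
  if (data.age <= 0) {
    throw new Error("Age must be positive");
  }
  return data;
}
```

Calling `validatePerson({ name: "Mike", age: -5 })` throws `Error: Age must be positive`, so invalid extractions never propagate.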
| Feature    | Gemma3:4b    |
| ---------- | ------------ |
| Reasoning  | Good         |
| Speed      | Fast         |
| Cost       | Free (local) |
| Multimodal | No           |
```
# Extract structured data
POST /api/completion
{
  "message": "John Doe is 30 years old",
  "completion": "test_completion"
}
```
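A client request against this endpoint might be built like this (the payload shape is taken from the example above; the base URL is an assumption — adjust to wherever the service runs):

```typescript
// Hypothetical client helper for the completion endpoint shown above.
const BASE_URL = "http://localhost:3000"; // assumption

function buildCompletionRequest(message: string): {
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, completion: "test_completion" }),
  };
}

// await fetch(`${BASE_URL}/api/completion`,
//   buildCompletionRequest("John Doe is 30 years old"));
```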
The response is:

  • A JSON object with name and age
  • Validated to ensure a positive age
  • Repaired JSON for consistency
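The repair step uses the jsonrepair library. As a simplified stand-in for illustration, a parser might strip trailing commas before `JSON.parse` (the real library handles far more cases, such as unquoted keys, single quotes, and truncated output):

```typescript
// Simplified stand-in for jsonrepair: removes trailing commas before
// closing braces/brackets, then parses. Illustration only.
function parseModelJson<T>(raw: string): T {
  const repaired = raw.replace(/,\s*([}\]])/g, "$1");
  return JSON.parse(repaired) as T;
}

// Model output with a trailing comma, as local models sometimes emit:
const person = parseModelJson<{ name: string; age: number }>(
  '{"name": "John Doe", "age": 30,}'
);
```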
  • Gemma3:4b: Ideal for lightweight, local processing
  • Keep-Alive: Configurable model persistence (default: 24h)
  • Validation: Prevents invalid data from propagating
  • Outline history for context-aware processing
  • Local model execution for low latency
  • JSON repair for robust output

Ideal for:

  • Data extraction from unstructured text
  • Local AI processing without cloud dependency
  • Rapid prototyping of AI-driven applications
  • Educational tools for structured data handling
  • Lightweight automation scripts
```sh
# Development mode with auto-reload
bun --watch src/repl.ts
```

```ts
// Enable debug logging
process.env.DEBUG = "true";
```

```sh
# Test data extraction
bun run test:completion
```
```dockerfile
FROM oven/bun:latest
COPY . .
RUN bun install
EXPOSE 3000
CMD ["bun", "run", "src/index.ts"]
```
```env
# Production environment
NODE_ENV=production
LOG_LEVEL=info
RATE_LIMIT=100
```
  • onValidDocument: Saves valid results to ./dump/outline
  • Validations: Ensures age is positive, throws error if invalid
  • History Tracking: Pushes user input to conversation history
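The history-tracking behavior can be pictured as a simple message buffer (a sketch only; agent-swarm-kit maintains conversation history internally):

```typescript
type Role = "user" | "assistant" | "system";

interface HistoryMessage {
  role: Role;
  content: string;
}

// Minimal sketch of a conversation history buffer: each user input is
// pushed, and the accumulated messages form the context sent to the model.
class ConversationHistory {
  private messages: HistoryMessage[] = [];

  push(role: Role, content: string): void {
    this.messages.push({ role, content });
  }

  // Snapshot of the context for the next model request.
  toContext(): HistoryMessage[] {
    return [...this.messages];
  }
}

const history = new ConversationHistory();
history.push("user", "Extract name and age from: John Doe is 30 years old");
```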
  • Requires local Ollama server running
  • JSON output is automatically repaired for consistency
  • Extensible for additional validations or models
  • Lightweight and suitable for low-resource environments