Gemma 3N GenAI

Integration with Google Gemini and xAI models through the modern Hono web framework, with WebSocket support.

Demonstrates capabilities:

  • Google Gemini AI integration
  • xAI (Grok) model support
  • Modern web API with Hono framework
  • WebSocket real-time communication
  • Multi-provider AI architecture

Key features:

  • Google Gemini Integration: Access to the latest Gemini models
  • xAI Support: Grok models for advanced reasoning
  • Hono Web Framework: Fast, modern web API
  • WebSocket Support: Real-time bidirectional communication
  • REPL Interface: Interactive development environment

Tech stack:

  • Runtime: Bun
  • Language: TypeScript
  • AI Framework: agent-swarm-kit
  • Web Framework: Hono
  • AI Providers: Google Gemini, xAI
  • Protocols: HTTP, WebSocket
Project structure:

```
src/
├── repl.ts       # REPL interface
└── lib/
    └── swarm.ts  # Swarm configuration
```

Quick start:

```bash
# Install dependencies
bun install

# Start the REPL
bun run src/repl.ts
```

Create a .env file:

```env
GOOGLE_API_KEY=your_google_api_key
XAI_API_KEY=your_xai_api_key
GEMINI_MODEL=gemini-pro
GROK_MODEL=grok-2
PORT=3000
```
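
These variables could be read into a typed config object at startup; a minimal sketch, assuming the defaults shown above (`AppConfig` and `loadConfig` are names invented here):

```typescript
type AppConfig = {
  googleApiKey: string;
  xaiApiKey: string;
  geminiModel: string;
  grokModel: string;
  port: number;
};

// Read configuration from an env map, failing fast on missing
// API keys and falling back to the documented defaults.
function loadConfig(
  env: Record<string, string | undefined> = process.env,
): AppConfig {
  const required = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`Missing required env var: ${name}`);
    return value;
  };
  return {
    googleApiKey: required("GOOGLE_API_KEY"),
    xaiApiKey: required("XAI_API_KEY"),
    geminiModel: env["GEMINI_MODEL"] ?? "gemini-pro",
    grokModel: env["GROK_MODEL"] ?? "grok-2",
    port: Number(env["PORT"] ?? 3000),
  };
}
```

Failing fast on the API keys surfaces misconfiguration at boot rather than on the first provider call.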

To get a Google API key:

  1. Go to Google AI Studio
  2. Create a new project
  3. Generate an API key
  4. Add it to the .env file

Available Gemini models:

  • Gemini Pro: General-purpose model
  • Gemini Pro Vision: Multimodal with image support
  • Gemini Ultra: Most capable model (when available)

To set up xAI:

  1. Get access to the xAI API
  2. Create an API key
  3. Select a Grok model

Available Grok models:

  • Grok-2: Latest reasoning model
  • Grok-2-Mini: Faster, cost-effective option
  • Grok-1: Previous generation
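
The model lists above can be routed to their providers with a small registry; a sketch (the `Provider` union and the record shape are assumptions):

```typescript
type Provider = "google" | "xai";

// Models documented above, mapped to the provider that serves them.
const MODELS: Record<string, Provider> = {
  "gemini-pro": "google",
  "gemini-pro-vision": "google",
  "gemini-ultra": "google",
  "grok-2": "xai",
  "grok-2-mini": "xai",
  "grok-1": "xai",
};

// Resolve a model name to its provider, rejecting unknown names
// early instead of letting a provider SDK fail later.
function providerFor(model: string): Provider {
  const provider = MODELS[model];
  if (!provider) throw new Error(`Unknown model: ${model}`);
  return provider;
}
```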

Example prompts:

General knowledge:

  • "Explain quantum computing"
  • "Write a business plan for a startup"
  • "Help solve a mathematical problem"

Creative writing:

  • "Write a poem about nature"
  • "Create a movie plot"
  • "Develop a character description"

Technical questions:

  • "Explain microservices architecture"
  • "How to optimize a React application?"
  • "Best practices for TypeScript"

Model comparison:

| Feature    | Gemini Pro | Grok-2   | Grok-2-Mini |
|------------|------------|----------|-------------|
| Reasoning  | Excellent  | Superior | Good        |
| Speed      | Fast       | Medium   | Very Fast   |
| Cost       | Low        | High     | Medium      |
| Multimodal | Yes        | Limited  | No          |

Chat completion over HTTP:

```http
POST /api/chat
Content-Type: application/json

{
  "message": "Hello, how are you?",
  "model": "gemini-pro"
}
```
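
The same request can be built from client code; a sketch (`buildChatRequest` is a helper invented here, and the endpoint and port follow the examples in this README):

```typescript
// Build a fetch-API Request for the chat endpoint.
function buildChatRequest(message: string, model: string): Request {
  return new Request("http://localhost:3000/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, model }),
  });
}

// Usage (against a running server):
//   const res = await fetch(buildChatRequest("Hello", "gemini-pro"));
```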

WebSocket client example (the send must wait for the connection to open):

```typescript
const ws = new WebSocket('ws://localhost:3000/ws');
ws.onopen = () => {
  ws.send(JSON.stringify({
    message: "Hello via WebSocket",
    model: "grok-2"
  }));
};
ws.onmessage = (event) => console.log(event.data);
```

Model selection strategy:

  • Fast Responses: Grok-2-Mini for quick interactions
  • Complex Reasoning: Grok-2 for advanced tasks
  • Multimodal: Gemini Pro Vision for image tasks
  • Cost Optimization: Gemini Pro for general use

Performance optimizations:

  • Response caching for common queries
  • Model warming for faster startup
  • Connection pooling for efficiency
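
The response-caching idea above can be sketched as an in-memory map keyed by model and message (no TTL or size bound, and `complete` is a placeholder for the real provider call):

```typescript
const cache = new Map<string, string>();

// Return a cached answer when the same (model, message) pair has
// been seen before; otherwise call the provider and remember it.
async function cachedComplete(
  model: string,
  message: string,
  complete: (model: string, message: string) => Promise<string>,
): Promise<string> {
  const key = `${model}\u0000${message}`;
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  const answer = await complete(model, message);
  cache.set(key, answer);
  return answer;
}
```

A production version would bound the map and expire entries, but the lookup-then-populate shape stays the same.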

Ideal for:

  • Advanced AI applications
  • Research and experimentation
  • Educational platforms
  • Creative writing tools
  • Technical consulting

```bash
# Development mode with auto-reload
bun --watch src/repl.ts
```

```typescript
// Enable debug logging
process.env.DEBUG = "true";
```

```bash
# Test different models
bun run test:gemini
bun run test:grok
```

Example Dockerfile:

```dockerfile
FROM oven/bun:latest
WORKDIR /app
COPY . .
RUN bun install
EXPOSE 3000
CMD ["bun", "run", "src/index.ts"]
```

Production environment variables:

```env
NODE_ENV=production
LOG_LEVEL=info
RATE_LIMIT=100
```
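
RATE_LIMIT could be enforced with a simple fixed-window counter per client; a minimal sketch (the per-minute window and the `makeRateLimiter` name are assumptions):

```typescript
type RateWindow = { count: number; resetAt: number };

// Returns a function that allows up to `limit` calls per client
// within each window, resetting when the window elapses.
function makeRateLimiter(limit: number, windowMs = 60_000) {
  const windows = new Map<string, RateWindow>();
  return (clientId: string, now = Date.now()): boolean => {
    const w = windows.get(clientId);
    if (!w || now >= w.resetAt) {
      windows.set(clientId, { count: 1, resetAt: now + windowMs });
      return true;
    }
    if (w.count >= limit) return false;
    w.count += 1;
    return true;
  };
}
```

Wired into a Hono middleware, a `false` result would translate to an HTTP 429 response.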