- Add multi-provider LLM architecture supporting OpenRouter, OpenAI, Gemini, and custom providers (provider selection sketched after this list)
- Implement a global LLM on/off switch that defaults to DISABLED for cost protection (cached status check and fallback reply sketched below)
- Add per-character LLM configuration with provider-specific models and settings
- Cache LLM enabled-status checks to avoid repeated lookups on every request
- Add API key validation before enabling LLM providers to prevent broken configurations (validation probe sketched below)
- Add audit logging of all LLM enable/disable actions to support cost accountability (sketched below)
- Create comprehensive admin UI with prominent cost warnings and confirmation dialogs
- Add visual indicators in character list for custom AI model configurations
- Build character-specific LLM client system with global fallback mechanism
- Add database schema support for per-character LLM settings (illustrative table model below)
- Implement graceful fallback responses when LLM is globally disabled
- Create provider testing and validation system for reliable connections
- Update docker-start.sh to force the correct Compose profiles (qdrant, admin)
- Fix PostgreSQL port mapping from 5432 to 15432 across all configs
- Resolve import conflicts between src/mcp and the installed mcp package by renaming the directory to src/mcp_servers
- Fix admin interface StaticFiles mount syntax error (correct call shown below)
- Update LLM client to support both Ollama's native API and OpenAI-compatible APIs (sketched below)
- Configure host networking so the Discord bot container can reach local services
- Correct database connection handling to properly use async context managers (sketched below)
- Update environment variables and Docker compose configurations
- Add missing production dependencies and Dockerfile improvements
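
For context on the multi-provider architecture, per-character configuration, and global fallback, the selection path might look roughly like this sketch. Every name here (`LLMSettings`, `LLMClient`, `client_for_character`) is illustrative, not the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class LLMSettings:
    provider: str                 # "openrouter" | "openai" | "gemini" | "custom"
    model: str
    api_key: str
    base_url: str | None = None   # only needed for "custom" providers

class LLMClient:
    """Hypothetical wrapper; provider-specific request logic would live here."""
    def __init__(self, settings: LLMSettings):
        self.settings = settings

def client_for_character(character, global_settings: LLMSettings) -> LLMClient:
    # Per-character settings win; otherwise fall back to the global config.
    settings = getattr(character, "llm_settings", None) or global_settings
    return LLMClient(settings)
```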
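A minimal sketch of the global kill switch, the cached status check, and the graceful fallback reply; the 30-second TTL and the `fetch_setting` helper are assumptions, not from this changelog:

```python
import time

_CACHE_TTL = 30.0              # seconds; assumed value
_cached: tuple[float, bool] | None = None

async def llm_enabled(db) -> bool:
    """Read the global switch, caching it briefly to avoid a lookup per message."""
    global _cached
    now = time.monotonic()
    if _cached and now - _cached[0] < _CACHE_TTL:
        return _cached[1]
    # fetch_setting is a hypothetical settings accessor; note the OFF default.
    enabled = await db.fetch_setting("llm_enabled", default=False)
    _cached = (now, enabled)
    return enabled

async def respond(db, prompt: str) -> str:
    if not await llm_enabled(db):
        # Graceful canned reply instead of an error while LLM is disabled.
        return "AI responses are currently disabled."
    ...  # normal LLM path
```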
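Key validation before a provider can be enabled can be as simple as probing the provider's model-listing endpoint. This sketch assumes an OpenAI-compatible API with `base_url` including the `/v1` prefix; `validate_api_key` is a hypothetical name:

```python
import httpx

async def validate_api_key(base_url: str, api_key: str) -> bool:
    """Hypothetical probe: a 200 from /models means the key is usable."""
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(
            f"{base_url}/models",
            headers={"Authorization": f"Bearer {api_key}"},
        )
        return resp.status_code == 200
```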
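The audit trail for enable/disable actions, sketched with the standard logging module; the logger name and helper are illustrative:

```python
import logging

audit_log = logging.getLogger("llm.audit")  # assumed logger name

def record_llm_toggle(admin_user: str, enabled: bool, reason: str = "") -> None:
    # One record per toggle so cost-relevant changes trace back to an admin.
    audit_log.info("llm_enabled=%s changed_by=%s reason=%s",
                   enabled, admin_user, reason)
```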
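One plausible shape for the per-character settings table, sketched as a SQLAlchemy model; the table and column names are assumptions inferred from the bullets above, not the actual schema:

```python
from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class CharacterLLMSettings(Base):
    __tablename__ = "character_llm_settings"  # assumed name

    id = Column(Integer, primary_key=True)
    character_id = Column(Integer, nullable=False, unique=True)
    llm_enabled = Column(Boolean, nullable=False, default=False)
    provider = Column(String)   # NULL means "use the global provider"
    model = Column(String)
    api_key = Column(String)
```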
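For the admin StaticFiles fix: the correct FastAPI/Starlette mount takes the URL path, the `StaticFiles` instance, and a keyword `name` (the directory shown is illustrative):

```python
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()
# Path first, then the app to mount, then the route name as a keyword.
app.mount("/static", StaticFiles(directory="static"), name="static")
```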
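A sketch of an LLM client speaking both APIs: Ollama's native `/api/generate` and an OpenAI-compatible `/chat/completions`. The `api_style` switch and function name are hypothetical:

```python
import httpx

async def complete(base_url: str, model: str, prompt: str,
                   api_style: str = "openai", api_key: str = "") -> str:
    async with httpx.AsyncClient(timeout=60) as client:
        if api_style == "ollama":
            # Ollama's native, non-streaming generate endpoint.
            resp = await client.post(
                f"{base_url}/api/generate",
                json={"model": model, "prompt": prompt, "stream": False},
            )
            return resp.json()["response"]
        # Any OpenAI-compatible server (OpenAI, OpenRouter, etc.).
        resp = await client.post(
            f"{base_url}/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model": model,
                  "messages": [{"role": "user", "content": prompt}]},
        )
        return resp.json()["choices"][0]["message"]["content"]
```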
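The async context manager correction likely amounts to acquiring pooled connections with `async with`; sketched here with asyncpg, though the changelog doesn't name the driver:

```python
import asyncpg

async def fetch_character(pool: asyncpg.Pool, character_id: int):
    # acquire() must be entered as an *async* context manager; a plain
    # `with` fails, and a bare acquire() never returns the connection.
    async with pool.acquire() as conn:
        return await conn.fetchrow(
            "SELECT * FROM characters WHERE id = $1", character_id
        )
```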