Commit Graph

4 Commits

Author SHA1 Message Date
root
10563900a3 Implement comprehensive LLM provider system with global cost protection
- Add multi-provider LLM architecture supporting OpenRouter, OpenAI, Gemini, and custom providers
- Implement global LLM on/off switch with default DISABLED state for cost protection
- Add per-character LLM configuration with provider-specific models and settings
- Create performance-optimized caching system for LLM enabled status checks
- Add API key validation before enabling LLM providers to prevent broken configurations
- Implement audit logging for all LLM enable/disable actions for cost accountability
- Create comprehensive admin UI with prominent cost warnings and confirmation dialogs
- Add visual indicators in character list for custom AI model configurations
- Build character-specific LLM client system with global fallback mechanism
- Add database schema support for per-character LLM settings
- Implement graceful fallback responses when LLM is globally disabled
- Create provider testing and validation system for reliable connections
2025-07-08 07:35:48 -07:00
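
A minimal sketch of how the global kill switch, cached enabled-status check, and graceful fallback described in 10563900a3 could fit together. The `settings_store`/`audit_log` interfaces, the 30-second cache TTL, and the fallback wording are illustrative assumptions, not code from the repository:

```python
import time

FALLBACK_REPLY = "The AI features are currently switched off."  # assumed wording

class LLMGate:
    """Global LLM on/off switch; defaults to disabled so a fresh install cannot accrue API costs."""

    def __init__(self, settings_store, audit_log, cache_ttl: float = 30.0):
        self._settings = settings_store      # hypothetical key/value settings backend
        self._audit = audit_log              # hypothetical audit logger
        self._cache_ttl = cache_ttl
        self._cached = None
        self._cached_at = 0.0

    def is_enabled(self) -> bool:
        # Cache the flag briefly so hot message paths don't hit storage every time.
        now = time.monotonic()
        if self._cached is None or now - self._cached_at > self._cache_ttl:
            self._cached = bool(self._settings.get("llm_enabled", False))  # default: disabled
            self._cached_at = now
        return self._cached

    def set_enabled(self, enabled: bool, actor: str) -> None:
        # Every toggle is audit-logged for cost accountability.
        self._audit.record(actor=actor, action="llm_enabled", value=enabled)
        self._settings.set("llm_enabled", enabled)
        self._cached, self._cached_at = enabled, time.monotonic()


def generate_reply(gate: LLMGate, character, prompt: str) -> str:
    if not gate.is_enabled():
        return FALLBACK_REPLY                                      # graceful fallback, no provider call
    client = character.llm_client or character.global_llm_client  # per-character config, global fallback
    return client.complete(prompt)
```

Defaulting the flag to disabled and caching it briefly keeps the per-message path cheap while still making every enable/disable action traceable.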
matt
004f0325ec Fix comprehensive system issues and implement proper vector database backend selection
- Fix scheduler.py spamming reflection memories even when no characters are active
- Add character enable/disable functionality to admin interface
- Fix Docker configuration with proper network setup and service dependencies
- Resolve admin interface JavaScript errors and login issues
- Fix MCP import paths for updated package structure
- Add comprehensive character management with audit logging
- Implement proper character state management and persistence
- Fix database connectivity and initialization issues
- Add missing audit service for admin operations
- Complete Docker stack integration with all required services

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-06 19:54:49 -07:00
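
A rough sketch of the audit-logged character enable/disable flow referenced in 004f0325ec. The repository and audit interfaces are hypothetical placeholders standing in for whatever persistence layer the admin interface actually uses:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    actor: str
    action: str
    target: str
    at: datetime

class CharacterAdmin:
    """Enable or disable characters from the admin interface; every change is audit-logged."""

    def __init__(self, character_repo, audit_repo):
        self._characters = character_repo   # hypothetical persistence layer
        self._audit = audit_repo            # hypothetical audit store

    def set_active(self, character_id: str, active: bool, actor: str) -> None:
        character = self._characters.get(character_id)
        character.active = active
        self._characters.save(character)    # persisted state, so the scheduler can skip disabled characters
        self._audit.add(AuditEvent(
            actor=actor,
            action="character_enabled" if active else "character_disabled",
            target=character_id,
            at=datetime.now(timezone.utc),  # timezone-aware timestamp
        ))
```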
root
5480219901 Fix comprehensive system issues and implement proper vector database backend selection
- Fix remaining datetime timezone errors across all database operations
- Implement dynamic vector database backend (Qdrant/ChromaDB) based on install.py configuration
- Add LLM timeout handling with immediate fallback responses for slow self-hosted models
- Use the install.py configuration values (2000 max tokens, 5-minute timeout, correct LLM endpoint)
- Fix PostgreSQL schema to use timezone-aware columns throughout
- Implement async LLM request handling with background processing
- Add configurable prompt limits and conversation history controls
- Start missing database services (PostgreSQL, Redis) automatically
- Fix environment variable mapping between install.py and application code
- Resolve all timezone-naive vs timezone-aware datetime conflicts

The system now uses the Qdrant vector database as specified in install.py instead of the previously hardcoded ChromaDB.
Characters respond immediately with fallback messages during long LLM processing times.
All database timezone errors resolved with proper timestamptz columns.
2025-07-05 21:31:52 -07:00
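
One way the backend selection described in 5480219901 might be wired up, assuming the installer exposes a setting such as `VECTOR_DB`. The setting names, defaults, and URLs here are guesses for illustration, not values taken from install.py:

```python
import os

def make_vector_store(settings: dict):
    """Choose the vector database backend from the installer's configuration
    instead of hardcoding ChromaDB."""
    backend = settings.get("VECTOR_DB", os.getenv("VECTOR_DB", "qdrant")).lower()
    if backend == "qdrant":
        # Imported lazily so only the selected backend needs to be installed.
        from qdrant_client import QdrantClient
        return QdrantClient(url=settings.get("QDRANT_URL", "http://localhost:6333"))
    if backend == "chromadb":
        import chromadb
        return chromadb.HttpClient(host=settings.get("CHROMA_HOST", "localhost"))
    raise ValueError(f"Unsupported vector database backend: {backend}")
```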
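
And a sketch of the immediate-fallback pattern for slow self-hosted models: wait briefly for the completion, answer with a placeholder if it is not ready, and keep the real request running in the background. The 5-second first-reply budget, the fallback text, and the return shape are assumptions:

```python
import asyncio

FALLBACK_TEXT = "Give me a moment - I'm still thinking about that one."

async def respond(client, prompt: str, first_reply_budget: float = 5.0):
    """Return (text, pending_task). If the model answers within the budget,
    pending_task is None; otherwise the fallback text is returned immediately
    and the still-running task is handed back for background processing."""
    task = asyncio.create_task(client.complete(prompt, max_tokens=2000))
    try:
        # shield() keeps the underlying request alive even if the wait times out.
        text = await asyncio.wait_for(asyncio.shield(task), timeout=first_reply_budget)
        return text, None
    except asyncio.TimeoutError:
        # Caller can await `task` later and deliver the full reply as a follow-up.
        return FALLBACK_TEXT, task
```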
root
3d9e8ffbf0 Fix Docker startup script and complete application deployment
- Update docker-start.sh to force correct profiles (qdrant, admin)
- Fix PostgreSQL port mapping from 5432 to 15432 across all configs
- Resolve MCP import conflicts by renaming src/mcp to src/mcp_servers
- Fix admin interface StaticFiles mount syntax error
- Update LLM client to support both Ollama and OpenAI-compatible APIs
- Configure host networking for Discord bot container access
- Correct database connection handling for async context managers
- Update environment variables and Docker compose configurations
- Add missing production dependencies and Dockerfile improvements
2025-07-05 15:09:29 -07:00
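
A minimal illustration of the async-context-manager connection handling mentioned in 3d9e8ffbf0, assuming an asyncpg pool; the actual driver, DSN, and class names are not specified in the commit:

```python
import asyncpg
from contextlib import asynccontextmanager

class Database:
    def __init__(self, dsn: str):
        self._dsn = dsn
        self._pool = None

    async def connect(self) -> None:
        self._pool = await asyncpg.create_pool(self._dsn)

    @asynccontextmanager
    async def connection(self):
        # Yield a pooled connection; it is released automatically when the
        # async context manager exits, even on error.
        async with self._pool.acquire() as conn:
            yield conn

# Usage (note the remapped host port 15432 from the Docker configuration):
#   db = Database("postgresql://app@localhost:15432/app")
#   await db.connect()
#   async with db.connection() as conn:
#       await conn.fetch("SELECT 1")
```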