- Add multi-provider LLM architecture supporting OpenRouter, OpenAI, Gemini, and custom providers
- Implement global LLM on/off switch with default DISABLED state for cost protection
- Add per-character LLM configuration with provider-specific models and settings
- Create performance-optimized caching system for LLM enabled status checks
- Add API key validation before enabling LLM providers to prevent broken configurations
- Implement audit logging for all LLM enable/disable actions for cost accountability
- Create comprehensive admin UI with prominent cost warnings and confirmation dialogs
- Add visual indicators in character list for custom AI model configurations
- Build character-specific LLM client system with global fallback mechanism
- Add database schema support for per-character LLM settings
- Implement graceful fallback responses when LLM is globally disabled
- Create provider testing and validation system for reliable connections
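The default-disabled switch, cached status check, and graceful fallback described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the `LLMGate` class, the `settings_store` interface, and the TTL value are all hypothetical names chosen for the example.

```python
import time


class LLMGate:
    """Hypothetical sketch of a global LLM on/off switch.

    Defaults to DISABLED for cost protection; the enabled check is
    cached with a short TTL so every incoming message does not hit
    the settings store.
    """

    def __init__(self, settings_store, ttl_seconds=30.0):
        self._store = settings_store      # e.g. a dict or DB-backed settings table
        self._ttl = ttl_seconds
        self._cached = None               # None means "cache empty"
        self._cached_at = 0.0

    def is_enabled(self):
        now = time.monotonic()
        if self._cached is None or now - self._cached_at > self._ttl:
            # Missing flag counts as disabled: the safe, cheap default.
            self._cached = bool(self._store.get("llm_enabled", False))
            self._cached_at = now
        return self._cached

    def invalidate(self):
        """Call after an admin toggles the switch so the change takes effect immediately."""
        self._cached = None

    def respond(self, generate, fallback="AI responses are currently disabled."):
        # Graceful fallback response when the LLM is globally disabled.
        return generate() if self.is_enabled() else fallback
```

Keeping the cache invalidation explicit (rather than relying on TTL expiry alone) lets an admin's enable/disable action apply on the very next message.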
# Minimal requirements for admin interface only
discord.py>=2.3.2
pydantic>=2.5.0
sqlalchemy>=2.0.23
alembic>=1.13.1
pyyaml>=6.0.1
httpx>=0.25.2
python-dotenv>=1.0.0
aiosqlite>=0.19.0
loguru>=0.7.2

# Admin Interface essentials
fastapi>=0.104.1
uvicorn>=0.24.0
python-multipart>=0.0.6
pyjwt>=2.8.0
python-jose[cryptography]>=3.3.0
passlib[bcrypt]>=1.7.4
websockets>=12.0
psutil>=5.9.6
python-socketio>=5.10.0,<6.0.0
python-engineio>=4.7.0,<5.0.0

# Database driver
asyncpg>=0.29.0
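The dependency list above is a standard pip requirements file. Assuming it is saved as `requirements.txt` (the original filename is not shown), it can be installed into an isolated environment like so:

```shell
# Create an isolated virtual environment and install the pinned minimums.
python -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
```

Using `>=` lower bounds (with `<` upper bounds only where breaking changes are known, as for python-socketio and python-engineio) keeps the file minimal while still protecting against incompatible major releases.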