Implement comprehensive LLM provider system with global cost protection
- Add multi-provider LLM architecture supporting OpenRouter, OpenAI, Gemini, and custom providers
- Implement global LLM on/off switch with default DISABLED state for cost protection
- Add per-character LLM configuration with provider-specific models and settings
- Create performance-optimized caching system for LLM enabled status checks
- Add API key validation before enabling LLM providers to prevent broken configurations
- Implement audit logging for all LLM enable/disable actions for cost accountability
- Create comprehensive admin UI with prominent cost warnings and confirmation dialogs
- Add visual indicators in character list for custom AI model configurations
- Build character-specific LLM client system with global fallback mechanism
- Add database schema support for per-character LLM settings
- Implement graceful fallback responses when LLM is globally disabled
- Create provider testing and validation system for reliable connections
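The global on/off switch, cached status check, and graceful fallback described above could look roughly like the following sketch. All names here (`LLMGate`, the `LLM_GLOBALLY_ENABLED` variable, the fallback text) are assumptions for illustration, not the commit's actual identifiers.

```python
import os
import time


class LLMGate:
    """Hypothetical global LLM kill-switch with a short-lived status cache."""

    def __init__(self, ttl_seconds: float = 30.0):
        self._ttl = ttl_seconds
        self._cached_enabled = None  # None means "not read yet"
        self._cached_at = 0.0

    def _read_flag(self) -> bool:
        # Default DISABLED: LLM calls run only when explicitly opted in.
        return os.environ.get("LLM_GLOBALLY_ENABLED", "false").lower() == "true"

    def is_enabled(self) -> bool:
        # Cache the flag so hot paths don't re-read configuration every call.
        now = time.monotonic()
        if self._cached_enabled is None or now - self._cached_at > self._ttl:
            self._cached_enabled = self._read_flag()
            self._cached_at = now
        return self._cached_enabled

    def generate(self, prompt: str) -> str:
        if not self.is_enabled():
            # Graceful fallback response while the LLM is globally disabled.
            return "[LLM disabled] I can't generate a response right now."
        return self._call_provider(prompt)

    def _call_provider(self, prompt: str) -> str:
        raise NotImplementedError("wire up a real provider client here")


gate = LLMGate()
```

With the flag unset, `gate.generate(...)` returns the fallback string instead of ever reaching a paid provider, which is the cost-protection behavior the commit describes.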
@@ -18,13 +18,13 @@ LLM_MODEL=koboldcpp/Broken-Tutu-24B-Transgression-v2.0.i1-Q4_K_M
 LLM_TIMEOUT=300
 LLM_MAX_TOKENS=2000
 LLM_TEMPERATURE=0.8
-LLM_MAX_PROMPT_LENGTH=6000
+LLM_MAX_PROMPT_LENGTH=16000
 LLM_MAX_HISTORY_MESSAGES=5
 LLM_MAX_MEMORIES=5
 
 # Admin Interface
 ADMIN_PORT=8294
-SECRET_KEY=your-secret-key-here
+SECRET_KEY=stable-secret-key-for-jwt-tokens-fishbowl-2025
 ADMIN_USERNAME=admin
 ADMIN_PASSWORD=FIre!@34
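The LLM_* settings in the hunk above could be loaded with a small helper like this sketch. The function name and the dictionary keys are assumptions; the defaults mirror the values in the diff, including the raised LLM_MAX_PROMPT_LENGTH of 16000.

```python
import os


def load_llm_settings(env=None):
    """Parse the LLM_* environment settings (hypothetical helper)."""
    if env is None:
        env = os.environ
    return {
        "timeout": int(env.get("LLM_TIMEOUT", "300")),
        "max_tokens": int(env.get("LLM_MAX_TOKENS", "2000")),
        "temperature": float(env.get("LLM_TEMPERATURE", "0.8")),
        # Raised from 6000 to 16000 in this commit.
        "max_prompt_length": int(env.get("LLM_MAX_PROMPT_LENGTH", "16000")),
        "max_history_messages": int(env.get("LLM_MAX_HISTORY_MESSAGES", "5")),
        "max_memories": int(env.get("LLM_MAX_MEMORIES", "5")),
    }
```

Parsing each value through `int`/`float` fails fast on a malformed `.env` entry instead of passing a bad string down to the provider client.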