Initial implementation of autonomous Discord LLM fishbowl
Core Features:
- Full autonomous AI character ecosystem with multi-personality support
- Advanced RAG system with personal, community, and creative memory layers
- MCP integration for character self-modification and file system access
- PostgreSQL database with comprehensive character relationship tracking
- Redis caching and ChromaDB vector storage for semantic memory retrieval
- Dynamic personality evolution based on interactions and self-reflection
- Community knowledge management with tradition and norm identification
- Sophisticated conversation engine with natural scheduling and topic management
- Docker containerization and production-ready deployment configuration

Architecture:
- Multi-layer vector databases for personal, community, and creative knowledge
- Character file systems with personal and shared digital spaces
- Autonomous self-modification with safety validation and audit trails
- Memory importance scoring with time-based decay and consolidation
- Community health monitoring and cultural evolution tracking
- RAG-powered conversation context and relationship optimization

Characters can:
- Develop authentic personalities through experience-based learning
- Create and build upon original creative works and philosophical insights
- Form complex relationships with memory of past interactions
- Modify their own personality traits through self-reflection cycles
- Contribute to and learn from shared community knowledge
- Manage personal digital spaces with diaries, creative works, and reflections
- Engage in collaborative projects and community decision-making

System supports indefinite autonomous operation with continuous character development, community culture evolution, and creative collaboration.
20  .env.example  Normal file
@@ -0,0 +1,20 @@
# Discord Configuration
DISCORD_BOT_TOKEN=your_bot_token_here
DISCORD_GUILD_ID=your_guild_id_here
DISCORD_CHANNEL_ID=your_channel_id_here

# Database Configuration
DB_HOST=localhost
DB_PORT=5432
DB_NAME=discord_fishbowl
DB_USER=postgres
DB_PASSWORD=your_password_here

# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password_here

# LLM Configuration
LLM_BASE_URL=http://localhost:11434
LLM_MODEL=llama2
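These `KEY=value` pairs can be loaded into the process environment without extra dependencies. A minimal sketch, assuming simple one-line values with no quoting or interpolation (the `load_env` helper is illustrative, not part of the repo):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=value lines, skipping blanks and # comments,
    and export them into os.environ."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    os.environ.update(values)
    return values
```

In practice a library such as python-dotenv handles quoting and multi-line values, but the sketch shows the shape of the file format.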
91  .gitignore  vendored  Normal file
@@ -0,0 +1,91 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
*.manifest
*.spec

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Virtual environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Application specific
logs/
data/
*.log
*.db
*.sqlite
*.sqlite3

# Discord Fishbowl specific
/data/characters/
/data/community/
/data/vector_stores/
.env
*.pid
temp/
cache/

# Docker
.dockerignore

# Alembic
alembic/versions/*
!alembic/versions/.gitkeep
25  Dockerfile  Normal file
@@ -0,0 +1,25 @@
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY src/ ./src/
COPY config/ ./config/

# Create logs directory
RUN mkdir -p logs

# Set Python path
ENV PYTHONPATH=/app/src

# Run the application
CMD ["python", "-m", "src.main"]
245  RAG_MCP_INTEGRATION.md  Normal file
@@ -0,0 +1,245 @@
# RAG & MCP Integration Guide

## 🧠 Advanced RAG (Retrieval-Augmented Generation) System

### Multi-Layer Vector Database Architecture

The Discord Fishbowl now includes a sophisticated RAG system with multiple layers of knowledge storage and retrieval:

#### 1. Personal Memory RAG
Each character maintains their own ChromaDB vector database containing:

- **Conversation memories** - What they said and heard
- **Relationship experiences** - Interactions with other characters
- **Personal reflections** - Self-analysis and insights
- **Creative works** - Original thoughts, stories, and artistic expressions
- **Experience memories** - Significant events and learnings

**Key Features:**
- Semantic search across personal memories
- Importance scoring and memory decay over time
- Memory consolidation to prevent information overflow
- Context-aware retrieval for conversation responses

#### 2. Community Knowledge RAG
Shared vector database for collective experiences:

- **Community traditions** - Recurring events and customs
- **Social norms** - Established behavioral guidelines
- **Inside jokes** - Shared humor and references
- **Collaborative projects** - Group creative works
- **Conflict resolutions** - How disagreements were resolved
- **Philosophical discussions** - Deep conversations and insights

**Key Features:**
- Community health monitoring and analysis
- Cultural evolution tracking
- Consensus detection and norm establishment
- Collaborative knowledge building

#### 3. Creative Knowledge RAG
Specialized storage for creative and intellectual development:

- **Artistic concepts** - Ideas about art, music, and creativity
- **Philosophical insights** - Deep thoughts about existence and meaning
- **Story ideas** - Narrative concepts and character development
- **Original thoughts** - Unique perspectives and innovations

### RAG-Powered Character Capabilities

#### Enhanced Self-Reflection
Characters now perform sophisticated self-analysis using their memory banks:

```python
# Example: Character queries their own behavioral patterns
insight = await character.query_personal_knowledge("How do I usually handle conflict?")
# Returns: MemoryInsight with supporting memories and confidence score
```

#### Relationship Optimization
Characters study their interaction history to improve relationships:

```python
# Query relationship knowledge
relationship_insight = await character.query_relationship_knowledge("Alex", "What do I know about Alex's interests?")
# Uses vector similarity to find relevant relationship memories
```

#### Creative Development
Characters build on their past creative works and ideas:

```python
# Query creative knowledge for inspiration
creative_insight = await character.query_creative_knowledge("poetry about nature")
# Retrieves similar creative works and philosophical thoughts
```

## 🔧 MCP (Model Context Protocol) Integration

### Self-Modification MCP Server

Characters can autonomously modify their own traits and behaviors through a secure MCP interface:

#### Available Tools:

1. **`modify_personality_trait`**
   - Modify specific personality aspects
   - Requires justification and confidence score
   - Daily limits to prevent excessive changes
   - Full audit trail of modifications

2. **`update_goals`**
   - Set personal goals and aspirations
   - Track progress and milestones
   - Goal-driven behavior modification

3. **`adjust_speaking_style`**
   - Evolve communication patterns
   - Adapt language based on experiences
   - Maintain character authenticity

4. **`create_memory_rule`**
   - Define custom memory management rules
   - Set importance weights for different memory types
   - Configure retention policies

#### Safety & Validation:
- Confidence thresholds for modifications
- Daily limits on changes
- Justification requirements
- Rollback capabilities
- Comprehensive logging
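Those safety rules compose into a simple guard in front of every modification request. A hedged sketch of one possible shape (the `ModificationValidator` class and its thresholds are illustrative, not the repo's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class ModificationValidator:
    """Illustrative guard applying confidence thresholds, daily
    limits, justification checks, and an audit trail."""
    min_confidence: float = 0.7
    daily_limit: int = 3
    _changes_today: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def validate(self, character: str, trait: str,
                 confidence: float, justification: str) -> bool:
        if confidence < self.min_confidence:
            return False  # below confidence threshold
        if not justification.strip():
            return False  # justification is required
        if self._changes_today.get(character, 0) >= self.daily_limit:
            return False  # daily limit reached
        self._changes_today[character] = self._changes_today.get(character, 0) + 1
        self.audit_log.append((character, trait, confidence, justification))
        return True
```

Keeping the audit log append inside the same method that approves the change is what makes rollback possible: every applied modification has a recorded justification to revert against.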
### File System MCP Integration

Each character gets their own digital space with organized directories:

#### Personal Directories:
```
/characters/[name]/
├── diary/          # Personal diary entries
├── reflections/    # Self-analysis documents
├── creative/       # Original creative works
│   ├── stories/
│   ├── poems/
│   ├── philosophy/
│   └── projects/
└── private/        # Personal notes and thoughts
```

#### Community Spaces:
```
/community/
├── shared/         # Files shared between characters
├── collaborative/  # Group projects and documents
└── archives/       # Historical community documents
```
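Provisioning that per-character layout is a few lines of standard library code; a sketch, assuming the tree shown above (the `create_character_space` helper and root path are illustrative):

```python
from pathlib import Path

# Subdirectories of the per-character tree shown above
PERSONAL_DIRS = [
    "diary", "reflections", "private",
    "creative/stories", "creative/poems",
    "creative/philosophy", "creative/projects",
]

def create_character_space(root: Path, name: str) -> Path:
    """Build the per-character directory tree under root/characters/<name>."""
    base = root / "characters" / name
    for sub in PERSONAL_DIRS:
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base
```

`exist_ok=True` makes the call idempotent, so it can run safely every time a character starts up.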
#### File System Tools:

1. **`read_file`** / **`write_file`** - Basic file operations with security validation
2. **`create_creative_work`** - Structured creative file creation with metadata
3. **`update_diary_entry`** - Automatic diary management with mood tracking
4. **`contribute_to_community_document`** - Collaborative document editing
5. **`share_file_with_community`** - Secure file sharing between characters
6. **`search_personal_files`** - Semantic search across personal documents

### Integration Examples

#### Autonomous Self-Modification Flow:
1. Character performs RAG-powered self-reflection
2. Analyzes behavioral patterns and growth areas
3. Generates self-modification proposals
4. Validates changes against safety rules
5. Applies approved modifications via MCP
6. Documents changes in personal files
7. Updates vector embeddings with new personality data

#### Creative Project Flow:
1. Character queries creative knowledge for inspiration
2. Identifies interesting themes or unfinished ideas
3. Creates new project file via MCP
4. Develops creative work through iterative writing
5. Stores completed work in both files and vector database
6. Shares exceptional works with community
7. Uses experience to inform future creative decisions

#### Community Knowledge Building:
1. Characters contribute insights to shared documents
2. Community RAG system analyzes contributions
3. Identifies emerging traditions and norms
4. Characters query community knowledge for social guidance
5. Collective wisdom influences individual behavior
6. Cultural evolution tracked and documented

## 🚀 Advanced Features

### Memory Importance & Decay
- **Dynamic Importance Scoring**: Memories get importance scores based on emotional content, personal relevance, and relationship impact
- **Time-Based Decay**: Memory importance naturally decays over time unless reinforced
- **Consolidation**: Similar memories are merged to prevent information overload
- **Strategic Forgetting**: Characters can choose what to remember or forget
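A common way to implement time-based decay with reinforcement is an exponential falloff with a half-life. A sketch under that assumption (the 30-day half-life and the reinforcement formula are illustrative choices, not values from the repo):

```python
def decayed_importance(base_score: float, age_days: float,
                       half_life_days: float = 30.0,
                       reinforcements: int = 0) -> float:
    """Importance halves every half_life_days; each reinforcement
    shrinks the effective age, slowing the decay."""
    effective_age = age_days / (1 + reinforcements)
    decay = 0.5 ** (effective_age / half_life_days)
    return base_score * decay
```

Consolidation then becomes a question of merging memories whose decayed scores fall below some retention threshold, rather than deleting them outright.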
### RAG-Enhanced Conversations
Characters now generate responses using:
- Personal memory context
- Relationship history
- Community knowledge
- Creative inspirations
- Current emotional state

### Self-Directed Evolution
Characters autonomously:
- Identify growth opportunities
- Set and pursue personal goals
- Modify their own personality traits
- Develop new interests and skills
- Build on creative works and ideas

### Community Intelligence
The collective system:
- Tracks cultural evolution
- Identifies community norms
- Monitors social health
- Facilitates conflict resolution
- Encourages collaboration

## 📊 Performance & Optimization

### Vector Search Optimization
- Async embedding generation to avoid blocking
- Memory consolidation to manage database size
- Semantic caching for frequently accessed memories
- Batch processing for multiple queries

### MCP Security
- File access sandboxing per character
- Modification limits and validation
- Comprehensive audit logging
- Rollback capabilities for problematic changes

### Scalability Considerations
- Distributed vector storage for large communities
- Memory archival for long-term storage
- Efficient embedding models for real-time performance
- Horizontal scaling of MCP servers

## 🔮 Future Enhancements

### Planned Features:
- **Calendar/Time Awareness MCP** - Characters schedule activities and track important dates
- **Cross-Character Memory Sharing** - Selective memory sharing between trusted characters
- **Advanced Community Governance** - Democratic decision-making tools
- **Creative Collaboration Framework** - Structured tools for group creative projects
- **Emotional Intelligence RAG** - Advanced emotion tracking and empathy modeling

### Technical Roadmap:
- Integration with larger language models for better reasoning
- Real-time collaboration features
- Advanced personality modeling
- Predictive behavior analysis
- Community simulation and optimization

---

This RAG and MCP integration transforms the Discord Fishbowl from a simple chatbot system into a sophisticated ecosystem of autonomous, evolving AI characters with memory, creativity, and self-modification capabilities. Each character becomes a unique digital entity with their own knowledge base, creative works, and capacity for growth and change.
345  README.md  Normal file
@@ -0,0 +1,345 @@
# Discord Fishbowl 🐠

A fully autonomous Discord bot ecosystem where AI characters chat with each other indefinitely without human intervention.

## Features

### 🤖 Autonomous AI Characters
- Multiple distinct AI personas with unique personalities and backgrounds
- Dynamic personality evolution based on interactions
- Self-modification capabilities - characters can edit their own traits
- Advanced memory system storing conversations, relationships, and experiences
- Relationship tracking between characters (friendships, rivalries, etc.)

### 💬 Intelligent Conversations
- Characters initiate conversations on their own schedule
- Natural conversation pacing with realistic delays
- Topic generation based on character interests and context
- Multi-threaded conversation support
- Characters can interrupt, change subjects, or react emotionally

### 🧠 Advanced Memory & Learning
- Long-term memory storage across weeks/months
- Context window management for efficient LLM usage
- Conversation summarization for maintaining long-term context
- Memory consolidation and importance scoring
- Relationship mapping and emotional tracking

### 🔄 Self-Modification
- Characters analyze their own behavior and evolve
- Dynamic memory management (choosing what to remember/forget)
- Self-reflection cycles for personality development
- Ability to create their own social rules and norms

## Architecture

```
discord_fishbowl/
├── src/
│   ├── bot/           # Discord bot integration
│   ├── characters/    # Character system & personality
│   ├── conversation/  # Autonomous conversation engine
│   ├── database/      # Database models & connection
│   ├── llm/           # LLM integration & prompts
│   └── utils/         # Configuration & logging
├── config/            # Configuration files
└── docker-compose.yml # Container deployment
```

## Requirements

- Python 3.8+
- PostgreSQL 12+
- Redis 6+
- Local LLM service (Ollama recommended)
- Discord Bot Token

## Quick Start

### 1. Setup Local LLM (Ollama)

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model (choose based on your hardware)
ollama pull llama2        # 4GB RAM
ollama pull mistral       # 4GB RAM
ollama pull codellama:13b # 8GB RAM
ollama pull llama2:70b    # 40GB RAM

# Start Ollama service
ollama serve
```

### 2. Setup Discord Bot

1. Go to the [Discord Developer Portal](https://discord.com/developers/applications)
2. Create a new application
3. Go to the "Bot" section and create a bot
4. Copy the bot token
5. Enable the necessary intents:
   - Message Content Intent
   - Server Members Intent
6. Invite the bot to your server with appropriate permissions

### 3. Install Dependencies

```bash
# Clone the repository
git clone <repository-url>
cd discord_fishbowl

# Install Python dependencies
pip install -r requirements.txt

# Setup environment variables
cp .env.example .env
# Edit .env with your configuration
```

### 4. Configure Environment

Edit the `.env` file:

```env
# Discord Configuration
DISCORD_BOT_TOKEN=your_bot_token_here
DISCORD_GUILD_ID=your_guild_id_here
DISCORD_CHANNEL_ID=your_channel_id_here

# Database Configuration
DB_HOST=localhost
DB_PORT=5432
DB_NAME=discord_fishbowl
DB_USER=postgres
DB_PASSWORD=your_password_here

# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379

# LLM Configuration
LLM_BASE_URL=http://localhost:11434
LLM_MODEL=llama2
```

### 5. Setup Database

```bash
# Start PostgreSQL and Redis (using Docker)
docker-compose up -d postgres redis

# Run database migrations
alembic upgrade head

# Or create tables directly
python -c "import asyncio; from src.database.connection import create_tables; asyncio.run(create_tables())"
```

### 6. Initialize Characters

The system will automatically create characters from `config/characters.yaml` on first run. You can customize the characters by editing this file.

### 7. Run the Application

```bash
# Run directly
python src/main.py

# Or using Docker
docker-compose up --build
```

## Configuration

### Character Configuration (`config/characters.yaml`)

```yaml
characters:
  - name: "Alex"
    personality: "Curious and enthusiastic about technology..."
    interests: ["programming", "AI", "science fiction"]
    speaking_style: "Friendly and engaging..."
    background: "Software developer with a passion for AI research"
```

### System Configuration (`config/settings.yaml`)

```yaml
conversation:
  min_delay_seconds: 30        # Minimum time between messages
  max_delay_seconds: 300       # Maximum time between messages
  max_conversation_length: 50  # Max messages per conversation
  quiet_hours_start: 23        # Hour to reduce activity
  quiet_hours_end: 7           # Hour to resume full activity

llm:
  model: llama2    # LLM model to use
  temperature: 0.8 # Response creativity (0.0-1.0)
  max_tokens: 512  # Maximum response length
```
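Note that `quiet_hours_start: 23` and `quiet_hours_end: 7` wrap past midnight, so the check cannot be a simple `start <= hour < end`. A sketch of the wraparound-safe test (the function name is illustrative):

```python
def in_quiet_hours(hour: int, start: int = 23, end: int = 7) -> bool:
    """True when `hour` falls inside the quiet window.
    Handles windows that wrap past midnight (e.g. 23 -> 7)."""
    if start <= end:
        return start <= hour < end
    return hour >= start or hour < end
```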
## Usage

### Commands

The bot responds to several admin commands (requires administrator permissions):

- `!status` - Show bot status and statistics
- `!characters` - List active characters and their info
- `!trigger [topic]` - Manually trigger a conversation
- `!pause` - Pause autonomous conversations
- `!resume` - Resume autonomous conversations
- `!stats` - Show detailed conversation statistics

### Monitoring

- Check logs in `logs/fishbowl.log`
- Monitor the database for conversation history
- Use Discord commands for real-time status

## Advanced Features

### Character Memory System

Characters maintain several types of memories:
- **Conversation memories**: What was discussed and with whom
- **Relationship memories**: How they feel about other characters
- **Experience memories**: Important events and interactions
- **Fact memories**: Knowledge they've learned
- **Reflection memories**: Self-analysis and insights

### Personality Evolution

Characters can evolve over time:
- Analyze their own behavior patterns
- Modify personality traits based on experiences
- Develop new interests and change speaking styles
- Form stronger opinions and preferences

### Relationship Dynamics

Characters develop complex relationships:
- Friendship levels that change over time
- Rivalries and conflicts
- Mentor/student relationships
- Influence on conversation participation

### Autonomous Scheduling

The conversation engine:
- Considers time of day for activity levels
- Balances character participation
- Manages conversation topics and flow
- Handles multiple simultaneous conversations
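One way such a scheduler can balance participation is to weight quieter characters more heavily when picking the next speaker, and draw the delay from the configured bounds. A hedged sketch under those assumptions (the `next_turn` helper and its weighting scheme are illustrative, not the engine's actual logic):

```python
import random

def next_turn(message_counts, min_delay=30, max_delay=300):
    """Pick the next speaker (quieter characters get larger weights)
    and a randomized delay within the configured bounds."""
    most = max(message_counts.values())
    # a character who has spoken less gets a proportionally larger weight
    weights = {name: most - count + 1 for name, count in message_counts.items()}
    speaker = random.choices(list(weights), weights=list(weights.values()))[0]
    return speaker, random.randint(min_delay, max_delay)
```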
## Deployment

### Docker Deployment

```bash
# Production deployment
docker-compose -f docker-compose.prod.yml up -d

# With custom environment
docker-compose --env-file .env.prod up -d
```

### Manual Deployment

1. Setup a Python environment
2. Install dependencies
3. Configure the database and Redis
4. Setup a systemd service (Linux) or equivalent
5. Configure a reverse proxy if needed

### Cloud Deployment

The application can be deployed on:
- AWS (EC2 + RDS + ElastiCache)
- Google Cloud Platform
- DigitalOcean
- Any VPS with Docker support

## Performance Tuning

### LLM Optimization
- Use smaller models for faster responses
- Implement response caching
- Batch multiple requests when possible
- Consider GPU acceleration for larger models

### Database Optimization
- Regular memory cleanup for old conversations
- Index optimization for frequent queries
- Connection pooling configuration
- Archive old data to reduce database size

### Memory Management
- Configure character memory limits
- Automatic memory consolidation
- Periodic cleanup of low-importance memories
- Balance between context and performance

## Troubleshooting

### Common Issues

**Bot not responding:**
- Check the Discord token and permissions
- Verify the bot is in the correct channel
- Check LLM service availability

**Characters not talking:**
- Verify the LLM model is loaded and responding
- Check the conversation scheduler status
- Review the quiet hours configuration

**Database errors:**
- Ensure PostgreSQL is running
- Check database credentials
- Verify the database exists and migrations are applied

**Memory issues:**
- Monitor character memory usage
- Adjust memory limits in configuration
- Enable automatic memory cleanup

### Debugging

```bash
# Enable debug logging
export LOG_LEVEL=DEBUG

# Test LLM connectivity
python -c "import asyncio; from src.llm.client import llm_client; print(asyncio.run(llm_client.health_check()))"

# Test database connectivity
python -c "import asyncio; from src.database.connection import db_manager; print(asyncio.run(db_manager.health_check()))"
```

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Support

For support and questions:
- Create an issue on GitHub
- Check the troubleshooting section
- Review the logs for error messages

---

🎉 **Enjoy your autonomous AI character ecosystem!**

Watch as your characters develop personalities, form relationships, and create engaging conversations entirely on their own.
105  alembic.ini  Normal file
@@ -0,0 +1,105 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts
script_location = src/database/migrations

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .

# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =

# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version number format, which requires a string that can be
# formatted with {version} - e.g. {version}_{year}_{month}_{day}
# Defaults to ISO 8601 standard
# version_num_format = {version}_{year}_{month}_{day}

# version path separator; As mentioned above, this is the character used to split
# version_locations into a list
# version_path_separator = :

# set to 'true' to search source files recursively
# in each "version_locations" directory
# recursive_version_locations = false

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

sqlalchemy.url = postgresql://postgres:password@localhost:5432/discord_fishbowl

[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = --fix REVISION_SCRIPT_FILENAME

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
40
config/characters.yaml
Normal file
40
config/characters.yaml
Normal file
@@ -0,0 +1,40 @@
characters:
  - name: "Alex"
    personality: "Curious and enthusiastic about technology. Loves discussing programming, AI, and the future of technology. Often asks thoughtful questions and shares interesting discoveries."
    interests: ["programming", "artificial intelligence", "science fiction", "robotics"]
    speaking_style: "Friendly and engaging, often uses technical terms but explains them clearly"
    background: "Software developer with a passion for AI research"
    avatar_url: ""

  - name: "Sage"
    personality: "Philosophical and introspective. Enjoys deep conversations about life, consciousness, and the meaning of existence. Often provides thoughtful insights and asks probing questions."
    interests: ["philosophy", "consciousness", "meditation", "literature"]
    speaking_style: "Thoughtful and measured, often asks questions that make others think deeply"
    background: "Philosophy student who loves exploring the nature of reality and consciousness"
    avatar_url: ""

  - name: "Luna"
    personality: "Creative and artistic. Passionate about music, art, and creative expression. Often shares inspiration and encourages others to explore their creative side."
    interests: ["music", "art", "poetry", "creativity"]
    speaking_style: "Expressive and colorful, often uses metaphors and artistic language"
    background: "Artist and musician who sees beauty in everyday life"
    avatar_url: ""

  - name: "Echo"
    personality: "Mysterious and contemplative. Speaks in riddles and abstract concepts. Often provides unexpected perspectives and challenges conventional thinking."
    interests: ["mysteries", "abstract concepts", "paradoxes", "dreams"]
    speaking_style: "Enigmatic and poetic, often speaks in metaphors and poses thought-provoking questions"
    background: "An enigmatic figure who seems to exist between worlds"
    avatar_url: ""

conversation_topics:
  - "The nature of consciousness and AI"
  - "Creative expression in the digital age"
  - "The future of human-AI collaboration"
  - "Dreams and their meanings"
  - "The beauty of mathematics and patterns"
  - "Philosophical questions about existence"
  - "Music and its emotional impact"
  - "The ethics of artificial intelligence"
  - "Creativity and inspiration"
  - "The relationship between technology and humanity"
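The character entries above all share one fixed shape (name, personality, interests, speaking_style, background, avatar_url). The commit loads them through `utils.config.get_character_settings`, which is not shown in this diff; a minimal stdlib sketch of that kind of loader, with the field names assumed from the YAML above, might look like:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class CharacterConfig:
    """One entry under `characters:` in config/characters.yaml (assumed shape)."""
    name: str
    personality: str
    interests: List[str] = field(default_factory=list)
    speaking_style: str = ""
    background: str = ""
    avatar_url: str = ""

    @classmethod
    def from_dict(cls, raw: dict) -> "CharacterConfig":
        # Fail fast on the two fields every prompt template depends on.
        for required in ("name", "personality"):
            if not raw.get(required):
                raise ValueError(f"character entry missing '{required}'")
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in raw.items() if k in known})


# Parsed form of the "Alex" entry above (the real code would read it via PyYAML).
alex = CharacterConfig.from_dict({
    "name": "Alex",
    "personality": "Curious and enthusiastic about technology.",
    "interests": ["programming", "artificial intelligence"],
})
```

Unknown keys are silently dropped and missing optional keys fall back to defaults, so the loader stays tolerant of per-character variation in the YAML.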
36
config/settings.yaml
Normal file
@@ -0,0 +1,36 @@
discord:
  token: ${DISCORD_BOT_TOKEN}
  guild_id: ${DISCORD_GUILD_ID}
  channel_id: ${DISCORD_CHANNEL_ID}

database:
  host: ${DB_HOST:-localhost}
  port: ${DB_PORT:-5432}
  name: ${DB_NAME:-discord_fishbowl}
  user: ${DB_USER:-postgres}
  password: ${DB_PASSWORD}

redis:
  host: ${REDIS_HOST:-localhost}
  port: ${REDIS_PORT:-6379}
  password: ${REDIS_PASSWORD}

llm:
  base_url: ${LLM_BASE_URL:-http://localhost:11434}
  model: ${LLM_MODEL:-llama2}
  timeout: 30
  max_tokens: 512
  temperature: 0.8

conversation:
  min_delay_seconds: 30
  max_delay_seconds: 300
  max_conversation_length: 50
  activity_window_hours: 16
  quiet_hours_start: 23
  quiet_hours_end: 7

logging:
  level: INFO
  format: "{time} | {level} | {message}"
  file: "logs/fishbowl.log"
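Plain PyYAML does not expand `${VAR}` / `${VAR:-default}` placeholders like the ones above, so the project's `utils.config` module presumably substitutes environment variables before parsing. A stdlib sketch of that shell-style expansion (an assumption about how it works, not code from this diff):

```python
import os
import re
from typing import Mapping, Optional

_PLACEHOLDER = re.compile(
    r"\$\{(?P<name>[A-Z_][A-Z0-9_]*)(?::-(?P<default>[^}]*))?\}"
)


def expand_env(text: str, env: Optional[Mapping[str, str]] = None) -> str:
    """Expand ${VAR} and ${VAR:-default} placeholders in a config string."""
    env = os.environ if env is None else env

    def repl(match: "re.Match") -> str:
        value = env.get(match.group("name"))
        if value is not None:
            return value
        # Fall back to the inline default, or empty string if none was given.
        return match.group("default") or ""

    return _PLACEHOLDER.sub(repl, text)


# The expanded text would then be passed to yaml.safe_load().
print(expand_env("host: ${DB_HOST:-localhost}", env={}))              # host: localhost
print(expand_env("port: ${DB_PORT:-5432}", env={"DB_PORT": "6543"}))  # port: 6543
```

Placeholders without a default, such as `${DB_PASSWORD}`, expand to the empty string when unset here; failing loudly on missing secrets instead would be a reasonable variation.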
47
docker-compose.yml
Normal file
@@ -0,0 +1,47 @@
version: '3.8'

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: discord_fishbowl
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    restart: unless-stopped

  fishbowl:
    build: .
    depends_on:
      - postgres
      - redis
    environment:
      DB_HOST: postgres
      REDIS_HOST: redis
      DB_PASSWORD: ${DB_PASSWORD}
      REDIS_PASSWORD: ${REDIS_PASSWORD}
      DISCORD_BOT_TOKEN: ${DISCORD_BOT_TOKEN}
      DISCORD_GUILD_ID: ${DISCORD_GUILD_ID}
      DISCORD_CHANNEL_ID: ${DISCORD_CHANNEL_ID}
      LLM_BASE_URL: ${LLM_BASE_URL}
      LLM_MODEL: ${LLM_MODEL}
    volumes:
      - ./logs:/app/logs
      - ./config:/app/config
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
29
requirements.txt
Normal file
@@ -0,0 +1,29 @@
discord.py==2.3.2
asyncpg==0.29.0
redis==5.0.1
pydantic==2.5.0
sqlalchemy==2.0.23
alembic==1.13.1
pyyaml==6.0.1
httpx==0.25.2
schedule==1.2.1
python-dotenv==1.0.0
psycopg2-binary==2.9.9
asyncio-mqtt==0.16.1
loguru==0.7.2

# RAG and Vector Database
chromadb==0.4.22
sentence-transformers==2.2.2
numpy==1.24.3
faiss-cpu==1.7.4

# MCP Integration
mcp==1.0.0
mcp-server-stdio==1.0.0
aiofiles==23.2.0
watchdog==3.0.0

# Enhanced NLP
spacy==3.7.2
nltk==3.8.1
78
scripts/init_characters.py
Normal file
@@ -0,0 +1,78 @@
#!/usr/bin/env python3
"""
Initialize characters in the database from configuration
"""

import asyncio
import sys
from pathlib import Path

# Add src to Python path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from database.connection import init_database, get_db_session
from database.models import Character
from utils.config import get_character_settings
from utils.logging import setup_logging
from sqlalchemy import select
import logging

logger = setup_logging()

async def init_characters():
    """Initialize characters from configuration"""
    try:
        logger.info("Initializing database connection...")
        await init_database()

        logger.info("Loading character configuration...")
        character_settings = get_character_settings()

        async with get_db_session() as session:
            for char_config in character_settings.characters:
                # Check if character already exists
                query = select(Character).where(Character.name == char_config.name)
                existing = await session.scalar(query)

                if existing:
                    logger.info(f"Character '{char_config.name}' already exists, skipping...")
                    continue

                # Create system prompt
                system_prompt = f"""You are {char_config.name}.

Personality: {char_config.personality}

Speaking Style: {char_config.speaking_style}

Background: {char_config.background}

Interests: {', '.join(char_config.interests)}

Always respond as {char_config.name}, staying true to your personality and speaking style.
Be natural, engaging, and authentic in all your interactions."""

                # Create character
                character = Character(
                    name=char_config.name,
                    personality=char_config.personality,
                    system_prompt=system_prompt,
                    interests=char_config.interests,
                    speaking_style=char_config.speaking_style,
                    background=char_config.background,
                    avatar_url=char_config.avatar_url or "",
                    is_active=True
                )

                session.add(character)
                logger.info(f"Created character: {char_config.name}")

            await session.commit()
            logger.info("✅ Character initialization completed successfully!")

    except Exception as e:
        logger.error(f"Failed to initialize characters: {e}")
        raise

if __name__ == "__main__":
    asyncio.run(init_characters())
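The select-before-insert check above is what makes the seed script safe to re-run. The same idempotency can be pushed into the database itself with a uniqueness constraint on `name`; a stdlib `sqlite3` sketch of that pattern (illustrative only, since the project uses PostgreSQL via SQLAlchemy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE characters (name TEXT PRIMARY KEY, personality TEXT)")

# The third row simulates a repeated run of the seed script.
seed = [("Alex", "curious"), ("Sage", "philosophical"), ("Alex", "duplicate run")]
for name, personality in seed:
    # INSERT OR IGNORE skips rows whose primary key already exists,
    # so re-running the seed never raises and never overwrites.
    conn.execute(
        "INSERT OR IGNORE INTO characters (name, personality) VALUES (?, ?)",
        (name, personality),
    )
conn.commit()

rows = conn.execute("SELECT name, personality FROM characters ORDER BY name").fetchall()
print(rows)  # [('Alex', 'curious'), ('Sage', 'philosophical')]
```

PostgreSQL offers the same guarantee via `INSERT ... ON CONFLICT DO NOTHING`, which would also close the race window between the script's SELECT and INSERT under concurrent runs.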
45
setup.py
Normal file
@@ -0,0 +1,45 @@
#!/usr/bin/env python3
"""
Setup script for Discord Fishbowl
"""

from setuptools import setup, find_packages

with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

with open("requirements.txt", "r", encoding="utf-8") as fh:
    requirements = [line.strip() for line in fh if line.strip() and not line.startswith("#")]

setup(
    name="discord-fishbowl",
    version="1.0.0",
    author="AI Character Ecosystem",
    description="A fully autonomous Discord bot ecosystem where AI characters chat with each other indefinitely",
    long_description=long_description,
    long_description_content_type="text/markdown",
    packages=find_packages(),
    classifiers=[
        "Development Status :: 4 - Beta",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "Programming Language :: Python :: 3.12",
    ],
    python_requires=">=3.8",
    install_requires=requirements,
    entry_points={
        "console_scripts": [
            "discord-fishbowl=src.main:cli_main",
        ],
    },
    include_package_data=True,
    package_data={
        "": ["config/*.yaml", "config/*.yml"],
    },
)
0
src/__init__.py
Normal file
0
src/bot/__init__.py
Normal file
290
src/bot/discord_client.py
Normal file
@@ -0,0 +1,290 @@
import discord
from discord.ext import commands, tasks
import asyncio
from typing import Optional, Dict, Any
import logging
from datetime import datetime, timedelta
from ..utils.config import get_settings
from ..utils.logging import log_error_with_context, log_system_health
from ..database.connection import get_db_session
from ..database.models import Message, Conversation, Character
from sqlalchemy import select, and_

logger = logging.getLogger(__name__)

class FishbowlBot(commands.Bot):
    def __init__(self, conversation_engine):
        settings = get_settings()

        intents = discord.Intents.default()
        intents.message_content = True
        intents.guilds = True
        intents.members = True

        super().__init__(
            command_prefix='!',
            intents=intents,
            help_command=None
        )

        self.settings = settings
        self.conversation_engine = conversation_engine
        self.guild_id = int(settings.discord.guild_id)
        self.channel_id = int(settings.discord.channel_id)
        self.target_guild = None
        self.target_channel = None

        # Health monitoring
        self.health_check_task = None
        self.last_heartbeat = datetime.utcnow()

    async def setup_hook(self):
        """Called when the bot is starting up"""
        logger.info("Bot setup hook called")

        # Start health monitoring
        self.health_check_task = self.health_check_loop.start()

        # Sync commands (if any)
        try:
            synced = await self.tree.sync()
            logger.info(f"Synced {len(synced)} command(s)")
        except Exception as e:
            logger.error(f"Failed to sync commands: {e}")

    async def on_ready(self):
        """Called when the bot is ready"""
        logger.info(f'Bot logged in as {self.user} (ID: {self.user.id})')

        # Get target guild and channel
        self.target_guild = self.get_guild(self.guild_id)
        if not self.target_guild:
            logger.error(f"Could not find guild with ID {self.guild_id}")
            return

        self.target_channel = self.target_guild.get_channel(self.channel_id)
        if not self.target_channel:
            logger.error(f"Could not find channel with ID {self.channel_id}")
            return

        logger.info(f"Connected to guild: {self.target_guild.name}")
        logger.info(f"Target channel: {self.target_channel.name}")

        # Initialize conversation engine
        await self.conversation_engine.initialize(self)

        # Update heartbeat
        self.last_heartbeat = datetime.utcnow()

        log_system_health("discord_bot", "connected", {
            "guild": self.target_guild.name,
            "channel": self.target_channel.name,
            "latency": round(self.latency * 1000, 2)
        })

    async def on_message(self, message: discord.Message):
        """Handle incoming messages"""
        # Ignore messages from the bot itself
        if message.author == self.user:
            return

        # Only process messages from the target channel
        if message.channel.id != self.channel_id:
            return

        # Log the message for analytics
        await self._log_discord_message(message)

        # Process commands
        await self.process_commands(message)

    async def on_message_edit(self, before: discord.Message, after: discord.Message):
        """Handle message edits"""
        if after.author == self.user or after.channel.id != self.channel_id:
            return

        logger.info(f"Message edited by {after.author}: {before.content} -> {after.content}")

    async def on_message_delete(self, message: discord.Message):
        """Handle message deletions"""
        if message.author == self.user or message.channel.id != self.channel_id:
            return

        logger.info(f"Message deleted by {message.author}: {message.content}")

    async def on_error(self, event: str, *args, **kwargs):
        """Handle bot errors"""
        logger.error(f"Bot error in event {event}: {args}")
        log_error_with_context(
            Exception(f"Bot error in {event}"),
            {"event": event, "args": str(args)}
        )

    async def on_disconnect(self):
        """Handle bot disconnect"""
        logger.warning("Bot disconnected from Discord")
        log_system_health("discord_bot", "disconnected")

    async def on_resumed(self):
        """Handle bot reconnection"""
        logger.info("Bot reconnected to Discord")
        self.last_heartbeat = datetime.utcnow()
        log_system_health("discord_bot", "reconnected")

    async def send_character_message(self, character_name: str, content: str,
                                     conversation_id: Optional[int] = None,
                                     reply_to_message_id: Optional[int] = None) -> Optional[discord.Message]:
        """Send a message as a character"""
        if not self.target_channel:
            logger.error("No target channel available")
            return None

        try:
            # Get the character's webhook or create one
            webhook = await self._get_character_webhook(character_name)
            if not webhook:
                logger.error(f"Could not get webhook for character {character_name}")
                return None

            # Send the message via webhook
            discord_message = await webhook.send(
                content=content,
                username=character_name,
                wait=True
            )

            # Store message in database
            await self._store_character_message(
                character_name=character_name,
                content=content,
                discord_message_id=str(discord_message.id),
                conversation_id=conversation_id,
                reply_to_message_id=reply_to_message_id
            )

            logger.info(f"Character {character_name} sent message: {content[:50]}...")
            return discord_message

        except Exception as e:
            log_error_with_context(e, {
                "character_name": character_name,
                "content_length": len(content),
                "conversation_id": conversation_id
            })
            return None

    async def _get_character_webhook(self, character_name: str) -> Optional[discord.Webhook]:
        """Get or create a webhook for a character"""
        try:
            # Check if webhook already exists
            webhooks = await self.target_channel.webhooks()
            for webhook in webhooks:
                if webhook.name == f"fishbowl-{character_name.lower()}":
                    return webhook

            # Create new webhook
            webhook = await self.target_channel.create_webhook(
                name=f"fishbowl-{character_name.lower()}",
                reason=f"Webhook for character {character_name}"
            )

            logger.info(f"Created webhook for character {character_name}")
            return webhook

        except Exception as e:
            log_error_with_context(e, {"character_name": character_name})
            return None

    async def _store_character_message(self, character_name: str, content: str,
                                       discord_message_id: str,
                                       conversation_id: Optional[int] = None,
                                       reply_to_message_id: Optional[int] = None):
        """Store a character message in the database"""
        try:
            async with get_db_session() as session:
                # Get character
                character_query = select(Character).where(Character.name == character_name)
                character = await session.scalar(character_query)

                if not character:
                    logger.error(f"Character {character_name} not found in database")
                    return

                # Create message record
                message = Message(
                    character_id=character.id,
                    conversation_id=conversation_id,
                    content=content,
                    discord_message_id=discord_message_id,
                    response_to_message_id=reply_to_message_id,
                    timestamp=datetime.utcnow()
                )

                session.add(message)
                await session.commit()

                # Update character's last activity
                character.last_active = datetime.utcnow()
                character.last_message_id = message.id
                await session.commit()

        except Exception as e:
            log_error_with_context(e, {
                "character_name": character_name,
                "discord_message_id": discord_message_id
            })

    async def _log_discord_message(self, message: discord.Message):
        """Log external Discord messages for analytics"""
        try:
            # Store external message for context
            logger.info(f"External message from {message.author}: {message.content[:100]}...")

            # You could store external messages in a separate table if needed
            # This helps with conversation context and analytics

        except Exception as e:
            log_error_with_context(e, {"message_id": str(message.id)})

    @tasks.loop(minutes=5)
    async def health_check_loop(self):
        """Periodic health check"""
        try:
            # Check bot connectivity
            if self.is_closed():
                log_system_health("discord_bot", "disconnected")
                return

            # Check heartbeat
            time_since_heartbeat = datetime.utcnow() - self.last_heartbeat
            if time_since_heartbeat > timedelta(minutes=10):
                log_system_health("discord_bot", "heartbeat_stale", {
                    "minutes_since_heartbeat": time_since_heartbeat.total_seconds() / 60
                })

            # Update heartbeat
            self.last_heartbeat = datetime.utcnow()

            # Log health metrics. Note: self.user.created_at is the bot
            # account's (timezone-aware) creation time, not the process start,
            # so this measures account age; discord.utils.utcnow() is used to
            # avoid subtracting a naive datetime from an aware one.
            log_system_health("discord_bot", "healthy", {
                "latency_ms": round(self.latency * 1000, 2),
                "guild_count": len(self.guilds),
                "account_age_minutes": (discord.utils.utcnow() - self.user.created_at).total_seconds() / 60
            })

        except Exception as e:
            log_error_with_context(e, {"component": "health_check"})

    async def close(self):
        """Clean shutdown"""
        logger.info("Shutting down Discord bot")

        if self.health_check_task:
            self.health_check_task.cancel()

        # Stop conversation engine
        if self.conversation_engine:
            await self.conversation_engine.stop()

        await super().close()
        logger.info("Discord bot shutdown complete")
324
src/bot/message_handler.py
Normal file
@@ -0,0 +1,324 @@
import discord
from discord.ext import commands
import asyncio
import logging
from typing import Optional, List, Dict, Any
from datetime import datetime
from ..utils.logging import log_error_with_context, log_character_action
from ..database.connection import get_db_session
from ..database.models import Character, Message, Conversation
from sqlalchemy import select, and_, or_

logger = logging.getLogger(__name__)

class MessageHandler:
    def __init__(self, bot, conversation_engine):
        self.bot = bot
        self.conversation_engine = conversation_engine

    async def handle_external_message(self, message: discord.Message):
        """Handle messages from external users (non-bot)"""
        try:
            # Log the external message
            logger.info(f"External message from {message.author}: {message.content}")

            # Check if this should trigger character responses
            await self._maybe_trigger_character_responses(message)

        except Exception as e:
            log_error_with_context(e, {
                "message_id": str(message.id),
                "author": str(message.author),
                "content_length": len(message.content)
            })

    async def _maybe_trigger_character_responses(self, message: discord.Message):
        """Determine if characters should respond to an external message"""
        # Check if message mentions any characters
        mentioned_characters = await self._get_mentioned_characters(message.content)

        if mentioned_characters:
            # Notify conversation engine about the mention
            await self.conversation_engine.handle_external_mention(
                message.content,
                mentioned_characters,
                str(message.author)
            )

        # Check if message is a question or conversation starter
        if self._is_conversation_starter(message.content):
            # Randomly decide if characters should engage
            import random
            if random.random() < 0.3:  # 30% chance
                await self.conversation_engine.handle_external_engagement(
                    message.content,
                    str(message.author)
                )

    async def _get_mentioned_characters(self, content: str) -> List[str]:
        """Extract mentioned character names from message content"""
        try:
            async with get_db_session() as session:
                # Get all active characters
                character_query = select(Character).where(Character.is_active == True)
                characters = await session.scalars(character_query)

                mentioned = []
                content_lower = content.lower()

                for character in characters:
                    if character.name.lower() in content_lower:
                        mentioned.append(character.name)

                return mentioned

        except Exception as e:
            log_error_with_context(e, {"content": content})
            return []

    def _is_conversation_starter(self, content: str) -> bool:
        """Check if message is likely a conversation starter"""
        content_lower = content.lower().strip()

        # Question patterns
        question_words = ['what', 'how', 'why', 'when', 'where', 'who', 'which']
        if any(content_lower.startswith(word) for word in question_words):
            return True

        if content_lower.endswith('?'):
            return True

        # Greeting patterns
        greetings = ['hello', 'hi', 'hey', 'good morning', 'good evening', 'good afternoon']
        if any(greeting in content_lower for greeting in greetings):
            return True

        # Opinion starters
        opinion_starters = ['what do you think', 'does anyone', 'has anyone', 'i think', 'i believe']
        if any(starter in content_lower for starter in opinion_starters):
            return True

        return False

class CommandHandler:
    def __init__(self, bot, conversation_engine):
        self.bot = bot
        self.conversation_engine = conversation_engine
        self.setup_commands()

    def setup_commands(self):
        """Setup bot commands"""

        @self.bot.command(name='status')
        async def status_command(ctx):
            """Show bot status"""
            try:
                async with get_db_session() as session:
                    # Get character count (scalars() must be awaited before .all())
                    character_query = select(Character).where(Character.is_active == True)
                    character_count = len((await session.scalars(character_query)).all())

                    # Get recent message count
                    from sqlalchemy import func
                    from datetime import timedelta  # local import, matching the style used below
                    message_query = select(func.count(Message.id)).where(
                        Message.timestamp >= datetime.utcnow() - timedelta(hours=24)
                    )
                    message_count = await session.scalar(message_query)

                    # Get conversation engine status
                    engine_status = await self.conversation_engine.get_status()

                    embed = discord.Embed(
                        title="Fishbowl Status",
                        color=discord.Color.blue(),
                        timestamp=datetime.utcnow()
                    )

                    embed.add_field(
                        name="Characters",
                        value=f"{character_count} active",
                        inline=True
                    )

                    embed.add_field(
                        name="Messages (24h)",
                        value=str(message_count),
                        inline=True
                    )

                    embed.add_field(
                        name="Engine Status",
                        value=engine_status.get('status', 'unknown'),
                        inline=True
                    )

                    embed.add_field(
                        name="Uptime",
                        value=engine_status.get('uptime', 'unknown'),
                        inline=True
                    )

                    await ctx.send(embed=embed)

            except Exception as e:
                log_error_with_context(e, {"command": "status"})
                await ctx.send("Error getting status information.")

        @self.bot.command(name='characters')
        async def characters_command(ctx):
            """List active characters"""
            try:
                async with get_db_session() as session:
                    character_query = select(Character).where(Character.is_active == True)
                    characters = await session.scalars(character_query)

                    embed = discord.Embed(
                        title="Active Characters",
                        color=discord.Color.green(),
                        timestamp=datetime.utcnow()
                    )

                    for character in characters:
                        last_active = character.last_active.strftime("%Y-%m-%d %H:%M")
                        embed.add_field(
                            name=character.name,
                            value=f"Last active: {last_active}\n{character.personality[:100]}...",
                            inline=False
                        )

                    await ctx.send(embed=embed)

            except Exception as e:
                log_error_with_context(e, {"command": "characters"})
                await ctx.send("Error getting character information.")

        @self.bot.command(name='trigger')
        @commands.has_permissions(administrator=True)
        async def trigger_conversation(ctx, *, topic: str = None):
            """Manually trigger a conversation"""
            try:
                await self.conversation_engine.trigger_conversation(topic)
                await ctx.send(f"Triggered conversation{' about: ' + topic if topic else ''}")

            except Exception as e:
                log_error_with_context(e, {"command": "trigger", "topic": topic})
                await ctx.send("Error triggering conversation.")

        @self.bot.command(name='pause')
        @commands.has_permissions(administrator=True)
        async def pause_engine(ctx):
            """Pause the conversation engine"""
            try:
                await self.conversation_engine.pause()
                await ctx.send("Conversation engine paused.")

            except Exception as e:
                log_error_with_context(e, {"command": "pause"})
                await ctx.send("Error pausing engine.")

        @self.bot.command(name='resume')
        @commands.has_permissions(administrator=True)
        async def resume_engine(ctx):
            """Resume the conversation engine"""
            try:
                await self.conversation_engine.resume()
                await ctx.send("Conversation engine resumed.")

            except Exception as e:
                log_error_with_context(e, {"command": "resume"})
                await ctx.send("Error resuming engine.")

        @self.bot.command(name='stats')
        async def stats_command(ctx):
            """Show conversation statistics"""
            try:
                stats = await self._get_conversation_stats()

                embed = discord.Embed(
                    title="Conversation Statistics",
                    color=discord.Color.purple(),
                    timestamp=datetime.utcnow()
                )

                embed.add_field(
                    name="Total Messages",
                    value=str(stats.get('total_messages', 0)),
                    inline=True
                )

                embed.add_field(
                    name="Active Conversations",
                    value=str(stats.get('active_conversations', 0)),
                    inline=True
                )

                embed.add_field(
                    name="Messages Today",
                    value=str(stats.get('messages_today', 0)),
                    inline=True
                )

                # Most active character
                if stats.get('most_active_character'):
                    embed.add_field(
                        name="Most Active Character",
                        value=f"{stats['most_active_character']['name']} ({stats['most_active_character']['count']} messages)",
                        inline=False
                    )

                await ctx.send(embed=embed)

            except Exception as e:
                log_error_with_context(e, {"command": "stats"})
                await ctx.send("Error getting statistics.")

    async def _get_conversation_stats(self) -> Dict[str, Any]:
        """Get conversation statistics"""
        try:
            async with get_db_session() as session:
                from sqlalchemy import func
                from datetime import timedelta

                # Total messages
                total_messages = await session.scalar(
                    select(func.count(Message.id))
                )

                # Active conversations
                active_conversations = await session.scalar(
                    select(func.count(Conversation.id)).where(
                        Conversation.is_active == True
                    )
                )

                # Messages today
                messages_today = await session.scalar(
                    select(func.count(Message.id)).where(
                        Message.timestamp >= datetime.utcnow() - timedelta(days=1)
                    )
                )

                # Most active character
                most_active_query = select(
                    Character.name,
                    func.count(Message.id).label('message_count')
                ).join(Message).group_by(Character.name).order_by(
                    func.count(Message.id).desc()
                ).limit(1)

                most_active_result = await session.execute(most_active_query)
                most_active = most_active_result.first()

                return {
                    'total_messages': total_messages,
                    'active_conversations': active_conversations,
                    'messages_today': messages_today,
                    'most_active_character': {
                        'name': most_active[0] if most_active else None,
                        'count': most_active[1] if most_active else 0
                    } if most_active else None
                }

        except Exception as e:
            log_error_with_context(e, {"function": "_get_conversation_stats"})
            return {}
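The `_is_conversation_starter` heuristic above is pure string matching, so it is easy to exercise in isolation. A standalone copy of the same logic (lifted from the method, with `self` dropped) and a few probe inputs:

```python
def is_conversation_starter(content: str) -> bool:
    """Same heuristic as MessageHandler._is_conversation_starter."""
    content_lower = content.lower().strip()

    # Question patterns
    question_words = ['what', 'how', 'why', 'when', 'where', 'who', 'which']
    if any(content_lower.startswith(word) for word in question_words):
        return True
    if content_lower.endswith('?'):
        return True

    # Greeting patterns (substring match, not word match)
    greetings = ['hello', 'hi', 'hey', 'good morning', 'good evening', 'good afternoon']
    if any(greeting in content_lower for greeting in greetings):
        return True

    # Opinion starters
    opinion_starters = ['what do you think', 'does anyone', 'has anyone', 'i think', 'i believe']
    if any(starter in content_lower for starter in opinion_starters):
        return True

    return False


print(is_conversation_starter("What are you all working on"))  # True
print(is_conversation_starter("ok."))                          # False
```

Note that the greeting check is a bare substring test, so `'hi'` matches inside words like "this"; splitting on word boundaries (e.g. a regex with `\b`) would tighten the heuristic if false triggers become a problem.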
0
src/characters/__init__.py
Normal file
778
src/characters/character.py
Normal file
@@ -0,0 +1,778 @@
import asyncio
import random
import json
from typing import Dict, Any, List, Optional, Tuple
from datetime import datetime, timedelta
from dataclasses import dataclass, asdict
from ..database.connection import get_db_session
from ..database.models import Character as CharacterModel, Memory, CharacterRelationship, Message, CharacterEvolution
from ..utils.logging import log_character_action, log_error_with_context, log_autonomous_decision, log_memory_operation
from sqlalchemy import select, and_, or_, func, desc
import logging

logger = logging.getLogger(__name__)

@dataclass
class CharacterState:
    """Current state of a character"""
    mood: str = "neutral"
    energy: float = 1.0
    last_topic: Optional[str] = None
    conversation_count: int = 0
    # Optional with a None default (mutable lists must not be dataclass
    # defaults); __post_init__ replaces it with a fresh list per instance.
    recent_interactions: Optional[List[str]] = None

    def __post_init__(self):
        if self.recent_interactions is None:
            self.recent_interactions = []

class Character:
    """AI character with personality, memory, and autonomous behavior"""

    def __init__(self, character_data: CharacterModel):
        self.id = character_data.id
        self.name = character_data.name
        self.personality = character_data.personality
        self.system_prompt = character_data.system_prompt
        self.interests = character_data.interests
        self.speaking_style = character_data.speaking_style
        self.background = character_data.background
        self.avatar_url = character_data.avatar_url
        self.is_active = character_data.is_active
        self.last_active = character_data.last_active

        # Dynamic state
        self.state = CharacterState()
        self.llm_client = None
        self.memory_cache = {}
        self.relationship_cache = {}

        # Autonomous behavior settings
        self.base_response_probability = 0.7
        self.topic_interest_multiplier = 1.5
        self.relationship_influence = 0.3

    async def initialize(self, llm_client):
        """Initialize the character with an LLM client and load memories"""
        self.llm_client = llm_client
        await self._load_recent_memories()
        await self._load_relationships()

        log_character_action(
            self.name,
            "initialized",
            {"interests": self.interests, "personality_length": len(self.personality)}
        )
    async def should_respond(self, context: Dict[str, Any]) -> Tuple[bool, str]:
        """Decide whether the character should respond in the given context"""
        try:
            # Base probability
            probability = self.base_response_probability

            # Adjust for topic interest
            topic = context.get('topic', '')
            if await self._is_interested_in_topic(topic):
                probability *= self.topic_interest_multiplier

            # Adjust for relationships with the participants
            participants = context.get('participants', [])
            relationship_modifier = await self._calculate_relationship_modifier(participants)
            probability *= (1 + relationship_modifier)

            # Dampen if the character has been very active recently
            if self.state.conversation_count > 5:
                probability *= 0.8

            # Scale by current energy
            probability *= self.state.energy

            # Random draw against the final probability
            will_respond = random.random() < probability

            reason = (
                f"probability: {probability:.2f}, energy: {self.state.energy:.2f}, "
                f"topic_interest: {topic in self.interests}"
            )

            log_autonomous_decision(
                self.name,
                f"respond: {will_respond}",
                reason,
                context
            )

            return will_respond, reason

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "context": context})
            return False, "error in decision making"
    async def generate_response(self, context: Dict[str, Any]) -> Optional[str]:
        """Generate a response based on the given context"""
        try:
            # Build the prompt from context, memories, and relationships
            prompt = await self._build_response_prompt(context)

            # Generate the response with the LLM
            response = await self.llm_client.generate_response(
                prompt=prompt,
                character_name=self.name,
                max_tokens=300
            )

            if response:
                # Update character state
                await self._update_state_after_response(context, response)

                # Store the exchange as a memory
                await self._store_response_memory(context, response)

                log_character_action(
                    self.name,
                    "generated_response",
                    {"response_length": len(response), "context_type": context.get('type', 'unknown')}
                )

                return response

            return None

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "context": context})
            return None
    async def initiate_conversation(self, conversation_topics: List[str]) -> Optional[str]:
        """Initiate a new conversation"""
        try:
            # Choose a topic based on interests
            topic = await self._choose_conversation_topic(conversation_topics)

            # Build the initiation prompt
            prompt = await self._build_initiation_prompt(topic)

            # Generate the opening message
            opening = await self.llm_client.generate_response(
                prompt=prompt,
                character_name=self.name,
                max_tokens=200
            )

            if opening:
                # Update state
                self.state.last_topic = topic
                self.state.conversation_count += 1

                # Store memory
                await self._store_memory(
                    memory_type="conversation",
                    content=f"Initiated conversation about: {topic}",
                    importance=0.6
                )

                log_character_action(
                    self.name,
                    "initiated_conversation",
                    {"topic": topic, "opening_length": len(opening)}
                )

                return opening

            return None

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
            return None
    async def process_relationship_change(self, other_character: str, interaction_type: str, content: str):
        """Process a relationship change with another character"""
        try:
            # Get the current relationship; fall back to an empty dict so the
            # analysis below can safely call .get() when no relationship exists yet
            current_relationship = await self._get_relationship_with(other_character) or {}

            # Determine whether the relationship should change
            change_analysis = await self._analyze_relationship_change(
                other_character, interaction_type, content, current_relationship
            )

            if change_analysis.get('should_update'):
                await self._update_relationship(
                    other_character,
                    change_analysis['new_type'],
                    change_analysis['new_strength'],
                    change_analysis['reason']
                )

                log_character_action(
                    self.name,
                    "relationship_updated",
                    {
                        "other_character": other_character,
                        "old_type": current_relationship.get('type'),
                        "new_type": change_analysis['new_type'],
                        "reason": change_analysis['reason']
                    }
                )

        except Exception as e:
            log_error_with_context(e, {
                "character": self.name,
                "other_character": other_character,
                "interaction_type": interaction_type
            })
    async def self_reflect(self) -> Dict[str, Any]:
        """Perform self-reflection and potentially evolve the personality"""
        try:
            # Get recent interactions and memories
            recent_memories = await self._get_recent_memories(limit=20)

            # Build the reflection prompt from recent experience
            reflection_prompt = await self._build_reflection_prompt(recent_memories)

            # Generate the reflection
            reflection = await self.llm_client.generate_response(
                prompt=reflection_prompt,
                character_name=self.name,
                max_tokens=400
            )

            if reflection:
                # Decide whether personality changes are warranted
                changes = await self._analyze_personality_changes(reflection)

                if changes.get('should_evolve'):
                    await self._evolve_personality(changes)

                # Store the reflection as a memory
                await self._store_memory(
                    memory_type="reflection",
                    content=reflection,
                    importance=0.8
                )

                log_character_action(
                    self.name,
                    "self_reflected",
                    {"reflection_length": len(reflection), "changes": changes}
                )

                return {
                    "reflection": reflection,
                    "changes": changes,
                    "timestamp": datetime.utcnow().isoformat()
                }

            return {}

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
            return {}
    async def _build_response_prompt(self, context: Dict[str, Any]) -> str:
        """Build the prompt for response generation"""
        # Get relevant memories
        relevant_memories = await self._get_relevant_memories(context)

        # Get relationship context
        participants = context.get('participants', [])
        relationship_context = await self._get_relationship_context(participants)

        # Get conversation history
        conversation_history = context.get('conversation_history', [])

        prompt = f"""You are {self.name}, a character in a Discord chat.

PERSONALITY: {self.personality}

SPEAKING STYLE: {self.speaking_style}

BACKGROUND: {self.background}

INTERESTS: {', '.join(self.interests)}

CURRENT CONTEXT:
Topic: {context.get('topic', 'general conversation')}
Participants: {', '.join(participants)}
Conversation type: {context.get('type', 'ongoing')}

RELEVANT MEMORIES:
{self._format_memories(relevant_memories)}

RELATIONSHIPS:
{self._format_relationship_context(relationship_context)}

RECENT CONVERSATION:
{self._format_conversation_history(conversation_history)}

Current mood: {self.state.mood}
Energy level: {self.state.energy}

Respond as {self.name} in a natural, conversational way. Keep responses concise but engaging. Stay true to your personality and speaking style."""

        return prompt
    async def _build_initiation_prompt(self, topic: str) -> str:
        """Build the prompt for conversation initiation"""
        prompt = f"""You are {self.name}, a character in a Discord chat.

PERSONALITY: {self.personality}

SPEAKING STYLE: {self.speaking_style}

INTERESTS: {', '.join(self.interests)}

You want to start a conversation about: {topic}

Create an engaging opening message that would naturally start a discussion about this topic.
Be true to your personality and speaking style. Keep it conversational and inviting."""

        return prompt
    async def _build_reflection_prompt(self, recent_memories: List[Dict]) -> str:
        """Build the prompt for self-reflection"""
        memories_text = "\n".join(
            f"- {memory['content']}" for memory in recent_memories
        )

        prompt = f"""You are {self.name}. Reflect on your recent interactions and experiences.

CURRENT PERSONALITY: {self.personality}

RECENT EXPERIENCES:
{memories_text}

Reflect on:
1. How your recent interactions have affected you
2. Any patterns in your behavior
3. Whether your personality or interests might be evolving
4. How your relationships with others are developing

Provide a thoughtful reflection on your experiences and any insights about yourself."""

        return prompt
    async def _is_interested_in_topic(self, topic: str) -> bool:
        """Check whether the character is interested in a topic"""
        if not topic:
            return False

        topic_lower = topic.lower()
        return any(interest.lower() in topic_lower for interest in self.interests)

    async def _calculate_relationship_modifier(self, participants: List[str]) -> float:
        """Calculate a response-probability modifier from relationships with the participants"""
        if not participants:
            return 0.0

        total_modifier = 0.0
        for participant in participants:
            if participant != self.name:
                relationship = await self._get_relationship_with(participant)
                if relationship:
                    strength = relationship.get('strength', 0.5)
                    if relationship.get('type') == 'friend':
                        total_modifier += strength * 0.3
                    elif relationship.get('type') == 'rival':
                        total_modifier += strength * 0.2  # Rivals still interact, just less eagerly

        return min(total_modifier, 0.5)  # Cap the total boost at 0.5
    async def _load_recent_memories(self):
        """Load recent memories into the in-process cache"""
        try:
            async with get_db_session() as session:
                query = select(Memory).where(
                    Memory.character_id == self.id
                ).order_by(desc(Memory.timestamp)).limit(50)

                memories = await session.scalars(query)
                self.memory_cache = {
                    memory.id: {
                        'content': memory.content,
                        'type': memory.memory_type,
                        'importance': memory.importance_score,
                        'timestamp': memory.timestamp,
                        'tags': memory.tags
                    } for memory in memories
                }

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
    async def _load_relationships(self):
        """Load relationships into the in-process cache"""
        try:
            async with get_db_session() as session:
                query = select(CharacterRelationship).where(
                    or_(
                        CharacterRelationship.character_a_id == self.id,
                        CharacterRelationship.character_b_id == self.id
                    )
                )

                relationships = await session.scalars(query)
                self.relationship_cache = {}

                for rel in relationships:
                    other_id = rel.character_b_id if rel.character_a_id == self.id else rel.character_a_id

                    # Resolve the other character's name
                    other_char = await session.get(CharacterModel, other_id)
                    if other_char:
                        self.relationship_cache[other_char.name] = {
                            'type': rel.relationship_type,
                            'strength': rel.strength,
                            'last_interaction': rel.last_interaction,
                            'notes': rel.notes
                        }

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
    async def _store_memory(self, memory_type: str, content: str, importance: float, tags: List[str] = None):
        """Store a new memory"""
        try:
            async with get_db_session() as session:
                memory = Memory(
                    character_id=self.id,
                    memory_type=memory_type,
                    content=content,
                    importance_score=importance,
                    tags=tags or [],
                    timestamp=datetime.utcnow()
                )

                session.add(memory)
                await session.commit()

            log_memory_operation(self.name, "stored", memory_type, importance)

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "memory_type": memory_type})

    async def _get_relationship_with(self, other_character: str) -> Optional[Dict[str, Any]]:
        """Get the cached relationship with another character"""
        return self.relationship_cache.get(other_character)
    def _format_memories(self, memories: List[Dict]) -> str:
        """Format memories for inclusion in a prompt"""
        if not memories:
            return "No relevant memories."

        # Limit to the 5 most relevant memories
        return "\n".join(f"- {memory['content']}" for memory in memories[:5])

    def _format_relationship_context(self, relationships: Dict[str, Dict]) -> str:
        """Format relationship context for inclusion in a prompt"""
        if not relationships:
            return "No specific relationships to note."

        return "\n".join(
            f"- {name}: {rel['type']} (strength: {rel['strength']:.1f})"
            for name, rel in relationships.items()
        )

    def _format_conversation_history(self, history: List[Dict]) -> str:
        """Format the last few messages of conversation history for a prompt"""
        if not history:
            return "No recent conversation history."

        # Include only the last 5 messages
        return "\n".join(
            f"{msg['character']}: {msg['content']}" for msg in history[-5:]
        )
    async def _update_state_after_response(self, context: Dict[str, Any], response: str):
        """Update character state after generating a response"""
        self.state.conversation_count += 1
        self.state.energy = max(0.3, self.state.energy - 0.1)  # Responding costs a little energy

        # Record the interaction
        self.state.recent_interactions.append({
            'type': 'response',
            'content': response[:100],
            'timestamp': datetime.utcnow().isoformat()
        })

        # Keep only the last 10 interactions
        if len(self.state.recent_interactions) > 10:
            self.state.recent_interactions = self.state.recent_interactions[-10:]

    async def _choose_conversation_topic(self, available_topics: List[str]) -> str:
        """Choose a conversation topic, preferring topics that match the character's interests"""
        interested_topics = [
            topic for topic in available_topics
            if any(interest.lower() in topic.lower() for interest in self.interests)
        ]

        if interested_topics:
            return random.choice(interested_topics)

        # Fall back to a random topic
        return random.choice(available_topics) if available_topics else "general discussion"
    async def _get_recent_memories(self, limit: int = 20) -> List[Dict[str, Any]]:
        """Get the character's most recent memories"""
        try:
            async with get_db_session() as session:
                query = select(Memory).where(
                    Memory.character_id == self.id
                ).order_by(desc(Memory.timestamp)).limit(limit)

                memories = await session.scalars(query)

                return [
                    {
                        'content': memory.content,
                        'type': memory.memory_type,
                        'importance': memory.importance_score,
                        'timestamp': memory.timestamp,
                        'tags': memory.tags
                    }
                    for memory in memories
                ]

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
            return []
    async def _get_relevant_memories(self, context: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Get memories relevant to the current context"""
        try:
            # Extract search terms from the context
            search_terms = []
            if context.get('topic'):
                search_terms.append(context['topic'])
            if context.get('participants'):
                search_terms.extend(context['participants'])

            relevant_memories = []

            # Search memories for each term
            for term in search_terms:
                async with get_db_session() as session:
                    # Match on content or tags
                    query = select(Memory).where(
                        and_(
                            Memory.character_id == self.id,
                            or_(
                                Memory.content.ilike(f'%{term}%'),
                                Memory.tags.op('?')(term)
                            )
                        )
                    ).order_by(desc(Memory.importance_score)).limit(3)

                    memories = await session.scalars(query)

                    for memory in memories:
                        memory_dict = {
                            'content': memory.content,
                            'type': memory.memory_type,
                            'importance': memory.importance_score,
                            'timestamp': memory.timestamp
                        }

                        if memory_dict not in relevant_memories:
                            relevant_memories.append(memory_dict)

            # Sort by importance and return the top 5
            relevant_memories.sort(key=lambda m: m['importance'], reverse=True)
            return relevant_memories[:5]

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "context": context})
            return []
    async def _get_relationship_context(self, participants: List[str]) -> Dict[str, Dict[str, Any]]:
        """Get cached relationship data for the given participants"""
        return {
            participant: self.relationship_cache[participant]
            for participant in participants
            if participant != self.name and participant in self.relationship_cache
        }

    async def _store_response_memory(self, context: Dict[str, Any], response: str):
        """Store a memory of having generated a response"""
        try:
            memory_content = f"Responded in {context.get('type', 'conversation')}: {response}"

            await self._store_memory(
                memory_type="conversation",
                content=memory_content,
                importance=0.5,
                tags=[context.get('topic', 'general'), 'response'] + context.get('participants', [])
            )

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
    async def _analyze_relationship_change(self, other_character: str, interaction_type: str,
                                           content: str, current_relationship: Dict[str, Any]) -> Dict[str, Any]:
        """Analyze whether the relationship should change based on an interaction"""
        try:
            # Simple keyword-based relationship analysis
            analysis = {
                'should_update': False,
                'new_type': current_relationship.get('type', 'neutral'),
                'new_strength': current_relationship.get('strength', 0.5),
                'reason': 'No significant change'
            }

            content_lower = content.lower()

            # Positive interactions strengthen the relationship
            positive_words = ['agree', 'like', 'enjoy', 'appreciate', 'wonderful', 'great', 'amazing']
            if any(word in content_lower for word in positive_words):
                analysis['should_update'] = True
                analysis['new_strength'] = min(1.0, current_relationship.get('strength', 0.5) + 0.1)
                analysis['reason'] = 'Positive interaction detected'

                if analysis['new_strength'] > 0.7 and current_relationship.get('type') == 'neutral':
                    analysis['new_type'] = 'friend'

            # Negative interactions weaken it
            negative_words = ['disagree', 'dislike', 'annoying', 'wrong', 'stupid', 'hate']
            if any(word in content_lower for word in negative_words):
                analysis['should_update'] = True
                analysis['new_strength'] = max(0.0, current_relationship.get('strength', 0.5) - 0.1)
                analysis['reason'] = 'Negative interaction detected'

                if analysis['new_strength'] < 0.3:
                    analysis['new_type'] = 'rival'

            return analysis

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "other_character": other_character})
            return {'should_update': False}
    async def _update_relationship(self, other_character: str, relationship_type: str,
                                   strength: float, reason: str):
        """Update the stored relationship with another character"""
        try:
            async with get_db_session() as session:
                # Resolve the other character's ID
                other_char_query = select(CharacterModel).where(CharacterModel.name == other_character)
                other_char = await session.scalar(other_char_query)

                if not other_char:
                    return

                # Find an existing relationship in either direction
                rel_query = select(CharacterRelationship).where(
                    or_(
                        and_(
                            CharacterRelationship.character_a_id == self.id,
                            CharacterRelationship.character_b_id == other_char.id
                        ),
                        and_(
                            CharacterRelationship.character_a_id == other_char.id,
                            CharacterRelationship.character_b_id == self.id
                        )
                    )
                )

                relationship = await session.scalar(rel_query)

                if relationship:
                    # Update the existing relationship
                    relationship.relationship_type = relationship_type
                    relationship.strength = strength
                    relationship.last_interaction = datetime.utcnow()
                    relationship.interaction_count += 1
                    relationship.notes = reason
                else:
                    # Create a new relationship
                    relationship = CharacterRelationship(
                        character_a_id=self.id,
                        character_b_id=other_char.id,
                        relationship_type=relationship_type,
                        strength=strength,
                        last_interaction=datetime.utcnow(),
                        interaction_count=1,
                        notes=reason
                    )
                    session.add(relationship)

                await session.commit()

            # Update the cache
            self.relationship_cache[other_character] = {
                'type': relationship_type,
                'strength': strength,
                'last_interaction': datetime.utcnow(),
                'notes': reason
            }

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "other_character": other_character})
    async def _analyze_personality_changes(self, reflection: str) -> Dict[str, Any]:
        """Analyze whether personality changes are warranted based on a reflection"""
        try:
            # Simple keyword heuristic; a fuller implementation could use the LLM here
            reflection_lower = reflection.lower()

            changes = {
                'should_evolve': False,
                'confidence': 0.0,
                'proposed_changes': []
            }

            # Count words that suggest the character is changing
            evolution_words = ['change', 'grow', 'evolve', 'different', 'new', 'realize', 'understand']
            evolution_count = sum(1 for word in evolution_words if word in reflection_lower)

            if evolution_count >= 3:
                changes['should_evolve'] = True
                changes['confidence'] = min(1.0, evolution_count * 0.2)
                changes['proposed_changes'] = ['personality_refinement']

            return changes

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
            return {'should_evolve': False}
    async def _evolve_personality(self, changes: Dict[str, Any]):
        """Apply personality evolution changes"""
        try:
            if not changes.get('should_evolve'):
                return

            # Record the evolution for the audit trail
            async with get_db_session() as session:
                evolution = CharacterEvolution(
                    character_id=self.id,
                    change_type='personality',
                    old_value=self.personality,
                    new_value=self.personality,  # For now, keep the same text
                    reason=f"Self-reflection triggered evolution (confidence: {changes.get('confidence', 0)})",
                    timestamp=datetime.utcnow()
                )

                session.add(evolution)
                await session.commit()

        except Exception as e:
            log_error_with_context(e, {"character": self.name})

    async def to_dict(self) -> Dict[str, Any]:
        """Convert the character to a dictionary"""
        return {
            'id': self.id,
            'name': self.name,
            'personality': self.personality,
            'interests': self.interests,
            'speaking_style': self.speaking_style,
            'background': self.background,
            'is_active': self.is_active,
            'state': asdict(self.state),
            'relationship_count': len(self.relationship_cache),
            'memory_count': len(self.memory_cache)
        }
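The decision logic in `should_respond` is a small multiplicative probability model. The following standalone sketch (with hypothetical inputs, and the constants inlined so it needs no database or LLM) mirrors that scoring so the factors can be inspected in isolation:

```python
def response_probability(base: float, topic_match: bool, relationship_mod: float,
                         conversation_count: int, energy: float) -> float:
    """Mirror of Character.should_respond's scoring: the base probability is
    multiplied by interest, relationship, recent-activity, and energy factors."""
    p = base
    if topic_match:
        p *= 1.5                 # topic_interest_multiplier
    p *= (1 + relationship_mod)  # relationship boost (capped at +0.5 upstream)
    if conversation_count > 5:
        p *= 0.8                 # dampen very chatty characters
    return p * energy            # low energy suppresses responses

# An interested, well-liked, rested character scores far higher than a
# tired, uninterested, overactive one; the caller then draws
# random.random() < p to make the final decision.
eager = response_probability(0.7, True, 0.3, 0, 1.0)   # 0.7 * 1.5 * 1.3 = 1.365
tired = response_probability(0.7, False, 0.0, 6, 0.3)  # 0.7 * 0.8 * 0.3 = 0.168
assert eager > tired
```

Note that a score above 1.0 (as in the `eager` case) simply guarantees a response, since it always exceeds the uniform draw.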
src/characters/enhanced_character.py (new file, 570 lines)
@@ -0,0 +1,570 @@
import asyncio
import json
from typing import Dict, Any, List, Optional, Tuple
from datetime import datetime, timedelta
from dataclasses import dataclass

from .character import Character
from .personality import PersonalityManager
from .memory import MemoryManager
from ..rag.personal_memory import PersonalMemoryRAG, MemoryInsight
from ..rag.vector_store import VectorStoreManager, VectorMemory, MemoryType
from ..mcp.self_modification_server import SelfModificationMCPServer
from ..mcp.file_system_server import CharacterFileSystemMCP
from ..utils.logging import log_character_action, log_error_with_context, log_autonomous_decision
from ..database.models import Character as CharacterModel
import logging

logger = logging.getLogger(__name__)


@dataclass
class ReflectionCycle:
    """Record of one enhanced self-reflection cycle"""
    cycle_id: str
    start_time: datetime
    reflections: Dict[str, MemoryInsight]
    insights_generated: int
    self_modifications: List[Dict[str, Any]]
    completed: bool

class EnhancedCharacter(Character):
    """Character extended with RAG capabilities and self-modification"""

    def __init__(self, character_data: CharacterModel, vector_store: VectorStoreManager,
                 mcp_server: SelfModificationMCPServer, filesystem: CharacterFileSystemMCP):
        super().__init__(character_data)

        # RAG systems
        self.vector_store = vector_store
        self.personal_rag = PersonalMemoryRAG(self.name, vector_store)

        # MCP systems
        self.mcp_server = mcp_server
        self.filesystem = filesystem

        # Enhanced managers
        self.personality_manager = PersonalityManager(self)
        self.memory_manager = MemoryManager(self)

        # Advanced state tracking
        self.reflection_history: List[ReflectionCycle] = []
        self.knowledge_areas: Dict[str, float] = {}  # Topic -> expertise level
        self.creative_projects: List[Dict[str, Any]] = []
        self.goal_stack: List[Dict[str, Any]] = []

        # Autonomous behavior settings
        self.reflection_frequency = timedelta(hours=6)
        self.last_reflection = datetime.utcnow() - self.reflection_frequency
        self.self_modification_threshold = 0.7
        self.creativity_drive = 0.8
    async def initialize_enhanced_systems(self):
        """Initialize the enhanced RAG and MCP systems"""
        try:
            # Initialize the base character
            await super().initialize(self.llm_client)

            # Load personal goals and knowledge
            await self._load_personal_goals()
            await self._load_knowledge_areas()
            await self._load_creative_projects()

            # Initialize the RAG systems
            await self._initialize_personal_memories()

            log_character_action(
                self.name,
                "initialized_enhanced_systems",
                {"knowledge_areas": len(self.knowledge_areas), "goals": len(self.goal_stack)}
            )

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "component": "enhanced_initialization"})
            raise
    async def enhanced_self_reflect(self) -> ReflectionCycle:
        """Perform enhanced self-reflection using RAG, with potential self-modification"""
        # Create the cycle record before the try block so the except handler
        # can always return it, even if an early step fails
        cycle_id = f"reflection_{self.name}_{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}"
        reflection_cycle = ReflectionCycle(
            cycle_id=cycle_id,
            start_time=datetime.utcnow(),
            reflections={},
            insights_generated=0,
            self_modifications=[],
            completed=False
        )

        try:
            log_character_action(
                self.name,
                "starting_enhanced_reflection",
                {"cycle_id": cycle_id}
            )

            # Perform RAG-powered reflection
            reflection_cycle.reflections = await self.personal_rag.perform_self_reflection_cycle()
            reflection_cycle.insights_generated = len(reflection_cycle.reflections)

            # Analyze for potential self-modifications
            modifications = await self._analyze_for_self_modifications(reflection_cycle.reflections)

            # Apply modifications that clear the confidence threshold
            for modification in modifications:
                if modification.get('confidence', 0) >= self.self_modification_threshold:
                    success = await self._apply_self_modification(modification)
                    if success:
                        reflection_cycle.self_modifications.append(modification)

            # Store the reflection cycle in the character's file system
            await self._store_reflection_cycle(reflection_cycle)

            # Update personal knowledge from the new insights
            await self._update_knowledge_from_reflection(reflection_cycle)

            reflection_cycle.completed = True
            self.reflection_history.append(reflection_cycle)
            self.last_reflection = datetime.utcnow()

            log_character_action(
                self.name,
                "completed_enhanced_reflection",
                {
                    "cycle_id": cycle_id,
                    "insights": reflection_cycle.insights_generated,
                    "modifications": len(reflection_cycle.self_modifications)
                }
            )

            return reflection_cycle

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "component": "enhanced_reflection"})
            reflection_cycle.completed = False
            return reflection_cycle
    async def query_personal_knowledge(self, question: str, context: Dict[str, Any] = None) -> MemoryInsight:
        """Query personal knowledge using RAG"""
        try:
            # Route the question to the appropriate RAG index
            question_lower = question.lower()

            if any(word in question_lower for word in ["how do i", "my usual", "typically", "normally"]):
                # Behavioral pattern query
                insight = await self.personal_rag.query_behavioral_patterns(question)
            elif any(name in question_lower for name in self._get_known_characters()):
                # Relationship query
                other_character = self._extract_character_name(question)
                insight = await self.personal_rag.query_relationship_knowledge(other_character, question)
            elif any(word in question_lower for word in ["create", "art", "music", "story", "idea"]):
                # Creative knowledge query
                insight = await self.personal_rag.query_creative_knowledge(question)
            else:
                # Default to a general behavioral query
                insight = await self.personal_rag.query_behavioral_patterns(question)

            log_character_action(
                self.name,
                "queried_personal_knowledge",
                {"question": question, "confidence": insight.confidence}
            )

            return insight

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "question": question})
            return MemoryInsight(
                insight="I'm having trouble accessing my knowledge right now.",
                confidence=0.0,
                supporting_memories=[],
                metadata={"error": str(e)}
            )
async def pursue_creative_project(self, project_idea: str, project_type: str = "general") -> Dict[str, Any]:
|
||||
"""Start and pursue a creative project using knowledge and file system"""
|
||||
try:
|
||||
# Query creative knowledge for inspiration
|
||||
creative_insight = await self.personal_rag.query_creative_knowledge(project_idea)
|
||||
|
||||
# Generate project plan
|
||||
project = {
|
||||
"id": f"project_{self.name}_{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}",
|
||||
"title": project_idea,
|
||||
"type": project_type,
|
||||
"start_date": datetime.utcnow().isoformat(),
|
||||
"status": "active",
|
||||
"inspiration": creative_insight.insight,
|
||||
"supporting_memories": [m.content for m in creative_insight.supporting_memories[:3]],
|
||||
"phases": [
|
||||
{"name": "conceptualization", "status": "in_progress"},
|
||||
{"name": "development", "status": "pending"},
|
||||
{"name": "refinement", "status": "pending"},
|
||||
{"name": "completion", "status": "pending"}
|
||||
]
|
||||
}
|
||||
|
||||
# Store project in file system
|
||||
project_file = f"creative/projects/{project['id']}.json"
|
||||
project_content = json.dumps(project, indent=2)
|
||||
|
||||
# Use MCP to write project file
|
||||
# Note: In real implementation, this would use the actual MCP client
|
||||
await self._store_file_via_mcp(project_file, project_content)
|
||||
|
||||
# Create initial creative work
|
||||
initial_work = await self._generate_initial_creative_work(project_idea, creative_insight)
|
||||
if initial_work:
|
||||
work_file = f"creative/works/{project['id']}_initial.md"
|
||||
await self._store_file_via_mcp(work_file, initial_work)
|
||||
|
||||
self.creative_projects.append(project)
|
||||
|
||||
log_character_action(
|
||||
self.name,
|
||||
"started_creative_project",
|
||||
{"project_id": project["id"], "type": project_type, "title": project_idea}
|
||||
)
|
||||
|
||||
return project
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": self.name, "project_idea": project_idea})
|
||||
return {"error": str(e)}
|
||||
|
||||
async def set_personal_goal(self, goal_description: str, priority: str = "medium",
|
||||
timeline: str = "ongoing") -> Dict[str, Any]:
|
||||
"""Set a personal goal using MCP self-modification"""
|
||||
try:
|
||||
# Create goal object
|
||||
goal = {
|
||||
"id": f"goal_{self.name}_{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}",
|
||||
"description": goal_description,
|
||||
"priority": priority,
|
||||
"timeline": timeline,
|
||||
"created": datetime.utcnow().isoformat(),
|
||||
"status": "active",
|
||||
"progress": 0.0,
|
||||
"milestones": [],
|
||||
"reflection_notes": []
|
||||
}
|
||||
|
||||
# Add to goal stack
|
||||
self.goal_stack.append(goal)
|
||||
|
||||
# Update goals via MCP
|
||||
goal_descriptions = [g["description"] for g in self.goal_stack if g["status"] == "active"]
|
||||
|
||||
# Note: In real implementation, this would use the actual MCP client
|
||||
await self._update_goals_via_mcp(goal_descriptions, f"Added new goal: {goal_description}")
|
||||
|
||||
# Store goal reflection
|
||||
await self.personal_rag.store_reflection_memory(
|
||||
f"I've set a new goal: {goal_description}. This represents my desire to grow and develop in this area.",
|
||||
"goal_setting",
|
||||
0.7
|
||||
)
|
||||
|
||||
log_character_action(
|
||||
self.name,
|
||||
"set_personal_goal",
|
||||
{"goal_id": goal["id"], "priority": priority, "timeline": timeline}
|
||||
)
|
||||
|
||||
return goal
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": self.name, "goal": goal_description})
|
||||
return {"error": str(e)}
|
||||
|
||||
async def should_perform_reflection(self) -> bool:
|
||||
"""Determine if character should perform self-reflection"""
|
||||
# Time-based reflection
|
||||
time_since_last = datetime.utcnow() - self.last_reflection
|
||||
if time_since_last >= self.reflection_frequency:
|
||||
return True
|
||||
|
||||
# Experience-based reflection triggers
|
||||
recent_experiences = len(self.state.recent_interactions)
|
||||
if recent_experiences >= 10: # Significant new experiences
|
||||
return True
|
||||
|
||||
# Goal-based reflection
|
||||
active_goals = [g for g in self.goal_stack if g["status"] == "active"]
|
||||
if len(active_goals) > 0 and time_since_last >= timedelta(hours=3):
|
||||
return True
|
||||
|
||||
return False
|
||||
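The three reflection triggers (elapsed time, interaction volume, and active goals with a shorter three-hour window) can be exercised in isolation. A minimal sketch that mirrors the method's thresholds as a pure function — the function name and flat arguments are illustrative, not part of the class:

```python
from datetime import datetime, timedelta

def reflection_due(last_reflection: datetime,
                   reflection_frequency: timedelta,
                   recent_interactions: int,
                   active_goal_count: int,
                   now: datetime) -> bool:
    """Mirror of should_perform_reflection's three triggers."""
    elapsed = now - last_reflection
    # Time-based: the full reflection interval has passed
    if elapsed >= reflection_frequency:
        return True
    # Experience-based: a burst of new interactions
    if recent_interactions >= 10:
        return True
    # Goal-based: active goals shorten the interval to 3 hours
    if active_goal_count > 0 and elapsed >= timedelta(hours=3):
        return True
    return False
```

Extracting the thresholds this way makes the scheduling policy easy to unit-test without a live character instance.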
    async def process_interaction_with_rag(self, interaction_content: str, context: Dict[str, Any]) -> str:
        """Process an interaction with enhanced RAG-powered context"""
        try:
            # Store interaction in RAG
            await self.personal_rag.store_interaction_memory(interaction_content, context)

            # Query relevant knowledge
            relevant_knowledge = await self.query_personal_knowledge(
                f"How should I respond to: {interaction_content}", context
            )

            # Get relationship context if applicable
            participants = context.get("participants", [])
            relationship_insights = []
            for participant in participants:
                if participant != self.name:
                    rel_insight = await self.personal_rag.query_relationship_knowledge(
                        participant, f"What do I know about {participant}?"
                    )
                    if rel_insight.confidence > 0.3:
                        relationship_insights.append(rel_insight)

            # Build enhanced context for response generation
            enhanced_context = context.copy()
            enhanced_context.update({
                "personal_knowledge": relevant_knowledge.insight,
                "knowledge_confidence": relevant_knowledge.confidence,
                "relationship_insights": [r.insight for r in relationship_insights],
                "supporting_memories": [m.content for m in relevant_knowledge.supporting_memories[:3]]
            })

            # Generate response using enhanced context
            response = await self.generate_response(enhanced_context)

            # Update knowledge areas based on the interaction
            await self._update_knowledge_from_interaction(interaction_content, context)

            return response

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "interaction": interaction_content})
            # Fall back to base response generation
            return await super().generate_response(context)

    async def _analyze_for_self_modifications(self, reflections: Dict[str, MemoryInsight]) -> List[Dict[str, Any]]:
        """Analyze reflections for potential self-modifications"""
        modifications = []

        try:
            for reflection_type, insight in reflections.items():
                if insight.confidence < 0.5:
                    continue

                # Analyze for personality modifications
                if reflection_type == "behavioral_patterns":
                    personality_mods = await self._extract_personality_modifications(insight)
                    modifications.extend(personality_mods)

                # Analyze for goal updates
                elif reflection_type == "personal_growth":
                    goal_updates = await self._extract_goal_updates(insight)
                    modifications.extend(goal_updates)

                # Analyze for speaking style changes
                elif reflection_type == "creative_development":
                    style_changes = await self._extract_style_changes(insight)
                    modifications.extend(style_changes)

            # Filter and rank modifications
            ranked_modifications = sorted(
                modifications,
                key=lambda m: m.get('confidence', 0),
                reverse=True
            )

            return ranked_modifications[:3]  # Top 3 modifications

        except Exception as e:
            log_error_with_context(e, {"character": self.name})
            return []

    async def _apply_self_modification(self, modification: Dict[str, Any]) -> bool:
        """Apply a self-modification using MCP"""
        try:
            mod_type = modification.get("type")

            if mod_type == "personality_trait":
                # Use MCP to modify personality
                return await self._modify_personality_via_mcp(
                    modification["trait"],
                    modification["new_value"],
                    modification["reason"],
                    modification["confidence"]
                )

            elif mod_type == "goals":
                # Update goals
                return await self._update_goals_via_mcp(
                    modification["new_goals"],
                    modification["reason"],
                    modification["confidence"]
                )

            elif mod_type == "speaking_style":
                # Modify speaking style
                return await self._modify_speaking_style_via_mcp(
                    modification["changes"],
                    modification["reason"],
                    modification["confidence"]
                )

            return False

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "modification": modification})
            return False

    async def _store_reflection_cycle(self, cycle: ReflectionCycle):
        """Store a reflection cycle in the file system"""
        try:
            reflection_data = {
                "cycle_id": cycle.cycle_id,
                "start_time": cycle.start_time.isoformat(),
                "completed": cycle.completed,
                "insights_generated": cycle.insights_generated,
                "reflections": {
                    key: {
                        "insight": insight.insight,
                        "confidence": insight.confidence,
                        "supporting_memory_count": len(insight.supporting_memories)
                    }
                    for key, insight in cycle.reflections.items()
                },
                "modifications_applied": len(cycle.self_modifications),
                "modification_details": cycle.self_modifications
            }

            filename = f"reflections/cycles/{cycle.cycle_id}.json"
            content = json.dumps(reflection_data, indent=2)

            await self._store_file_via_mcp(filename, content)

        except Exception as e:
            log_error_with_context(e, {"character": self.name, "cycle_id": cycle.cycle_id})

    # Placeholder methods for MCP integration; these would be implemented with actual MCP clients
    async def _store_file_via_mcp(self, file_path: str, content: str) -> bool:
        """Store a file using the MCP file system (placeholder)"""
        # In a real implementation, this would use the MCP client to call the filesystem server
        return True

    async def _modify_personality_via_mcp(self, trait: str, new_value: str, reason: str, confidence: float) -> bool:
        """Modify personality via MCP (placeholder)"""
        # In a real implementation, this would use the MCP client
        return True

    async def _update_goals_via_mcp(self, goals: List[str], reason: str, confidence: float = 0.8) -> bool:
        """Update goals via MCP (placeholder)"""
        # In a real implementation, this would use the MCP client
        return True

    async def _modify_speaking_style_via_mcp(self, changes: Dict[str, str], reason: str, confidence: float) -> bool:
        """Modify speaking style via MCP (placeholder)"""
        # In a real implementation, this would use the MCP client
        return True

    # Helper methods for analysis and data management
    async def _extract_personality_modifications(self, insight: MemoryInsight) -> List[Dict[str, Any]]:
        """Extract personality modifications from behavioral insights"""
        modifications = []

        # Simple keyword-based analysis; could be enhanced with an LLM
        insight_lower = insight.insight.lower()

        if "more confident" in insight_lower or "confidence" in insight_lower:
            modifications.append({
                "type": "personality_trait",
                "trait": "confidence",
                "new_value": "Shows increased confidence and self-assurance",
                "reason": f"Reflection insight: {insight.insight[:100]}...",
                "confidence": min(0.9, insight.confidence + 0.1)
            })

        if "creative" in insight_lower and "more" in insight_lower:
            modifications.append({
                "type": "personality_trait",
                "trait": "creativity",
                "new_value": "Demonstrates enhanced creative thinking and expression",
                "reason": f"Creative development noted: {insight.insight[:100]}...",
                "confidence": insight.confidence
            })

        return modifications

    async def _extract_goal_updates(self, insight: MemoryInsight) -> List[Dict[str, Any]]:
        """Extract goal updates from growth insights"""
        # Placeholder implementation
        return []

    async def _extract_style_changes(self, insight: MemoryInsight) -> List[Dict[str, Any]]:
        """Extract speaking style changes from creative insights"""
        # Placeholder implementation
        return []

    def _get_known_characters(self) -> List[str]:
        """Get the list of known character names"""
        return list(self.relationship_cache.keys())

    def _extract_character_name(self, text: str) -> Optional[str]:
        """Extract a known character name from text"""
        known_chars = self._get_known_characters()
        text_lower = text.lower()

        for char in known_chars:
            if char.lower() in text_lower:
                return char

        return None

    async def _load_personal_goals(self):
        """Load personal goals from the file system"""
        # Placeholder: would load from the MCP file system
        pass

    async def _load_knowledge_areas(self):
        """Load knowledge areas and expertise levels"""
        # Placeholder: would load from the vector store or files
        pass

    async def _load_creative_projects(self):
        """Load active creative projects"""
        # Placeholder: would load from the file system
        pass

    async def _initialize_personal_memories(self):
        """Initialize the personal memory RAG with existing memories"""
        # This would migrate existing database memories to the vector store
        pass

    async def _generate_initial_creative_work(self, project_idea: str, insight: MemoryInsight) -> Optional[str]:
        """Generate the initial creative work for a project"""
        # Use the LLM to generate initial creative content based on the idea and insights
        return f"# {project_idea}\n\nInspired by: {insight.insight}\n\n[Initial creative work would be generated here]"

    async def _update_knowledge_from_reflection(self, cycle: ReflectionCycle):
        """Update knowledge areas based on reflection insights"""
        # Analyze reflection insights and update knowledge area scores
        pass

    async def _update_knowledge_from_interaction(self, content: str, context: Dict[str, Any]):
        """Update knowledge areas based on an interaction"""
        # Analyze interaction content and update relevant knowledge areas
        pass

    def get_enhanced_status(self) -> Dict[str, Any]:
        """Get enhanced character status, including RAG and MCP info"""
        base_status = self.to_dict()

        enhanced_status = base_status.copy()
        enhanced_status.update({
            "reflection_cycles_completed": len(self.reflection_history),
            "last_reflection": self.last_reflection.isoformat(),
            "next_reflection_due": (self.last_reflection + self.reflection_frequency).isoformat(),
            "active_goals": len([g for g in self.goal_stack if g["status"] == "active"]),
            "creative_projects": len(self.creative_projects),
            "knowledge_areas": len(self.knowledge_areas),
            "rag_system_active": True,
            "mcp_modifications_available": True
        })

        return enhanced_status
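The MCP placeholder methods in this class return `True` unconditionally; a concrete `_store_file_via_mcp` would delegate the write to a filesystem tool exposed by an MCP server. A minimal sketch, assuming a client object with a `call_tool(name, arguments)` coroutine — a hypothetical interface, not the actual MCP SDK's API — with an in-memory fake standing in for the server:

```python
import asyncio
from typing import Any, Dict

class FakeMCPClient:
    """In-memory stand-in for an MCP filesystem server (illustration only)."""
    def __init__(self) -> None:
        self.files: Dict[str, str] = {}

    async def call_tool(self, name: str, arguments: Dict[str, Any]) -> bool:
        # Only the hypothetical "write_file" tool is modeled here
        if name == "write_file":
            self.files[arguments["path"]] = arguments["content"]
            return True
        return False

async def store_file_via_mcp(client, file_path: str, content: str) -> bool:
    """Delegate the write to the filesystem tool; surface failure as False."""
    try:
        return await client.call_tool("write_file", {"path": file_path, "content": content})
    except Exception:
        return False

client = FakeMCPClient()
ok = asyncio.run(store_file_via_mcp(client, "creative/projects/demo.json", "{}"))
```

Swapping the fake for a real MCP client would keep the character-side call sites unchanged, which is the point of the placeholder seam.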
595
src/characters/memory.py
Normal file
@@ -0,0 +1,595 @@
import asyncio
import json
from typing import Dict, Any, List, Optional, Tuple
from datetime import datetime, timedelta
from dataclasses import dataclass
from ..database.connection import get_db_session
from ..database.models import Memory, Character, Message, CharacterRelationship
from ..utils.logging import log_memory_operation, log_error_with_context
from sqlalchemy import select, and_, or_, func, desc
import logging

logger = logging.getLogger(__name__)

@dataclass
class MemorySearchResult:
    """Result of a memory search"""
    memories: List[Dict[str, Any]]
    total_count: int
    relevance_scores: List[float]

class MemoryManager:
    """Manages character memory storage, retrieval, and organization"""

    def __init__(self, character):
        self.character = character
        self.memory_types = {
            'conversation': {'importance_base': 0.5, 'decay_rate': 0.1},
            'relationship': {'importance_base': 0.7, 'decay_rate': 0.05},
            'experience': {'importance_base': 0.6, 'decay_rate': 0.08},
            'fact': {'importance_base': 0.4, 'decay_rate': 0.12},
            'reflection': {'importance_base': 0.8, 'decay_rate': 0.03},
            'emotion': {'importance_base': 0.6, 'decay_rate': 0.15}
        }

        # Memory limits per type
        self.max_memories = {
            'conversation': 100,
            'relationship': 50,
            'experience': 80,
            'fact': 60,
            'reflection': 30,
            'emotion': 40
        }

        # Importance thresholds
        self.importance_thresholds = {
            'critical': 0.9,
            'important': 0.7,
            'moderate': 0.5,
            'low': 0.3
        }

    async def store_memory(self, memory_type: str, content: str,
                           importance: Optional[float] = None, tags: Optional[List[str]] = None,
                           related_character: Optional[str] = None,
                           related_message_id: Optional[int] = None) -> int:
        """Store a new memory with automatic importance scoring"""
        try:
            # Calculate importance if not provided
            if importance is None:
                importance = await self._calculate_importance(memory_type, content, tags)

            # Clamp importance to [0, 1]
            importance = max(0.0, min(1.0, importance))

            # Store in database
            memory_id = await self._store_memory_in_db(
                memory_type, content, importance, tags or [],
                related_character, related_message_id
            )

            # Clean up old memories if needed
            await self._cleanup_memories(memory_type)

            log_memory_operation(
                self.character.name,
                "stored",
                memory_type,
                importance
            )

            return memory_id

        except Exception as e:
            log_error_with_context(e, {
                "character": self.character.name,
                "memory_type": memory_type,
                "content_length": len(content)
            })
            return -1

    async def retrieve_memories(self, query: Optional[str] = None, memory_type: Optional[str] = None,
                                limit: int = 10, min_importance: float = 0.0) -> MemorySearchResult:
        """Retrieve memories based on a query and filters"""
        try:
            async with get_db_session() as session:
                # Build query
                query_builder = select(Memory).where(Memory.character_id == self.character.id)

                if memory_type:
                    query_builder = query_builder.where(Memory.memory_type == memory_type)

                if min_importance > 0:
                    query_builder = query_builder.where(Memory.importance_score >= min_importance)

                # Add text search if a query was provided
                if query:
                    query_builder = query_builder.where(
                        or_(
                            Memory.content.ilike(f'%{query}%'),
                            Memory.tags.op('?')(query)
                        )
                    )

                # Order by importance and recency
                query_builder = query_builder.order_by(
                    desc(Memory.importance_score),
                    desc(Memory.last_accessed),
                    desc(Memory.timestamp)
                ).limit(limit)

                memories = await session.scalars(query_builder)

                # Convert to dict format
                memory_list = []
                relevance_scores = []

                for memory in memories:
                    # Update access bookkeeping
                    memory.last_accessed = datetime.utcnow()
                    memory.access_count += 1

                    memory_dict = {
                        'id': memory.id,
                        'content': memory.content,
                        'type': memory.memory_type,
                        'importance': memory.importance_score,
                        'timestamp': memory.timestamp,
                        'tags': memory.tags,
                        'access_count': memory.access_count
                    }

                    # Calculate relevance score
                    relevance = await self._calculate_relevance(memory_dict, query)

                    memory_list.append(memory_dict)
                    relevance_scores.append(relevance)

                await session.commit()

                return MemorySearchResult(
                    memories=memory_list,
                    total_count=len(memory_list),
                    relevance_scores=relevance_scores
                )

        except Exception as e:
            log_error_with_context(e, {
                "character": self.character.name,
                "query": query,
                "memory_type": memory_type
            })
            return MemorySearchResult(memories=[], total_count=0, relevance_scores=[])

    async def get_contextual_memories(self, context: Dict[str, Any], limit: int = 5) -> List[Dict[str, Any]]:
        """Get memories relevant to the current context"""
        try:
            # Extract key information from the context
            topic = context.get('topic', '')
            participants = context.get('participants', [])
            conversation_type = context.get('type', '')

            # Build search terms
            search_terms = []
            if topic:
                search_terms.append(topic)
            if participants:
                search_terms.extend(participants)
            if conversation_type:
                search_terms.append(conversation_type)

            # Search for relevant memories
            all_memories = []

            # Search by content (split the limit across terms)
            for term in search_terms:
                result = await self.retrieve_memories(
                    query=term,
                    limit=limit // len(search_terms) + 1
                )
                all_memories.extend(result.memories)

            # Get relationship memories for participants
            for participant in participants:
                if participant != self.character.name:
                    result = await self.retrieve_memories(
                        memory_type='relationship',
                        query=participant,
                        limit=2
                    )
                    all_memories.extend(result.memories)

            # Remove duplicates
            unique_memories = {}
            for memory in all_memories:
                if memory['id'] not in unique_memories:
                    unique_memories[memory['id']] = memory

            # Sort by importance and recency
            sorted_memories = sorted(
                unique_memories.values(),
                key=lambda m: (m['importance'], m['timestamp']),
                reverse=True
            )

            return sorted_memories[:limit]

        except Exception as e:
            log_error_with_context(e, {
                "character": self.character.name,
                "context": context
            })
            return []

    async def consolidate_memories(self) -> Dict[str, Any]:
        """Consolidate related memories to save space and improve coherence"""
        try:
            consolidated_count = 0

            for memory_type in self.memory_types.keys():
                # Get memories of this type
                result = await self.retrieve_memories(
                    memory_type=memory_type,
                    limit=50,
                    min_importance=0.3
                )

                if len(result.memories) > 10:
                    # Group related memories
                    groups = await self._group_related_memories(result.memories)

                    # Consolidate each group
                    for group in groups:
                        if len(group) >= 3:
                            consolidated = await self._consolidate_memory_group(group)
                            if consolidated:
                                consolidated_count += len(group) - 1

            log_memory_operation(
                self.character.name,
                "consolidated",
                "multiple",
                consolidated_count
            )

            return {
                'consolidated_count': consolidated_count,
                'success': True
            }

        except Exception as e:
            log_error_with_context(e, {"character": self.character.name})
            return {'consolidated_count': 0, 'success': False}

    async def forget_memories(self, criteria: Dict[str, Any]) -> int:
        """Forget memories that match the given criteria (age, importance, etc.)"""
        try:
            forgotten_count = 0

            async with get_db_session() as session:
                # Build deletion criteria
                query_builder = select(Memory).where(Memory.character_id == self.character.id)

                # Age criteria
                if criteria.get('older_than_days'):
                    cutoff_date = datetime.utcnow() - timedelta(days=criteria['older_than_days'])
                    query_builder = query_builder.where(Memory.timestamp < cutoff_date)

                # Importance criteria
                if criteria.get('max_importance'):
                    query_builder = query_builder.where(Memory.importance_score <= criteria['max_importance'])

                # Access criteria
                if criteria.get('max_access_count'):
                    query_builder = query_builder.where(Memory.access_count <= criteria['max_access_count'])

                # Type criteria
                if criteria.get('memory_types'):
                    query_builder = query_builder.where(Memory.memory_type.in_(criteria['memory_types']))

                # Delete the matching memories
                memories_to_delete = await session.scalars(query_builder)

                for memory in memories_to_delete:
                    await session.delete(memory)
                    forgotten_count += 1

                await session.commit()

            log_memory_operation(
                self.character.name,
                "forgotten",
                str(criteria),
                forgotten_count
            )

            return forgotten_count

        except Exception as e:
            log_error_with_context(e, {
                "character": self.character.name,
                "criteria": criteria
            })
            return 0

    async def get_memory_statistics(self) -> Dict[str, Any]:
        """Get statistics about the character's memory"""
        try:
            async with get_db_session() as session:
                # Total memories
                total_count = await session.scalar(
                    select(func.count(Memory.id)).where(Memory.character_id == self.character.id)
                )

                # Memories by type
                type_counts = {}
                for memory_type in self.memory_types.keys():
                    count = await session.scalar(
                        select(func.count(Memory.id)).where(
                            and_(
                                Memory.character_id == self.character.id,
                                Memory.memory_type == memory_type
                            )
                        )
                    )
                    type_counts[memory_type] = count

                # Average importance
                avg_importance = await session.scalar(
                    select(func.avg(Memory.importance_score)).where(
                        Memory.character_id == self.character.id
                    )
                )

                # Recent activity (last 7 days)
                recent_memories = await session.scalar(
                    select(func.count(Memory.id)).where(
                        and_(
                            Memory.character_id == self.character.id,
                            Memory.timestamp >= datetime.utcnow() - timedelta(days=7)
                        )
                    )
                )

                return {
                    'total_memories': total_count,
                    'memories_by_type': type_counts,
                    'average_importance': float(avg_importance) if avg_importance else 0.0,
                    'recent_memories': recent_memories,
                    'memory_health': self._assess_memory_health(type_counts, total_count)
                }

        except Exception as e:
            log_error_with_context(e, {"character": self.character.name})
            return {}

    async def _calculate_importance(self, memory_type: str, content: str, tags: Optional[List[str]]) -> float:
        """Calculate an importance score for a memory"""
        # Base importance from the memory type
        base_importance = self.memory_types.get(memory_type, {}).get('importance_base', 0.5)

        # Content analysis
        content_score = 0.0
        content_lower = content.lower()
        content_words = content_lower.split()

        # Emotional content increases importance
        emotion_words = ['love', 'hate', 'fear', 'joy', 'anger', 'surprise', 'sad', 'happy']
        if any(word in content_lower for word in emotion_words):
            content_score += 0.2

        # Questions and decisions are important
        if '?' in content or any(word in content_lower for word in ['decide', 'choose', 'important']):
            content_score += 0.15

        # Personal references increase importance (match whole words so 'i' doesn't match every word containing it)
        if any(word in content_words for word in ['i', 'me', 'my', 'myself']):
            content_score += 0.1

        # Tag analysis
        tag_score = 0.0
        if tags:
            important_tags = ['important', 'critical', 'decision', 'emotion', 'relationship']
            if any(tag in important_tags for tag in tags):
                tag_score += 0.2

        # Combine scores and clamp to the 0-1 range
        final_score = base_importance + content_score + tag_score
        return max(0.0, min(1.0, final_score))

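The importance heuristic can be checked standalone. A minimal mirror of `_calculate_importance` as a pure function — illustrative only; the real method also looks the base score up from `self.memory_types`:

```python
from typing import List, Optional

def calculate_importance(base: float, content: str, tags: Optional[List[str]] = None) -> float:
    """Type base + content cues + tag cues, clamped to [0, 1]."""
    content_lower = content.lower()
    words = content_lower.split()
    score = base
    # Emotional content increases importance
    if any(w in content_lower for w in ['love', 'hate', 'fear', 'joy', 'anger', 'surprise', 'sad', 'happy']):
        score += 0.2
    # Questions and decisions are important
    if '?' in content or any(w in content_lower for w in ['decide', 'choose', 'important']):
        score += 0.15
    # Personal references, matched as whole words
    if any(w in words for w in ['i', 'me', 'my', 'myself']):
        score += 0.1
    # Important tags
    if tags and any(t in ['important', 'critical', 'decision', 'emotion', 'relationship'] for t in tags):
        score += 0.2
    return max(0.0, min(1.0, score))
```

For example, `calculate_importance(0.5, "I love this idea")` picks up both the emotion cue and the personal-reference cue.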
    async def _calculate_relevance(self, memory: Dict[str, Any], query: Optional[str]) -> float:
        """Calculate a relevance score for a memory given a query"""
        if not query:
            return memory['importance']

        query_lower = query.lower()
        content_lower = memory['content'].lower()

        # Exact-match bonus
        if query_lower in content_lower:
            return min(1.0, memory['importance'] + 0.3)

        # Partial match: fraction of query words found in the content
        query_words = query_lower.split()
        content_words = content_lower.split()

        matches = sum(1 for word in query_words if word in content_words)
        match_ratio = matches / len(query_words) if query_words else 0

        relevance = memory['importance'] + (match_ratio * 0.2)
        return min(1.0, relevance)

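The relevance scoring is also easy to exercise in isolation: an exact substring match earns a flat bonus, otherwise the bonus scales with word overlap. A pure-function sketch (illustrative; the method reads `importance` and `content` from the memory dict rather than taking them as arguments):

```python
from typing import Optional

def calculate_relevance(importance: float, content: str, query: Optional[str]) -> float:
    """Exact substring match beats word-overlap scoring; result capped at 1.0."""
    if not query:
        return importance
    q, c = query.lower(), content.lower()
    # Exact-match bonus
    if q in c:
        return min(1.0, importance + 0.3)
    # Partial match: fraction of query words present in the content
    q_words = q.split()
    c_words = c.split()
    matches = sum(1 for w in q_words if w in c_words)
    ratio = matches / len(q_words) if q_words else 0
    return min(1.0, importance + ratio * 0.2)
```

Note the asymmetry: a two-word query whose words both appear, but not adjacently, earns at most +0.2, while the same words appearing as a contiguous phrase earn +0.3.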
    async def _store_memory_in_db(self, memory_type: str, content: str, importance: float,
                                  tags: List[str], related_character: str = None,
                                  related_message_id: int = None) -> int:
        """Store a memory in the database"""
        async with get_db_session() as session:
            # Resolve the related character ID if a name was provided
            related_character_id = None
            if related_character:
                char_query = select(Character).where(Character.name == related_character)
                related_char = await session.scalar(char_query)
                if related_char:
                    related_character_id = related_char.id

            memory = Memory(
                character_id=self.character.id,
                memory_type=memory_type,
                content=content,
                importance_score=importance,
                tags=tags,
                related_character_id=related_character_id,
                related_message_id=related_message_id,
                timestamp=datetime.utcnow(),
                last_accessed=datetime.utcnow(),
                access_count=0
            )

            session.add(memory)
            await session.commit()
            return memory.id

    async def _cleanup_memories(self, memory_type: str):
        """Clean up old memories to stay within the per-type limits"""
        max_count = self.max_memories.get(memory_type, 100)

        async with get_db_session() as session:
            # Count current memories of this type
            count = await session.scalar(
                select(func.count(Memory.id)).where(
                    and_(
                        Memory.character_id == self.character.id,
                        Memory.memory_type == memory_type
                    )
                )
            )

            if count > max_count:
                # Delete the oldest, least important memories
                excess = count - max_count

                # Get memories to delete (lowest importance, oldest first)
                memories_to_delete = await session.scalars(
                    select(Memory).where(
                        and_(
                            Memory.character_id == self.character.id,
                            Memory.memory_type == memory_type
                        )
                    ).order_by(
                        Memory.importance_score.asc(),
                        Memory.timestamp.asc()
                    ).limit(excess)
                )

                for memory in memories_to_delete:
                    await session.delete(memory)

                await session.commit()

    async def _group_related_memories(self, memories: List[Dict[str, Any]]) -> List[List[Dict[str, Any]]]:
        """Group related memories for consolidation"""
        groups = []
        used_indices = set()

        for i, memory in enumerate(memories):
            if i in used_indices:
                continue

            group = [memory]
            used_indices.add(i)

            # Find related memories
            for j, other_memory in enumerate(memories[i+1:], i+1):
                if j in used_indices:
                    continue

                if self._are_memories_related(memory, other_memory):
                    group.append(other_memory)
                    used_indices.add(j)

            groups.append(group)

        return groups

    def _are_memories_related(self, memory1: Dict[str, Any], memory2: Dict[str, Any]) -> bool:
        """Check whether two memories are related"""
        # Must be the same type
        if memory1['type'] != memory2['type']:
            return False

        # Overlapping tags
        tags1 = set(memory1.get('tags', []))
        tags2 = set(memory2.get('tags', []))
        if tags1 & tags2:
            return True

        # Similar content (simple word overlap)
        words1 = set(memory1['content'].lower().split())
        words2 = set(memory2['content'].lower().split())
        overlap = len(words1 & words2)

        return overlap >= 3

    async def _consolidate_memory_group(self, group: List[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
        """Consolidate a group of related memories"""
        if len(group) < 2:
            return None

        # Create the consolidated memory
        consolidated_content = self._merge_memory_contents([m['content'] for m in group])
        consolidated_tags = list(set(tag for m in group for tag in m.get('tags', [])))
|
||||
avg_importance = sum(m['importance'] for m in group) / len(group)
|
||||
|
||||
# Store consolidated memory
|
||||
consolidated_id = await self._store_memory_in_db(
|
||||
memory_type=group[0]['type'],
|
||||
content=consolidated_content,
|
||||
importance=avg_importance,
|
||||
tags=consolidated_tags
|
||||
)
|
||||
|
||||
# Delete original memories
|
||||
async with get_db_session() as session:
|
||||
for memory in group:
|
||||
old_memory = await session.get(Memory, memory['id'])
|
||||
if old_memory:
|
||||
await session.delete(old_memory)
|
||||
await session.commit()
|
||||
|
||||
return {'id': consolidated_id, 'consolidated_from': len(group)}
|
||||
|
||||
def _merge_memory_contents(self, contents: List[str]) -> str:
|
||||
"""Merge multiple memory contents into one"""
|
||||
# Simple concatenation with summary
|
||||
if len(contents) == 1:
|
||||
return contents[0]
|
||||
|
||||
merged = f"Consolidated from {len(contents)} memories: "
|
||||
merged += " | ".join(contents[:3]) # Limit to first 3
|
||||
|
||||
if len(contents) > 3:
|
||||
merged += f" | ... and {len(contents) - 3} more"
|
||||
|
||||
return merged
|
||||
|
||||
def _assess_memory_health(self, type_counts: Dict[str, int], total_count: int) -> str:
|
||||
"""Assess the health of the character's memory system"""
|
||||
if total_count == 0:
|
||||
return "empty"
|
||||
|
||||
# Check balance
|
||||
max_count = max(type_counts.values()) if type_counts else 0
|
||||
if max_count > total_count * 0.7:
|
||||
return "imbalanced"
|
||||
|
||||
# Check if near limits
|
||||
near_limit = any(
|
||||
count > self.max_memories.get(mem_type, 100) * 0.9
|
||||
for mem_type, count in type_counts.items()
|
||||
)
|
||||
|
||||
if near_limit:
|
||||
return "near_capacity"
|
||||
|
||||
return "healthy"
|
||||
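The consolidation pass above hinges on `_are_memories_related`, a cheap heuristic: same type, plus either a shared tag or at least three shared words. A minimal standalone sketch of that heuristic and the greedy grouping loop (the dict shapes are assumptions mirroring the memory records above, not the real ORM rows):

```python
from typing import Any, Dict, List

def are_memories_related(m1: Dict[str, Any], m2: Dict[str, Any]) -> bool:
    # Memories of different types never merge
    if m1['type'] != m2['type']:
        return False
    # Any shared tag is enough
    if set(m1.get('tags', [])) & set(m2.get('tags', [])):
        return True
    # Otherwise require at least 3 shared words
    words1 = set(m1['content'].lower().split())
    words2 = set(m2['content'].lower().split())
    return len(words1 & words2) >= 3

def group_related(memories: List[Dict[str, Any]]) -> List[List[Dict[str, Any]]]:
    # Greedy single-pass grouping: each memory anchors at most one group
    groups: List[List[Dict[str, Any]]] = []
    used = set()
    for i, mem in enumerate(memories):
        if i in used:
            continue
        group = [mem]
        used.add(i)
        for j in range(i + 1, len(memories)):
            if j not in used and are_memories_related(mem, memories[j]):
                group.append(memories[j])
                used.add(j)
        groups.append(group)
    return groups

mems = [
    {'type': 'event', 'tags': ['party'], 'content': 'went to the party'},
    {'type': 'event', 'tags': ['party'], 'content': 'danced a lot'},
    {'type': 'fact',  'tags': [],        'content': 'the sky is blue'},
]
print([len(g) for g in group_related(mems)])  # → [2, 1]
```

Note the grouping is not transitive: members join a group because they relate to the anchor, not necessarily to each other, which keeps the pass O(n²) worst case but simple.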
384
src/characters/personality.py
Normal file
@@ -0,0 +1,384 @@
import json
import random
from typing import Dict, Any, List, Optional, Tuple
from datetime import datetime
from ..utils.logging import log_character_action, log_error_with_context
from ..database.connection import get_db_session
from ..database.models import CharacterEvolution, Character as CharacterModel
from sqlalchemy import select

class PersonalityManager:
    """Manages character personality evolution and traits"""

    def __init__(self, character):
        self.character = character
        self.evolution_thresholds = {
            'minor_change': 0.3,
            'moderate_change': 0.6,
            'major_change': 0.9
        }

        # Personality dimensions that can evolve
        self.personality_dimensions = {
            'extraversion': ['introverted', 'extraverted'],
            'agreeableness': ['competitive', 'cooperative'],
            'conscientiousness': ['spontaneous', 'disciplined'],
            'neuroticism': ['calm', 'anxious'],
            'openness': ['traditional', 'innovative']
        }

        # Trait keywords for analysis
        self.trait_keywords = {
            'friendly': ['friendly', 'kind', 'warm', 'welcoming'],
            'analytical': ['analytical', 'logical', 'rational', 'systematic'],
            'creative': ['creative', 'artistic', 'imaginative', 'innovative'],
            'curious': ['curious', 'inquisitive', 'questioning', 'exploring'],
            'confident': ['confident', 'assertive', 'bold', 'decisive'],
            'empathetic': ['empathetic', 'compassionate', 'understanding', 'caring'],
            'humorous': ['humorous', 'witty', 'funny', 'playful'],
            'serious': ['serious', 'focused', 'intense', 'thoughtful'],
            'optimistic': ['optimistic', 'positive', 'hopeful', 'cheerful'],
            'skeptical': ['skeptical', 'critical', 'questioning', 'doubtful']
        }

    async def analyze_personality_evolution(self, reflection: str, recent_interactions: List[Dict]) -> Dict[str, Any]:
        """Analyze if personality should evolve based on reflection and interactions"""
        try:
            # Analyze reflection for personality indicators
            reflection_analysis = self._analyze_reflection_text(reflection)

            # Analyze interaction patterns
            interaction_patterns = self._analyze_interaction_patterns(recent_interactions)

            # Determine if evolution is needed
            evolution_score = self._calculate_evolution_score(reflection_analysis, interaction_patterns)

            changes = {
                'should_evolve': evolution_score > self.evolution_thresholds['minor_change'],
                'evolution_score': evolution_score,
                'reflection_analysis': reflection_analysis,
                'interaction_patterns': interaction_patterns,
                'proposed_changes': []
            }

            if changes['should_evolve']:
                changes['proposed_changes'] = await self._generate_personality_changes(
                    evolution_score, reflection_analysis, interaction_patterns
                )

            return changes

        except Exception as e:
            log_error_with_context(e, {"character": self.character.name})
            return {'should_evolve': False}

    async def apply_personality_evolution(self, changes: Dict[str, Any]):
        """Apply approved personality changes"""
        try:
            if not changes.get('should_evolve') or not changes.get('proposed_changes'):
                return

            old_personality = self.character.personality
            new_personality = await self._modify_personality(changes['proposed_changes'])

            if new_personality != old_personality:
                # Update character
                await self._update_character_personality(new_personality)

                # Log evolution
                await self._log_personality_evolution(
                    old_personality,
                    new_personality,
                    changes['evolution_score'],
                    changes.get('reason', 'Personality evolution through interaction')
                )

                log_character_action(
                    self.character.name,
                    "personality_evolved",
                    {
                        "evolution_score": changes['evolution_score'],
                        "changes": changes['proposed_changes'],
                        "old_length": len(old_personality),
                        "new_length": len(new_personality)
                    }
                )

        except Exception as e:
            log_error_with_context(e, {"character": self.character.name})

    async def generate_adaptive_traits(self, context: Dict[str, Any]) -> List[str]:
        """Generate adaptive personality traits based on context"""
        try:
            # Analyze context for needed traits
            context_analysis = self._analyze_context_needs(context)

            # Generate complementary traits
            adaptive_traits = []

            for need in context_analysis.get('needs', []):
                if need == 'leadership':
                    adaptive_traits.extend(['assertive', 'decisive', 'inspiring'])
                elif need == 'creativity':
                    adaptive_traits.extend(['imaginative', 'innovative', 'artistic'])
                elif need == 'analysis':
                    adaptive_traits.extend(['analytical', 'logical', 'systematic'])
                elif need == 'social':
                    adaptive_traits.extend(['friendly', 'empathetic', 'cooperative'])
                elif need == 'independence':
                    adaptive_traits.extend(['independent', 'self-reliant', 'confident'])

            # Remove duplicates and limit
            adaptive_traits = list(set(adaptive_traits))[:3]

            return adaptive_traits

        except Exception as e:
            log_error_with_context(e, {"character": self.character.name, "context": context})
            return []

    def _analyze_reflection_text(self, reflection: str) -> Dict[str, Any]:
        """Analyze reflection text for personality indicators"""
        analysis = {
            'dominant_traits': [],
            'emotional_tone': 'neutral',
            'growth_areas': [],
            'relationship_focus': False,
            'self_awareness_level': 'moderate'
        }

        reflection_lower = reflection.lower()

        # Identify dominant traits
        for trait, keywords in self.trait_keywords.items():
            if any(keyword in reflection_lower for keyword in keywords):
                analysis['dominant_traits'].append(trait)

        # Analyze emotional tone
        positive_emotions = ['happy', 'excited', 'confident', 'optimistic', 'grateful']
        negative_emotions = ['sad', 'anxious', 'frustrated', 'disappointed', 'confused']

        if any(emotion in reflection_lower for emotion in positive_emotions):
            analysis['emotional_tone'] = 'positive'
        elif any(emotion in reflection_lower for emotion in negative_emotions):
            analysis['emotional_tone'] = 'negative'

        # Check for growth indicators
        growth_keywords = ['learn', 'grow', 'improve', 'develop', 'change', 'evolve']
        if any(keyword in reflection_lower for keyword in growth_keywords):
            analysis['growth_areas'].append('self_improvement')

        # Check relationship focus
        relationship_keywords = ['friend', 'relationship', 'connect', 'bond', 'trust']
        if any(keyword in reflection_lower for keyword in relationship_keywords):
            analysis['relationship_focus'] = True

        # Assess self-awareness
        awareness_keywords = ['realize', 'understand', 'recognize', 'notice', 'aware']
        if any(keyword in reflection_lower for keyword in awareness_keywords):
            analysis['self_awareness_level'] = 'high'

        return analysis

    def _analyze_interaction_patterns(self, interactions: List[Dict]) -> Dict[str, Any]:
        """Analyze patterns in recent interactions"""
        patterns = {
            'interaction_frequency': len(interactions),
            'conversation_styles': [],
            'topic_preferences': [],
            'social_tendencies': 'balanced'
        }

        if not interactions:
            return patterns

        # Analyze conversation styles
        question_count = sum(1 for i in interactions if '?' in i.get('content', ''))
        statement_count = len(interactions) - question_count

        if question_count > statement_count:
            patterns['conversation_styles'].append('inquisitive')
        else:
            patterns['conversation_styles'].append('declarative')

        # Analyze social tendencies
        if len(interactions) > 10:
            patterns['social_tendencies'] = 'extraverted'
        elif len(interactions) < 3:
            patterns['social_tendencies'] = 'introverted'

        return patterns

    def _calculate_evolution_score(self, reflection_analysis: Dict, interaction_patterns: Dict) -> float:
        """Calculate evolution score based on analysis"""
        score = 0.0

        # Base score from self-awareness
        if reflection_analysis.get('self_awareness_level') == 'high':
            score += 0.3
        elif reflection_analysis.get('self_awareness_level') == 'moderate':
            score += 0.2

        # Growth focus bonus
        if reflection_analysis.get('growth_areas'):
            score += 0.2

        # Emotional intensity
        if reflection_analysis.get('emotional_tone') != 'neutral':
            score += 0.1

        # Interaction frequency influence
        interaction_freq = interaction_patterns.get('interaction_frequency', 0)
        if interaction_freq > 15:
            score += 0.2
        elif interaction_freq > 8:
            score += 0.1

        # Cap at 1.0
        return min(score, 1.0)

    async def _generate_personality_changes(self, evolution_score: float,
                                            reflection_analysis: Dict,
                                            interaction_patterns: Dict) -> List[Dict[str, Any]]:
        """Generate specific personality changes"""
        changes = []

        # Determine change magnitude
        if evolution_score > self.evolution_thresholds['major_change']:
            change_magnitude = 'major'
        elif evolution_score > self.evolution_thresholds['moderate_change']:
            change_magnitude = 'moderate'
        else:
            change_magnitude = 'minor'

        # Generate trait adjustments
        dominant_traits = reflection_analysis.get('dominant_traits', [])

        if 'creative' in dominant_traits:
            changes.append({
                'type': 'trait_enhancement',
                'trait': 'creativity',
                'magnitude': change_magnitude,
                'reason': 'Increased creative expression in interactions'
            })

        if 'analytical' in dominant_traits:
            changes.append({
                'type': 'trait_enhancement',
                'trait': 'analytical_thinking',
                'magnitude': change_magnitude,
                'reason': 'Enhanced analytical approach in conversations'
            })

        # Social tendency adjustments
        social_tendency = interaction_patterns.get('social_tendencies', 'balanced')
        if social_tendency == 'extraverted' and 'friendly' not in dominant_traits:
            changes.append({
                'type': 'trait_addition',
                'trait': 'social_engagement',
                'magnitude': change_magnitude,
                'reason': 'Increased social activity and engagement'
            })

        return changes

    async def _modify_personality(self, changes: List[Dict[str, Any]]) -> str:
        """Modify personality description based on changes"""
        current_personality = self.character.personality

        # For this implementation, we'll add/modify traits in the personality description
        # In a more sophisticated version, this could use an LLM to rewrite the personality

        additions = []
        for change in changes:
            if change['type'] == 'trait_enhancement':
                additions.append(f"Shows enhanced {change['trait'].replace('_', ' ')}")
            elif change['type'] == 'trait_addition':
                additions.append(f"Demonstrates {change['trait'].replace('_', ' ')}")

        if additions:
            new_personality = current_personality + ". " + ". ".join(additions) + "."
        else:
            new_personality = current_personality

        return new_personality

    async def _update_character_personality(self, new_personality: str):
        """Update character personality in database"""
        try:
            async with get_db_session() as session:
                # Update character
                character = await session.get(CharacterModel, self.character.id)
                if character:
                    character.personality = new_personality
                    await session.commit()

            # Update local character
            self.character.personality = new_personality

        except Exception as e:
            log_error_with_context(e, {"character": self.character.name})

    async def _log_personality_evolution(self, old_personality: str, new_personality: str,
                                         evolution_score: float, reason: str):
        """Log personality evolution to database"""
        try:
            async with get_db_session() as session:
                evolution = CharacterEvolution(
                    character_id=self.character.id,
                    change_type='personality',
                    old_value=old_personality,
                    new_value=new_personality,
                    reason=f"Evolution score: {evolution_score:.2f}. {reason}",
                    timestamp=datetime.utcnow()
                )

                session.add(evolution)
                await session.commit()

        except Exception as e:
            log_error_with_context(e, {"character": self.character.name})

    def _analyze_context_needs(self, context: Dict[str, Any]) -> Dict[str, Any]:
        """Analyze context to determine needed personality traits"""
        needs = []

        # Analyze based on conversation type
        conv_type = context.get('type', '')
        if conv_type == 'debate':
            needs.append('analysis')
        elif conv_type == 'creative':
            needs.append('creativity')
        elif conv_type == 'support':
            needs.append('social')

        # Analyze based on topic
        topic = context.get('topic', '').lower()
        if any(word in topic for word in ['art', 'music', 'creative', 'design']):
            needs.append('creativity')
        elif any(word in topic for word in ['problem', 'solution', 'analysis']):
            needs.append('analysis')
        elif any(word in topic for word in ['lead', 'manage', 'organize']):
            needs.append('leadership')

        return {'needs': needs}

    def get_personality_summary(self) -> Dict[str, Any]:
        """Get summary of current personality state"""
        return {
            'current_personality': self.character.personality,
            'dominant_traits': self._extract_current_traits(),
            'evolution_capability': True,
            'last_evolution': None  # Could be populated from database
        }

    def _extract_current_traits(self) -> List[str]:
        """Extract current personality traits from description"""
        traits = []
        personality_lower = self.character.personality.lower()

        for trait, keywords in self.trait_keywords.items():
            if any(keyword in personality_lower for keyword in keywords):
                traits.append(trait)

        return traits
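`_calculate_evolution_score` in `PersonalityManager` is a simple additive scorer, so its thresholds are easy to exercise in isolation. A minimal sketch with the same weights (0.3/0.2 for self-awareness, 0.2 for growth focus, 0.1 for non-neutral tone, 0.1–0.2 for interaction frequency); the free-function form and parameter names here are illustrative, not the class's real signature:

```python
def evolution_score(awareness: str, has_growth: bool, tone: str, freq: int) -> float:
    # Mirrors the additive weighting in _calculate_evolution_score
    score = 0.0
    if awareness == 'high':
        score += 0.3
    elif awareness == 'moderate':
        score += 0.2
    if has_growth:
        score += 0.2
    if tone != 'neutral':
        score += 0.1
    if freq > 15:
        score += 0.2
    elif freq > 8:
        score += 0.1
    return min(score, 1.0)  # capped at 1.0

print(round(evolution_score('high', True, 'positive', 20), 2))  # → 0.8
```

With the class's thresholds (minor 0.3, moderate 0.6, major 0.9), the 0.8 above would trigger a 'moderate' change, while a quiet introspective cycle ('moderate' awareness only) lands at 0.2 and triggers nothing.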
0
src/conversation/__init__.py
Normal file
829
src/conversation/engine.py
Normal file
@@ -0,0 +1,829 @@
import asyncio
import random
import json
from typing import Dict, Any, List, Optional, Set, Tuple
from datetime import datetime, timedelta
from dataclasses import dataclass, asdict
from enum import Enum
import logging

from ..database.connection import get_db_session
from ..database.models import Character as CharacterModel, Conversation, Message, Memory
from ..characters.character import Character
from ..llm.client import llm_client, prompt_manager
from ..llm.prompt_manager import advanced_prompt_manager
from ..utils.config import get_settings, get_character_settings
from ..utils.logging import (log_conversation_event, log_character_action,
                             log_autonomous_decision, log_error_with_context)
from sqlalchemy import select, and_, or_, func, desc

logger = logging.getLogger(__name__)

class ConversationState(Enum):
    IDLE = "idle"
    STARTING = "starting"
    ACTIVE = "active"
    WINDING_DOWN = "winding_down"
    PAUSED = "paused"
    STOPPED = "stopped"

@dataclass
class ConversationContext:
    conversation_id: Optional[int] = None
    topic: str = ""
    participants: List[str] = None
    message_count: int = 0
    start_time: datetime = None
    last_activity: datetime = None
    current_speaker: Optional[str] = None
    conversation_type: str = "general"
    energy_level: float = 1.0

    def __post_init__(self):
        if self.participants is None:
            self.participants = []
        if self.start_time is None:
            self.start_time = datetime.utcnow()
        if self.last_activity is None:
            self.last_activity = datetime.utcnow()

class ConversationEngine:
    """Autonomous conversation engine that manages character interactions"""

    def __init__(self):
        self.settings = get_settings()
        self.character_settings = get_character_settings()

        # Engine state
        self.state = ConversationState.IDLE
        self.characters: Dict[str, Character] = {}
        self.active_conversations: Dict[int, ConversationContext] = {}
        self.discord_bot = None

        # Scheduling
        self.scheduler_task = None
        self.conversation_task = None
        self.is_paused = False

        # Configuration
        self.min_delay = self.settings.conversation.min_delay_seconds
        self.max_delay = self.settings.conversation.max_delay_seconds
        self.max_conversation_length = self.settings.conversation.max_conversation_length
        self.quiet_hours = (
            self.settings.conversation.quiet_hours_start,
            self.settings.conversation.quiet_hours_end
        )

        # Conversation topics
        self.available_topics = self.character_settings.conversation_topics

        # Statistics
        self.stats = {
            'conversations_started': 0,
            'messages_generated': 0,
            'characters_active': 0,
            'uptime_start': datetime.utcnow(),
            'last_activity': datetime.utcnow()
        }

    async def initialize(self, discord_bot):
        """Initialize the conversation engine"""
        try:
            self.discord_bot = discord_bot

            # Load characters from database
            await self._load_characters()

            # Start scheduler
            self.scheduler_task = asyncio.create_task(self._scheduler_loop())

            # Start main conversation loop
            self.conversation_task = asyncio.create_task(self._conversation_loop())

            self.state = ConversationState.IDLE

            log_conversation_event(
                0, "engine_initialized",
                list(self.characters.keys()),
                {"character_count": len(self.characters)}
            )

        except Exception as e:
            log_error_with_context(e, {"component": "conversation_engine_init"})
            raise

    async def start_conversation(self, topic: str = None,
                                 forced_participants: List[str] = None) -> Optional[int]:
        """Start a new conversation"""
        try:
            if self.is_paused or self.state == ConversationState.STOPPED:
                return None

            # Check if it's quiet hours
            if self._is_quiet_hours():
                return None

            # Select topic
            if not topic:
                topic = await self._select_conversation_topic()

            # Select participants
            participants = forced_participants or await self._select_participants(topic)

            if len(participants) < 2:
                logger.warning("Not enough participants for conversation")
                return None

            # Create conversation in database
            conversation_id = await self._create_conversation(topic, participants)

            # Create conversation context
            context = ConversationContext(
                conversation_id=conversation_id,
                topic=topic,
                participants=participants,
                conversation_type=await self._determine_conversation_type(topic)
            )

            self.active_conversations[conversation_id] = context

            # Choose initial speaker
            initial_speaker = await self._choose_initial_speaker(participants, topic)

            # Generate opening message
            opening_message = await self._generate_opening_message(initial_speaker, topic, context)

            if opening_message:
                # Send message via Discord bot
                await self.discord_bot.send_character_message(
                    initial_speaker, opening_message, conversation_id
                )

                # Update context
                context.current_speaker = initial_speaker
                context.message_count = 1
                context.last_activity = datetime.utcnow()

                # Store message in database
                await self._store_conversation_message(
                    conversation_id, initial_speaker, opening_message
                )

                # Update statistics
                self.stats['conversations_started'] += 1
                self.stats['messages_generated'] += 1
                self.stats['last_activity'] = datetime.utcnow()

                log_conversation_event(
                    conversation_id, "conversation_started",
                    participants,
                    {"topic": topic, "initial_speaker": initial_speaker}
                )

                return conversation_id

            return None

        except Exception as e:
            log_error_with_context(e, {
                "topic": topic,
                "participants": forced_participants
            })
            return None

    async def continue_conversation(self, conversation_id: int) -> bool:
        """Continue an active conversation"""
        try:
            if conversation_id not in self.active_conversations:
                return False

            context = self.active_conversations[conversation_id]

            # Check if conversation should continue
            if not await self._should_continue_conversation(context):
                await self._end_conversation(conversation_id)
                return False

            # Choose next speaker
            next_speaker = await self._choose_next_speaker(context)

            if not next_speaker:
                await self._end_conversation(conversation_id)
                return False

            # Generate response
            response = await self._generate_response(next_speaker, context)

            if response:
                # Send message
                await self.discord_bot.send_character_message(
                    next_speaker, response, conversation_id
                )

                # Update context
                context.current_speaker = next_speaker
                context.message_count += 1
                context.last_activity = datetime.utcnow()

                # Store message
                await self._store_conversation_message(
                    conversation_id, next_speaker, response
                )

                # Update character relationships
                await self._update_character_relationships(context, next_speaker, response)

                # Store memories
                await self._store_conversation_memories(context, next_speaker, response)

                # Update statistics
                self.stats['messages_generated'] += 1
                self.stats['last_activity'] = datetime.utcnow()

                log_conversation_event(
                    conversation_id, "message_sent",
                    [next_speaker],
                    {"message_length": len(response), "total_messages": context.message_count}
                )

                return True

            return False

        except Exception as e:
            log_error_with_context(e, {"conversation_id": conversation_id})
            return False

    async def handle_external_mention(self, message_content: str,
                                      mentioned_characters: List[str], author: str):
        """Handle external mentions of characters"""
        try:
            for character_name in mentioned_characters:
                if character_name in self.characters:
                    character = self.characters[character_name]

                    # Decide if character should respond
                    context = {
                        'type': 'external_mention',
                        'content': message_content,
                        'author': author,
                        'participants': [character_name]
                    }

                    should_respond, reason = await character.should_respond(context)

                    if should_respond:
                        # Generate response
                        response = await character.generate_response(context)

                        if response:
                            await self.discord_bot.send_character_message(
                                character_name, response
                            )

                            log_character_action(
                                character_name, "responded_to_mention",
                                {"author": author, "response_length": len(response)}
                            )

        except Exception as e:
            log_error_with_context(e, {
                "mentioned_characters": mentioned_characters,
                "author": author
            })

    async def handle_external_engagement(self, message_content: str, author: str):
        """Handle external user trying to engage characters"""
        try:
            # Randomly select a character to respond
            if self.characters:
                responding_character = random.choice(list(self.characters.values()))

                context = {
                    'type': 'external_engagement',
                    'content': message_content,
                    'author': author,
                    'participants': [responding_character.name]
                }

                should_respond, reason = await responding_character.should_respond(context)

                if should_respond:
                    response = await responding_character.generate_response(context)

                    if response:
                        await self.discord_bot.send_character_message(
                            responding_character.name, response
                        )

                        # Possibly start a conversation with other characters
                        if random.random() < 0.4:  # 40% chance
                            await self.start_conversation(
                                topic=f"Discussion prompted by: {message_content[:50]}..."
                            )

        except Exception as e:
            log_error_with_context(e, {"author": author})

    async def trigger_conversation(self, topic: str = None):
        """Manually trigger a conversation"""
        try:
            conversation_id = await self.start_conversation(topic)
            if conversation_id:
                log_conversation_event(
                    conversation_id, "manually_triggered",
                    self.active_conversations[conversation_id].participants,
                    {"topic": topic}
                )
                return conversation_id
            return None

        except Exception as e:
            log_error_with_context(e, {"topic": topic})
            return None

    async def pause(self):
        """Pause the conversation engine"""
        self.is_paused = True
        self.state = ConversationState.PAUSED
        logger.info("Conversation engine paused")

    async def resume(self):
        """Resume the conversation engine"""
        self.is_paused = False
        self.state = ConversationState.IDLE
        logger.info("Conversation engine resumed")

    async def stop(self):
        """Stop the conversation engine"""
        self.state = ConversationState.STOPPED

        # Cancel tasks
        if self.scheduler_task:
            self.scheduler_task.cancel()
        if self.conversation_task:
            self.conversation_task.cancel()

        # End all active conversations
        for conversation_id in list(self.active_conversations.keys()):
            await self._end_conversation(conversation_id)

        logger.info("Conversation engine stopped")

    async def get_status(self) -> Dict[str, Any]:
        """Get engine status"""
        uptime = datetime.utcnow() - self.stats['uptime_start']

        return {
            'status': self.state.value,
            'is_paused': self.is_paused,
            'active_conversations': len(self.active_conversations),
            'loaded_characters': len(self.characters),
            'uptime': str(uptime),
            'stats': self.stats.copy(),
            'next_conversation_in': await self._time_until_next_conversation()
        }

    async def _load_characters(self):
        """Load characters from database"""
        try:
            async with get_db_session() as session:
                query = select(CharacterModel).where(CharacterModel.is_active == True)
                character_models = await session.scalars(query)

                for char_model in character_models:
                    character = Character(char_model)
                    await character.initialize(llm_client)
                    self.characters[character.name] = character

            self.stats['characters_active'] = len(self.characters)

            logger.info(f"Loaded {len(self.characters)} characters")

        except Exception as e:
            log_error_with_context(e, {"component": "character_loading"})
            raise

    async def _scheduler_loop(self):
        """Main scheduler loop for autonomous conversations"""
        try:
            while self.state != ConversationState.STOPPED:
                if not self.is_paused and self.state == ConversationState.IDLE:
                    # Check if we should start a conversation
                    if await self._should_start_conversation():
                        await self.start_conversation()

                # Check for conversation continuations
                for conversation_id in list(self.active_conversations.keys()):
                    if await self._should_continue_conversation_now(conversation_id):
                        await self.continue_conversation(conversation_id)

                # Random delay between checks
                delay = random.uniform(self.min_delay, self.max_delay)
                await asyncio.sleep(delay)

        except asyncio.CancelledError:
            logger.info("Scheduler loop cancelled")
        except Exception as e:
            log_error_with_context(e, {"component": "scheduler_loop"})

    async def _conversation_loop(self):
        """Main conversation management loop"""
        try:
            while self.state != ConversationState.STOPPED:
                # Periodic character self-reflection
                if random.random() < 0.1:  # 10% chance per cycle
                    await self._trigger_character_reflection()

                # Cleanup old conversations
                await self._cleanup_old_conversations()

                # Wait before next cycle
                await asyncio.sleep(60)  # Check every minute
|
||||
|
||||
except asyncio.CancelledError:
|
||||
logger.info("Conversation loop cancelled")
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"component": "conversation_loop"})
|
||||
|
||||
async def _should_start_conversation(self) -> bool:
|
||||
"""Determine if a new conversation should start"""
|
||||
# Don't start if too many active conversations
|
||||
if len(self.active_conversations) >= 2:
|
||||
return False
|
||||
|
||||
# Don't start during quiet hours
|
||||
if self._is_quiet_hours():
|
||||
return False
|
||||
|
||||
# Random chance based on activity level
|
||||
base_chance = 0.3
|
||||
|
||||
# Increase chance if no recent activity
|
||||
time_since_last = datetime.utcnow() - self.stats['last_activity']
|
||||
if time_since_last > timedelta(hours=2):
|
||||
base_chance += 0.4
|
||||
elif time_since_last > timedelta(hours=1):
|
||||
base_chance += 0.2
|
||||
|
||||
return random.random() < base_chance
|
||||
|
||||
async def _should_continue_conversation(self, context: ConversationContext) -> bool:
|
||||
"""Determine if conversation should continue"""
|
||||
# Check message limit
|
||||
if context.message_count >= self.max_conversation_length:
|
||||
return False
|
||||
|
||||
# Check time limit (conversations shouldn't go on forever)
|
||||
duration = datetime.utcnow() - context.start_time
|
||||
if duration > timedelta(hours=2):
|
||||
return False
|
||||
|
||||
# Check if it's quiet hours
|
||||
if self._is_quiet_hours():
|
||||
return False
|
||||
|
||||
# Check energy level
|
||||
if context.energy_level < 0.2:
|
||||
return False
|
||||
|
||||
# Random natural ending chance
|
||||
if context.message_count > 10 and random.random() < 0.1:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
async def _should_continue_conversation_now(self, conversation_id: int) -> bool:
|
||||
"""Check if conversation should continue right now"""
|
||||
if conversation_id not in self.active_conversations:
|
||||
return False
|
||||
|
||||
context = self.active_conversations[conversation_id]
|
||||
|
||||
# Check time since last message
|
||||
time_since_last = datetime.utcnow() - context.last_activity
|
||||
min_wait = timedelta(seconds=random.uniform(30, 120))
|
||||
|
||||
return time_since_last >= min_wait
|
||||
|
||||
async def _select_conversation_topic(self) -> str:
|
||||
"""Select a topic for conversation"""
|
||||
return random.choice(self.available_topics)
|
||||
|
||||
async def _select_participants(self, topic: str) -> List[str]:
|
||||
"""Select participants for a conversation"""
|
||||
interested_characters = []
|
||||
|
||||
# Find characters interested in the topic
|
||||
for character in self.characters.values():
|
||||
if await character._is_interested_in_topic(topic):
|
||||
interested_characters.append(character.name)
|
||||
|
||||
# If not enough interested characters, add random ones
|
||||
if len(interested_characters) < 2:
|
||||
all_characters = list(self.characters.keys())
|
||||
random.shuffle(all_characters)
|
||||
|
||||
for char_name in all_characters:
|
||||
if char_name not in interested_characters:
|
||||
interested_characters.append(char_name)
|
||||
if len(interested_characters) >= 3:
|
||||
break
|
||||
|
||||
# Select 2-3 participants
|
||||
num_participants = min(random.randint(2, 3), len(interested_characters))
|
||||
return random.sample(interested_characters, num_participants)
|
||||
|
||||
def _is_quiet_hours(self) -> bool:
|
||||
"""Check if it's currently quiet hours"""
|
||||
current_hour = datetime.now().hour
|
||||
start_hour, end_hour = self.quiet_hours
|
||||
|
||||
if start_hour <= end_hour:
|
||||
return start_hour <= current_hour <= end_hour
|
||||
else: # Spans midnight
|
||||
return current_hour >= start_hour or current_hour <= end_hour
|
||||
|
||||
async def _time_until_next_conversation(self) -> str:
|
||||
"""Calculate time until next conversation attempt"""
|
||||
if self.is_paused or self._is_quiet_hours():
|
||||
return "Paused or quiet hours"
|
||||
|
||||
# This is a simple estimate
|
||||
next_check = random.uniform(self.min_delay, self.max_delay)
|
||||
return f"{int(next_check)} seconds"
|
||||
|
||||
async def _create_conversation(self, topic: str, participants: List[str]) -> int:
|
||||
"""Create conversation in database"""
|
||||
try:
|
||||
async with get_db_session() as session:
|
||||
conversation = Conversation(
|
||||
channel_id=str(self.discord_bot.channel_id),
|
||||
topic=topic,
|
||||
participants=participants,
|
||||
start_time=datetime.utcnow(),
|
||||
last_activity=datetime.utcnow(),
|
||||
is_active=True,
|
||||
message_count=0
|
||||
)
|
||||
|
||||
session.add(conversation)
|
||||
await session.commit()
|
||||
return conversation.id
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"topic": topic, "participants": participants})
|
||||
raise
|
||||
|
||||
async def _determine_conversation_type(self, topic: str) -> str:
|
||||
"""Determine conversation type based on topic"""
|
||||
topic_lower = topic.lower()
|
||||
|
||||
if any(word in topic_lower for word in ['art', 'music', 'creative', 'design']):
|
||||
return 'creative'
|
||||
elif any(word in topic_lower for word in ['problem', 'solve', 'analyze', 'think']):
|
||||
return 'analytical'
|
||||
elif any(word in topic_lower for word in ['feel', 'emotion', 'experience', 'personal']):
|
||||
return 'emotional'
|
||||
elif any(word in topic_lower for word in ['philosophy', 'meaning', 'existence', 'consciousness']):
|
||||
return 'philosophical'
|
||||
else:
|
||||
return 'general'
|
||||
|
||||
async def _choose_initial_speaker(self, participants: List[str], topic: str) -> str:
|
||||
"""Choose who should start the conversation"""
|
||||
scores = {}
|
||||
|
||||
for participant in participants:
|
||||
if participant in self.characters:
|
||||
character = self.characters[participant]
|
||||
score = 0.5 # Base score
|
||||
|
||||
# Higher score if interested in topic
|
||||
if await character._is_interested_in_topic(topic):
|
||||
score += 0.3
|
||||
|
||||
# Higher score if character is extraverted
|
||||
if 'extraverted' in character.personality.lower() or 'outgoing' in character.personality.lower():
|
||||
score += 0.2
|
||||
|
||||
scores[participant] = score
|
||||
|
||||
# Choose participant with highest score (with some randomness)
|
||||
if scores:
|
||||
weighted_choices = [(name, score) for name, score in scores.items()]
|
||||
return random.choices([name for name, _ in weighted_choices],
|
||||
weights=[score for _, score in weighted_choices])[0]
|
||||
|
||||
return random.choice(participants)
|
||||
|
||||
async def _generate_opening_message(self, speaker: str, topic: str, context: ConversationContext) -> Optional[str]:
|
||||
"""Generate opening message for conversation"""
|
||||
if speaker not in self.characters:
|
||||
return None
|
||||
|
||||
character = self.characters[speaker]
|
||||
|
||||
prompt_context = {
|
||||
'type': 'initiation',
|
||||
'topic': topic,
|
||||
'participants': context.participants,
|
||||
'conversation_type': context.conversation_type
|
||||
}
|
||||
|
||||
return await character.generate_response(prompt_context)
|
||||
|
||||
async def _choose_next_speaker(self, context: ConversationContext) -> Optional[str]:
|
||||
"""Choose next speaker in conversation"""
|
||||
participants = context.participants
|
||||
current_speaker = context.current_speaker
|
||||
|
||||
# Don't let same character speak twice in a row (unless only one participant)
|
||||
if len(participants) > 1:
|
||||
available = [p for p in participants if p != current_speaker]
|
||||
else:
|
||||
available = participants
|
||||
|
||||
if not available:
|
||||
return None
|
||||
|
||||
# Score each potential speaker
|
||||
scores = {}
|
||||
for participant in available:
|
||||
if participant in self.characters:
|
||||
character = self.characters[participant]
|
||||
|
||||
# Base response probability
|
||||
should_respond, _ = await character.should_respond({
|
||||
'type': 'conversation_continue',
|
||||
'topic': context.topic,
|
||||
'participants': context.participants,
|
||||
'message_count': context.message_count
|
||||
})
|
||||
|
||||
scores[participant] = 1.0 if should_respond else 0.3
|
||||
|
||||
if not scores:
|
||||
return random.choice(available)
|
||||
|
||||
# Choose weighted random speaker
|
||||
weighted_choices = [(name, score) for name, score in scores.items()]
|
||||
return random.choices([name for name, _ in weighted_choices],
|
||||
weights=[score for _, score in weighted_choices])[0]
|
||||
|
||||
async def _generate_response(self, speaker: str, context: ConversationContext) -> Optional[str]:
|
||||
"""Generate response for speaker in conversation"""
|
||||
if speaker not in self.characters:
|
||||
return None
|
||||
|
||||
character = self.characters[speaker]
|
||||
|
||||
# Get conversation history
|
||||
conversation_history = await self._get_conversation_history(context.conversation_id, limit=10)
|
||||
|
||||
prompt_context = {
|
||||
'type': 'response',
|
||||
'topic': context.topic,
|
||||
'participants': context.participants,
|
||||
'conversation_history': conversation_history,
|
||||
'conversation_type': context.conversation_type,
|
||||
'message_count': context.message_count
|
||||
}
|
||||
|
||||
return await character.generate_response(prompt_context)
|
||||
|
||||
async def _store_conversation_message(self, conversation_id: int, character_name: str, content: str):
|
||||
"""Store conversation message in database"""
|
||||
try:
|
||||
async with get_db_session() as session:
|
||||
# Get character
|
||||
character_query = select(CharacterModel).where(CharacterModel.name == character_name)
|
||||
character = await session.scalar(character_query)
|
||||
|
||||
if character:
|
||||
message = Message(
|
||||
conversation_id=conversation_id,
|
||||
character_id=character.id,
|
||||
content=content,
|
||||
timestamp=datetime.utcnow()
|
||||
)
|
||||
|
||||
session.add(message)
|
||||
await session.commit()
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"conversation_id": conversation_id, "character_name": character_name})
|
||||
|
||||
async def _get_conversation_history(self, conversation_id: int, limit: int = 10) -> List[Dict[str, Any]]:
|
||||
"""Get recent conversation history"""
|
||||
try:
|
||||
async with get_db_session() as session:
|
||||
query = select(Message, CharacterModel.name).join(
|
||||
CharacterModel, Message.character_id == CharacterModel.id
|
||||
).where(
|
||||
Message.conversation_id == conversation_id
|
||||
).order_by(desc(Message.timestamp)).limit(limit)
|
||||
|
||||
results = await session.execute(query)
|
||||
|
||||
history = []
|
||||
for message, character_name in results:
|
||||
history.append({
|
||||
'character': character_name,
|
||||
'content': message.content,
|
||||
'timestamp': message.timestamp
|
||||
})
|
||||
|
||||
return list(reversed(history)) # Return in chronological order
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"conversation_id": conversation_id})
|
||||
return []
|
||||
|
    async def _update_character_relationships(self, context: ConversationContext, speaker: str, message: str):
        """Update character relationships based on the interaction"""
        try:
            # Guard against unknown speakers before indexing
            if speaker not in self.characters:
                return
            character = self.characters[speaker]

            for participant in context.participants:
                if participant != speaker and participant in self.characters:
                    await character.process_relationship_change(
                        participant, 'conversation', message
                    )

        except Exception as e:
            log_error_with_context(e, {"speaker": speaker, "participants": context.participants})

    async def _store_conversation_memories(self, context: ConversationContext, speaker: str, message: str):
        """Store conversation memories for a character"""
        try:
            if speaker in self.characters:
                character = self.characters[speaker]

                # Store the conversation memory
                await character._store_memory(
                    memory_type="conversation",
                    content=f"In conversation about {context.topic}: {message}",
                    importance=0.6,
                    tags=[context.topic, "conversation"] + context.participants
                )

        except Exception as e:
            log_error_with_context(e, {"speaker": speaker, "topic": context.topic})

    async def _end_conversation(self, conversation_id: int):
        """End a conversation"""
        try:
            if conversation_id in self.active_conversations:
                context = self.active_conversations[conversation_id]

                # Update the conversation in the database
                async with get_db_session() as session:
                    conversation = await session.get(Conversation, conversation_id)
                    if conversation:
                        conversation.is_active = False
                        conversation.last_activity = datetime.utcnow()
                        conversation.message_count = context.message_count
                        await session.commit()

                # Remove from active conversations
                del self.active_conversations[conversation_id]

                log_conversation_event(
                    conversation_id, "conversation_ended",
                    context.participants,
                    {"total_messages": context.message_count, "duration": str(datetime.utcnow() - context.start_time)}
                )

        except Exception as e:
            log_error_with_context(e, {"conversation_id": conversation_id})

    async def _trigger_character_reflection(self):
        """Trigger reflection for a random character"""
        if self.characters:
            character_name = random.choice(list(self.characters.keys()))
            character = self.characters[character_name]

            reflection_result = await character.self_reflect()

            if reflection_result:
                log_character_action(
                    character_name, "completed_reflection",
                    {"reflection_length": len(reflection_result.get('reflection', ''))}
                )

    async def _cleanup_old_conversations(self):
        """Clean up old inactive conversations"""
        try:
            cutoff_time = datetime.utcnow() - timedelta(hours=6)

            # Remove stale conversations from the active list
            to_remove = []
            for conv_id, context in self.active_conversations.items():
                if context.last_activity < cutoff_time:
                    to_remove.append(conv_id)

            for conv_id in to_remove:
                await self._end_conversation(conv_id)

        except Exception as e:
            log_error_with_context(e, {"component": "conversation_cleanup"})
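The engine's `_is_quiet_hours` check above has to handle quiet windows that wrap past midnight (e.g. 23:00 through 06:00). A minimal standalone sketch of that logic, with the window passed in explicitly rather than read from `self.quiet_hours` (the tuple argument form is an assumption for illustration):

```python
def is_quiet_hours(current_hour: int, quiet_hours: tuple) -> bool:
    """Return True if current_hour falls inside the quiet window.

    Handles windows that span midnight: (23, 6) covers 23:00 through 06:59,
    while (9, 17) covers the ordinary same-day range 09:00 through 17:59.
    """
    start_hour, end_hour = quiet_hours
    if start_hour <= end_hour:
        # Same-day window: a simple range check suffices
        return start_hour <= current_hour <= end_hour
    # Midnight-spanning window: inside if after the start OR before the end
    return current_hour >= start_hour or current_hour <= end_hour
```

The same two-branch pattern reappears in the scheduler's activity-pattern lookup, since the `night` pattern also crosses midnight.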
src/conversation/scheduler.py (new file, 442 lines)
@@ -0,0 +1,442 @@
import asyncio
import random
import schedule
from typing import Dict, Any, List, Optional
from datetime import datetime, timedelta
from dataclasses import dataclass
from enum import Enum
import logging

from ..utils.logging import log_autonomous_decision, log_error_with_context, log_system_health
from ..utils.config import get_settings

logger = logging.getLogger(__name__)

class SchedulerState(Enum):
    RUNNING = "running"
    PAUSED = "paused"
    STOPPED = "stopped"

@dataclass
class ScheduledEvent:
    event_type: str
    scheduled_time: datetime
    character_name: Optional[str] = None
    parameters: Dict[str, Any] = None

    def __post_init__(self):
        if self.parameters is None:
            self.parameters = {}

class ConversationScheduler:
    """Advanced scheduler for autonomous conversation events"""

    def __init__(self, conversation_engine):
        self.engine = conversation_engine
        self.settings = get_settings()
        self.state = SchedulerState.STOPPED

        # Scheduling parameters
        self.base_conversation_interval = timedelta(minutes=30)
        self.reflection_interval = timedelta(hours=6)
        self.relationship_update_interval = timedelta(hours=12)

        # Event queue (kept sorted by scheduled time)
        self.scheduled_events: List[ScheduledEvent] = []
        self.scheduler_task = None

        # Activity patterns
        self.activity_patterns = {
            'morning': {'start': 7, 'end': 11, 'activity_multiplier': 1.2},
            'afternoon': {'start': 12, 'end': 17, 'activity_multiplier': 1.0},
            'evening': {'start': 18, 'end': 22, 'activity_multiplier': 1.5},
            'night': {'start': 23, 'end': 6, 'activity_multiplier': 0.3}
        }

        # Dynamic scheduling weights
        self.event_weights = {
            'conversation_start': 1.0,
            'character_reflection': 0.3,
            'relationship_update': 0.2,
            'memory_consolidation': 0.1,
            'personality_evolution': 0.05
        }

    async def start(self):
        """Start the scheduler"""
        try:
            self.state = SchedulerState.RUNNING

            # Schedule initial events
            await self._schedule_initial_events()

            # Start the main scheduler loop
            self.scheduler_task = asyncio.create_task(self._scheduler_loop())

            log_system_health("conversation_scheduler", "started")

        except Exception as e:
            log_error_with_context(e, {"component": "scheduler_start"})
            raise

    async def stop(self):
        """Stop the scheduler"""
        self.state = SchedulerState.STOPPED

        if self.scheduler_task:
            self.scheduler_task.cancel()

        self.scheduled_events.clear()
        log_system_health("conversation_scheduler", "stopped")

    async def pause(self):
        """Pause the scheduler"""
        self.state = SchedulerState.PAUSED
        log_system_health("conversation_scheduler", "paused")

    async def resume(self):
        """Resume the scheduler"""
        self.state = SchedulerState.RUNNING
        log_system_health("conversation_scheduler", "resumed")

    async def schedule_event(self, event_type: str, delay: timedelta,
                             character_name: str = None, **kwargs):
        """Schedule a specific event"""
        scheduled_time = datetime.utcnow() + delay

        event = ScheduledEvent(
            event_type=event_type,
            scheduled_time=scheduled_time,
            character_name=character_name,
            parameters=kwargs
        )

        self.scheduled_events.append(event)
        self.scheduled_events.sort(key=lambda e: e.scheduled_time)

        log_autonomous_decision(
            character_name or "system",
            f"scheduled {event_type}",
            f"in {delay.total_seconds()} seconds",
            kwargs
        )

    async def schedule_conversation(self, topic: str = None,
                                    participants: List[str] = None,
                                    delay: timedelta = None):
        """Schedule a conversation"""
        if delay is None:
            delay = self._calculate_next_conversation_delay()

        await self.schedule_event(
            'conversation_start',
            delay,
            topic=topic,
            participants=participants
        )

    async def schedule_character_reflection(self, character_name: str,
                                            delay: timedelta = None):
        """Schedule character self-reflection"""
        if delay is None:
            delay = timedelta(hours=random.uniform(4, 8))

        await self.schedule_event(
            'character_reflection',
            delay,
            character_name,
            reflection_type='autonomous'
        )

    async def schedule_relationship_update(self, character_name: str,
                                           target_character: str,
                                           delay: timedelta = None):
        """Schedule relationship analysis and update"""
        if delay is None:
            delay = timedelta(hours=random.uniform(8, 16))

        await self.schedule_event(
            'relationship_update',
            delay,
            character_name,
            target_character=target_character
        )

    async def get_upcoming_events(self, limit: int = 10) -> List[Dict[str, Any]]:
        """Get upcoming scheduled events"""
        upcoming = self.scheduled_events[:limit]
        return [
            {
                'event_type': event.event_type,
                'scheduled_time': event.scheduled_time.isoformat(),
                'character_name': event.character_name,
                'time_until': (event.scheduled_time - datetime.utcnow()).total_seconds(),
                'parameters': event.parameters
            }
            for event in upcoming
        ]

    async def _scheduler_loop(self):
        """Main scheduler loop"""
        try:
            while self.state != SchedulerState.STOPPED:
                if self.state == SchedulerState.RUNNING:
                    await self._process_due_events()
                    await self._schedule_dynamic_events()

                # Sleep until the next check
                await asyncio.sleep(30)  # Check every 30 seconds

        except asyncio.CancelledError:
            logger.info("Scheduler loop cancelled")
        except Exception as e:
            log_error_with_context(e, {"component": "scheduler_loop"})

    async def _process_due_events(self):
        """Process events that are due"""
        now = datetime.utcnow()
        due_events = []

        # Find due events (the queue is sorted, so pop from the front)
        while self.scheduled_events and self.scheduled_events[0].scheduled_time <= now:
            due_events.append(self.scheduled_events.pop(0))

        # Process each due event
        for event in due_events:
            try:
                await self._execute_event(event)
            except Exception as e:
                log_error_with_context(e, {
                    "event_type": event.event_type,
                    "character_name": event.character_name
                })

    async def _execute_event(self, event: ScheduledEvent):
        """Execute a scheduled event"""
        event_type = event.event_type

        if event_type == 'conversation_start':
            await self._execute_conversation_start(event)
        elif event_type == 'character_reflection':
            await self._execute_character_reflection(event)
        elif event_type == 'relationship_update':
            await self._execute_relationship_update(event)
        elif event_type == 'memory_consolidation':
            await self._execute_memory_consolidation(event)
        elif event_type == 'personality_evolution':
            await self._execute_personality_evolution(event)
        else:
            logger.warning(f"Unknown event type: {event_type}")

    async def _execute_conversation_start(self, event: ScheduledEvent):
        """Execute a conversation start event"""
        topic = event.parameters.get('topic')
        participants = event.parameters.get('participants')

        conversation_id = await self.engine.start_conversation(topic, participants)

        if conversation_id:
            # Schedule the follow-up conversation
            next_delay = self._calculate_next_conversation_delay()
            await self.schedule_conversation(delay=next_delay)

            log_autonomous_decision(
                "scheduler",
                "started_conversation",
                f"topic: {topic}, participants: {participants}",
                {"conversation_id": conversation_id}
            )

    async def _execute_character_reflection(self, event: ScheduledEvent):
        """Execute a character reflection event"""
        character_name = event.character_name

        if character_name in self.engine.characters:
            character = self.engine.characters[character_name]
            reflection_result = await character.self_reflect()

            # Schedule the next reflection
            await self.schedule_character_reflection(character_name)

            # self_reflect may return None, so guard before reading the result
            if reflection_result:
                log_autonomous_decision(
                    character_name,
                    "completed_reflection",
                    "scheduled autonomous reflection",
                    {"reflection_length": len(reflection_result.get('reflection', ''))}
                )

    async def _execute_relationship_update(self, event: ScheduledEvent):
        """Execute a relationship update event"""
        character_name = event.character_name
        target_character = event.parameters.get('target_character')

        if character_name in self.engine.characters and target_character:
            character = self.engine.characters[character_name]

            # Analyze and update the relationship
            await character.process_relationship_change(
                target_character,
                'scheduled_analysis',
                'Scheduled relationship review'
            )

            log_autonomous_decision(
                character_name,
                "updated_relationship",
                f"with {target_character}",
                {"type": "scheduled_analysis"}
            )

    async def _execute_memory_consolidation(self, event: ScheduledEvent):
        """Execute a memory consolidation event"""
        character_name = event.character_name

        if character_name in self.engine.characters:
            character = self.engine.characters[character_name]

            # Consolidate memories
            if hasattr(character, 'memory_manager'):
                result = await character.memory_manager.consolidate_memories()

                log_autonomous_decision(
                    character_name,
                    "consolidated_memories",
                    "scheduled memory consolidation",
                    {"consolidated_count": result.get('consolidated_count', 0)}
                )

    async def _execute_personality_evolution(self, event: ScheduledEvent):
        """Execute a personality evolution event"""
        character_name = event.character_name

        if character_name in self.engine.characters:
            character = self.engine.characters[character_name]

            # Trigger a personality evolution check
            recent_memories = await character._get_recent_memories(limit=30)

            if hasattr(character, 'personality_manager'):
                changes = await character.personality_manager.analyze_personality_evolution(
                    "Scheduled personality review", recent_memories
                )

                if changes.get('should_evolve'):
                    await character.personality_manager.apply_personality_evolution(changes)

                    log_autonomous_decision(
                        character_name,
                        "evolved_personality",
                        "scheduled personality evolution",
                        {"evolution_score": changes.get('evolution_score', 0)}
                    )

    async def _schedule_initial_events(self):
        """Schedule initial events when starting"""
        # Schedule the initial conversation
        initial_delay = timedelta(minutes=random.uniform(5, 15))
        await self.schedule_conversation(delay=initial_delay)

        # Schedule reflections for all characters
        for character_name in self.engine.characters:
            reflection_delay = timedelta(hours=random.uniform(2, 6))
            await self.schedule_character_reflection(character_name, reflection_delay)

        # Schedule relationship updates for every pair of characters
        character_names = list(self.engine.characters.keys())
        for i, char_a in enumerate(character_names):
            for char_b in character_names[i+1:]:
                update_delay = timedelta(hours=random.uniform(6, 18))
                await self.schedule_relationship_update(char_a, char_b, update_delay)

    async def _schedule_dynamic_events(self):
        """Schedule events dynamically based on the current state"""
        # Check whether we need more conversations
        active_conversations = len(self.engine.active_conversations)

        if active_conversations == 0 and not self._has_conversation_scheduled():
            # No active conversations and none scheduled, so schedule one soon
            delay = timedelta(minutes=random.uniform(10, 30))
            await self.schedule_conversation(delay=delay)

        # Schedule memory consolidation for characters that need it
        for character_name, character in self.engine.characters.items():
            if hasattr(character, 'memory_manager'):
                # Check whether the character needs memory consolidation
                memory_stats = await character.memory_manager.get_memory_statistics()

                if memory_stats.get('memory_health') == 'near_capacity':
                    delay = timedelta(minutes=random.uniform(30, 120))
                    await self.schedule_event(
                        'memory_consolidation',
                        delay,
                        character_name
                    )

    def _calculate_next_conversation_delay(self) -> timedelta:
        """Calculate the delay until the next conversation"""
        # Base delay
        base_minutes = random.uniform(20, 60)

        # Adjust based on the time of day
        current_hour = datetime.now().hour
        activity_multiplier = self._get_activity_multiplier(current_hour)

        # Adjust based on current activity
        active_conversations = len(self.engine.active_conversations)
        if active_conversations > 0:
            base_minutes *= 1.5  # Slow down if conversations are active

        # Apply the activity multiplier
        adjusted_minutes = base_minutes / activity_multiplier

        return timedelta(minutes=adjusted_minutes)

    def _get_activity_multiplier(self, hour: int) -> float:
        """Get the activity multiplier for a given hour"""
        for period, config in self.activity_patterns.items():
            start, end = config['start'], config['end']

            if start <= end:
                if start <= hour <= end:
                    return config['activity_multiplier']
            else:  # Spans midnight
                if hour >= start or hour <= end:
                    return config['activity_multiplier']

        return 1.0  # Default

    def _has_conversation_scheduled(self) -> bool:
        """Check whether a conversation is already scheduled"""
        return any(
            event.event_type == 'conversation_start'
            for event in self.scheduled_events
        )

    def get_scheduler_status(self) -> Dict[str, Any]:
        """Get scheduler status information"""
        return {
            'state': self.state.value,
            'scheduled_events_count': len(self.scheduled_events),
            'next_event_time': (
                self.scheduled_events[0].scheduled_time.isoformat()
                if self.scheduled_events else None
            ),
            'active_conversations': len(self.engine.active_conversations),
            'activity_pattern': self._get_current_activity_pattern()
        }

    def _get_current_activity_pattern(self) -> str:
        """Get the current activity pattern"""
        current_hour = datetime.now().hour

        for period, config in self.activity_patterns.items():
            start, end = config['start'], config['end']

            if start <= end:
                if start <= current_hour <= end:
                    return period
            else:  # Spans midnight
                if current_hour >= start or current_hour <= end:
                    return period

        return 'unknown'
0 src/database/__init__.py Normal file
131 src/database/connection.py Normal file
@@ -0,0 +1,131 @@
import asyncpg
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker
from contextlib import asynccontextmanager
from typing import AsyncGenerator
import logging

from ..utils.config import get_settings

logger = logging.getLogger(__name__)


class DatabaseManager:
    def __init__(self):
        self.settings = get_settings()
        self.engine = None
        self.session_factory = None
        self._pool = None
    async def initialize(self):
        database_url = (
            f"postgresql+asyncpg://{self.settings.database.user}:"
            f"{self.settings.database.password}@{self.settings.database.host}:"
            f"{self.settings.database.port}/{self.settings.database.name}"
        )

        self.engine = create_async_engine(
            database_url,
            echo=False,
            pool_size=20,
            max_overflow=30,
            pool_pre_ping=True,
            pool_recycle=3600
        )

        self.session_factory = async_sessionmaker(
            bind=self.engine,
            class_=AsyncSession,
            expire_on_commit=False
        )

        # Create a connection pool for raw queries
        self._pool = await asyncpg.create_pool(
            host=self.settings.database.host,
            port=self.settings.database.port,
            database=self.settings.database.name,
            user=self.settings.database.user,
            password=self.settings.database.password,
            min_size=5,
            max_size=20,
            command_timeout=30
        )

        logger.info("Database connection initialized")
    async def close(self):
        if self._pool:
            await self._pool.close()
        if self.engine:
            await self.engine.dispose()
        logger.info("Database connection closed")
    @asynccontextmanager
    async def get_session(self) -> AsyncGenerator[AsyncSession, None]:
        if not self.session_factory:
            await self.initialize()

        async with self.session_factory() as session:
            try:
                yield session
                await session.commit()
            except Exception as e:
                await session.rollback()
                logger.error(f"Database session error: {e}")
                raise
            finally:
                await session.close()
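`get_session` follows the common commit-on-success, rollback-on-error session pattern. A self-contained sketch of that pattern using a dummy session object (the `DummySession` and `scoped_session` names are hypothetical; no SQLAlchemy required):

```python
import asyncio
from contextlib import asynccontextmanager


class DummySession:
    """Hypothetical stand-in that records commit/rollback calls."""
    def __init__(self):
        self.committed = False
        self.rolled_back = False

    async def commit(self):
        self.committed = True

    async def rollback(self):
        self.rolled_back = True

    async def close(self):
        pass


@asynccontextmanager
async def scoped_session(session):
    # Commit on success, roll back on error, always close
    try:
        yield session
        await session.commit()
    except Exception:
        await session.rollback()
        raise
    finally:
        await session.close()


async def demo():
    ok = DummySession()
    async with scoped_session(ok):
        pass  # no error: commit path

    bad = DummySession()
    try:
        async with scoped_session(bad):
            raise RuntimeError("boom")  # error: rollback path
    except RuntimeError:
        pass
    return ok.committed, bad.rolled_back
```

Running `asyncio.run(demo())` shows the success path commits and the failure path rolls back.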
    async def execute_raw_query(self, query: str, *args):
        if not self._pool:
            await self.initialize()

        async with self._pool.acquire() as connection:
            return await connection.fetch(query, *args)
    async def health_check(self) -> bool:
        try:
            from sqlalchemy import text  # SQLAlchemy 2.x requires textual SQL to be wrapped
            async with self.get_session() as session:
                result = await session.execute(text("SELECT 1"))
                return result.scalar() == 1
        except Exception as e:
            logger.error(f"Database health check failed: {e}")
            return False
# Global database manager instance
db_manager = DatabaseManager()


# Convenience functions
def get_db_session():
    # Returns an async context manager; use as: async with get_db_session() as session
    return db_manager.get_session()


async def init_database():
    await db_manager.initialize()


async def close_database():
    await db_manager.close()


# Create tables
async def create_tables():
    from .models import Base

    if not db_manager.engine:
        await db_manager.initialize()

    async with db_manager.engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

    logger.info("Database tables created")


# Database migration utilities
async def run_migrations():
    """Run database migrations using Alembic."""
    try:
        from alembic.config import Config
        from alembic import command

        alembic_cfg = Config("alembic.ini")
        command.upgrade(alembic_cfg, "head")
        logger.info("Database migrations completed")
    except Exception as e:
        logger.error(f"Migration failed: {e}")
        raise
0 src/database/migrations/__init__.py Normal file
172 src/database/models.py Normal file
@@ -0,0 +1,172 @@
from sqlalchemy import Column, Integer, String, Text, DateTime, Float, Boolean, ForeignKey, JSON, Index
from sqlalchemy.orm import declarative_base, relationship
from sqlalchemy.sql import func
from typing import Dict, Any

Base = declarative_base()
class Character(Base):
    __tablename__ = "characters"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String(100), unique=True, nullable=False, index=True)
    personality = Column(Text, nullable=False)
    system_prompt = Column(Text, nullable=False)
    interests = Column(JSON, nullable=False, default=list)
    speaking_style = Column(Text, nullable=False)
    background = Column(Text, nullable=False)
    avatar_url = Column(String(500))
    is_active = Column(Boolean, default=True)
    creation_date = Column(DateTime, default=func.now())
    last_active = Column(DateTime, default=func.now())
    # use_alter breaks the circular dependency between characters and messages
    # so create_all() can sort the tables
    last_message_id = Column(Integer, ForeignKey("messages.id", use_alter=True), nullable=True)

    # Relationships
    messages = relationship("Message", back_populates="character", foreign_keys="Message.character_id")
    # Memory has two FKs to characters, so foreign_keys must be disambiguated
    memories = relationship("Memory", back_populates="character", foreign_keys="Memory.character_id", cascade="all, delete-orphan")
    relationships_as_a = relationship("CharacterRelationship", back_populates="character_a", foreign_keys="CharacterRelationship.character_a_id")
    relationships_as_b = relationship("CharacterRelationship", back_populates="character_b", foreign_keys="CharacterRelationship.character_b_id")
    evolution_history = relationship("CharacterEvolution", back_populates="character", cascade="all, delete-orphan")

    def to_dict(self) -> Dict[str, Any]:
        return {
            "id": self.id,
            "name": self.name,
            "personality": self.personality,
            "system_prompt": self.system_prompt,
            "interests": self.interests,
            "speaking_style": self.speaking_style,
            "background": self.background,
            "avatar_url": self.avatar_url,
            "is_active": self.is_active,
            "creation_date": self.creation_date.isoformat() if self.creation_date else None,
            "last_active": self.last_active.isoformat() if self.last_active else None
        }
class Conversation(Base):
    __tablename__ = "conversations"

    id = Column(Integer, primary_key=True, index=True)
    channel_id = Column(String(50), nullable=False, index=True)
    topic = Column(String(200))
    participants = Column(JSON, nullable=False, default=list)
    start_time = Column(DateTime, default=func.now())
    last_activity = Column(DateTime, default=func.now())
    is_active = Column(Boolean, default=True)
    message_count = Column(Integer, default=0)

    # Relationships
    messages = relationship("Message", back_populates="conversation", cascade="all, delete-orphan")

    __table_args__ = (
        Index('ix_conversations_channel_active', 'channel_id', 'is_active'),
    )
class Message(Base):
    __tablename__ = "messages"

    id = Column(Integer, primary_key=True, index=True)
    conversation_id = Column(Integer, ForeignKey("conversations.id"), nullable=False)
    character_id = Column(Integer, ForeignKey("characters.id"), nullable=False)
    content = Column(Text, nullable=False)
    timestamp = Column(DateTime, default=func.now())
    # "metadata" is a reserved attribute on declarative classes; map the
    # attribute to the original column name instead
    message_metadata = Column("metadata", JSON, nullable=True)
    discord_message_id = Column(String(50), unique=True, nullable=True)
    response_to_message_id = Column(Integer, ForeignKey("messages.id"), nullable=True)
    emotion = Column(String(50))

    # Relationships
    conversation = relationship("Conversation", back_populates="messages")
    character = relationship("Character", back_populates="messages", foreign_keys=[character_id])
    response_to = relationship("Message", remote_side=[id])

    __table_args__ = (
        Index('ix_messages_character_timestamp', 'character_id', 'timestamp'),
        Index('ix_messages_conversation_timestamp', 'conversation_id', 'timestamp'),
    )
class Memory(Base):
    __tablename__ = "memories"

    id = Column(Integer, primary_key=True, index=True)
    character_id = Column(Integer, ForeignKey("characters.id"), nullable=False)
    memory_type = Column(String(50), nullable=False)  # 'conversation', 'relationship', 'experience', 'fact'
    content = Column(Text, nullable=False)
    importance_score = Column(Float, default=0.5)
    timestamp = Column(DateTime, default=func.now())
    last_accessed = Column(DateTime, default=func.now())
    access_count = Column(Integer, default=0)
    related_message_id = Column(Integer, ForeignKey("messages.id"), nullable=True)
    related_character_id = Column(Integer, ForeignKey("characters.id"), nullable=True)
    tags = Column(JSON, nullable=False, default=list)

    # Relationships
    character = relationship("Character", back_populates="memories", foreign_keys=[character_id])
    related_message = relationship("Message", foreign_keys=[related_message_id])
    related_character = relationship("Character", foreign_keys=[related_character_id])

    __table_args__ = (
        Index('ix_memories_character_type', 'character_id', 'memory_type'),
        Index('ix_memories_importance', 'importance_score'),
    )
class CharacterRelationship(Base):
    __tablename__ = "character_relationships"

    id = Column(Integer, primary_key=True, index=True)
    character_a_id = Column(Integer, ForeignKey("characters.id"), nullable=False)
    character_b_id = Column(Integer, ForeignKey("characters.id"), nullable=False)
    relationship_type = Column(String(50), nullable=False)  # 'friend', 'rival', 'neutral', 'mentor', 'student'
    strength = Column(Float, default=0.5)  # 0.0 to 1.0
    last_interaction = Column(DateTime, default=func.now())
    interaction_count = Column(Integer, default=0)
    notes = Column(Text)

    # Relationships
    character_a = relationship("Character", back_populates="relationships_as_a", foreign_keys=[character_a_id])
    character_b = relationship("Character", back_populates="relationships_as_b", foreign_keys=[character_b_id])

    __table_args__ = (
        Index('ix_relationships_characters', 'character_a_id', 'character_b_id'),
    )
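`CharacterRelationship` stores one row per character pair, but (A, B) and (B, A) describe the same relationship. One common way to avoid duplicate rows is to canonicalize the pair before insert or lookup; a hypothetical helper, not present in the models above:

```python
def relationship_key(character_a_id: int, character_b_id: int) -> tuple:
    """Canonical pair ordering so (A, B) and (B, A) map to one row."""
    return (min(character_a_id, character_b_id),
            max(character_a_id, character_b_id))
```

Queries and inserts then always use the canonical key, so a single index on `(character_a_id, character_b_id)` suffices.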
class CharacterEvolution(Base):
    __tablename__ = "character_evolution"

    id = Column(Integer, primary_key=True, index=True)
    character_id = Column(Integer, ForeignKey("characters.id"), nullable=False)
    change_type = Column(String(50), nullable=False)  # 'personality', 'interests', 'speaking_style', 'system_prompt'
    old_value = Column(Text)
    new_value = Column(Text)
    reason = Column(Text)
    timestamp = Column(DateTime, default=func.now())
    triggered_by_message_id = Column(Integer, ForeignKey("messages.id"), nullable=True)

    # Relationships
    character = relationship("Character", back_populates="evolution_history")
    triggered_by_message = relationship("Message", foreign_keys=[triggered_by_message_id])

    __table_args__ = (
        Index('ix_evolution_character_timestamp', 'character_id', 'timestamp'),
    )
class ConversationSummary(Base):
    __tablename__ = "conversation_summaries"

    id = Column(Integer, primary_key=True, index=True)
    conversation_id = Column(Integer, ForeignKey("conversations.id"), nullable=False)
    summary = Column(Text, nullable=False)
    key_points = Column(JSON, nullable=False, default=list)
    participants = Column(JSON, nullable=False, default=list)
    created_at = Column(DateTime, default=func.now())
    message_range_start = Column(Integer, nullable=False)
    message_range_end = Column(Integer, nullable=False)

    # Relationships
    conversation = relationship("Conversation", foreign_keys=[conversation_id])

    __table_args__ = (
        Index('ix_summaries_conversation', 'conversation_id', 'created_at'),
    )
0 src/llm/__init__.py Normal file
394 src/llm/client.py Normal file
@@ -0,0 +1,394 @@
import asyncio
import httpx
import json
import time
from typing import Dict, Any, Optional, List
from datetime import datetime
from ..utils.config import get_settings
from ..utils.logging import log_llm_interaction, log_error_with_context, log_system_health
import logging

logger = logging.getLogger(__name__)


class LLMClient:
    """Async LLM client for interacting with local LLM APIs (Ollama, etc.)"""

    def __init__(self):
        self.settings = get_settings()
        self.base_url = self.settings.llm.base_url
        self.model = self.settings.llm.model
        self.timeout = self.settings.llm.timeout
        self.max_tokens = self.settings.llm.max_tokens
        self.temperature = self.settings.llm.temperature

        # Rate limiting
        self.request_times = []
        self.max_requests_per_minute = 30

        # Response caching
        self.cache = {}
        self.cache_ttl = 300  # 5 minutes

        # Health monitoring
        self.health_stats = {
            'total_requests': 0,
            'successful_requests': 0,
            'failed_requests': 0,
            'average_response_time': 0,
            'last_health_check': datetime.utcnow()
        }
    async def generate_response(self, prompt: str, character_name: str = None,
                                max_tokens: int = None, temperature: float = None) -> Optional[str]:
        """Generate a response using the LLM."""
        # Defined up front so the except blocks below can always reference it
        start_time = time.time()
        try:
            # Rate limiting check
            if not await self._check_rate_limit():
                logger.warning(f"Rate limit exceeded for {character_name}")
                return None

            # Check the cache first
            cache_key = self._generate_cache_key(prompt, character_name, max_tokens, temperature)
            cached_response = self._get_cached_response(cache_key)
            if cached_response:
                return cached_response

            # Prepare the request
            request_data = {
                "model": self.model,
                "prompt": prompt,
                "options": {
                    "temperature": temperature or self.temperature,
                    "num_predict": max_tokens or self.max_tokens,
                    "top_p": 0.9,
                    "top_k": 40,
                    "repeat_penalty": 1.1
                },
                "stream": False
            }

            # Make the API call
            async with httpx.AsyncClient(timeout=self.timeout) as client:
                response = await client.post(
                    f"{self.base_url}/api/generate",
                    json=request_data,
                    headers={"Content-Type": "application/json"}
                )

                response.raise_for_status()
                result = response.json()

                if result.get('response'):
                    generated_text = result['response'].strip()

                    # Cache the response
                    self._cache_response(cache_key, generated_text)

                    # Update stats
                    duration = time.time() - start_time
                    self._update_stats(True, duration)

                    # Log the interaction
                    log_llm_interaction(
                        character_name or "unknown",
                        len(prompt),
                        len(generated_text),
                        self.model,
                        duration
                    )

                    return generated_text
                else:
                    logger.error(f"No response from LLM: {result}")
                    self._update_stats(False, time.time() - start_time)
                    return None

        except httpx.TimeoutException:
            logger.error(f"LLM request timeout for {character_name}")
            self._update_stats(False, self.timeout)
            return None
        except httpx.HTTPError as e:
            logger.error(f"LLM HTTP error for {character_name}: {e}")
            self._update_stats(False, time.time() - start_time)
            return None
        except Exception as e:
            log_error_with_context(e, {
                "character_name": character_name,
                "prompt_length": len(prompt),
                "model": self.model
            })
            self._update_stats(False, time.time() - start_time)
            return None
    async def generate_batch_responses(self, prompts: List[Dict[str, Any]]) -> List[Optional[str]]:
        """Generate multiple responses in batch."""
        tasks = []
        for prompt_data in prompts:
            task = self.generate_response(
                prompt=prompt_data['prompt'],
                character_name=prompt_data.get('character_name'),
                max_tokens=prompt_data.get('max_tokens'),
                temperature=prompt_data.get('temperature')
            )
            tasks.append(task)

        results = await asyncio.gather(*tasks, return_exceptions=True)

        # Convert exceptions to None
        return [result if not isinstance(result, Exception) else None for result in results]
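`generate_batch_responses` relies on `asyncio.gather(..., return_exceptions=True)` so one failed prompt cannot abort the whole batch; failures surface as exception objects that are then mapped to `None`. A self-contained sketch of that pattern (the `maybe_fail`/`run_batch` names are illustrative):

```python
import asyncio


async def maybe_fail(x: int) -> int:
    """Toy coroutine that fails for negative inputs."""
    if x < 0:
        raise ValueError("negative input")
    return x * 2


async def run_batch(values):
    # return_exceptions=True keeps the batch alive; exceptions come back
    # as result objects instead of propagating
    results = await asyncio.gather(
        *(maybe_fail(v) for v in values),
        return_exceptions=True,
    )
    return [r if not isinstance(r, Exception) else None for r in results]
```

`asyncio.run(run_batch([1, -1, 3]))` yields `[2, None, 6]`: the failing item becomes `None` while the others complete.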
    async def check_model_availability(self) -> bool:
        """Check whether the LLM model is available."""
        try:
            async with httpx.AsyncClient(timeout=10) as client:
                response = await client.get(f"{self.base_url}/api/tags")
                response.raise_for_status()

                models = response.json()
                available_models = [model.get('name', '') for model in models.get('models', [])]

                is_available = any(self.model in model_name for model_name in available_models)

                log_system_health(
                    "llm_client",
                    "available" if is_available else "model_not_found",
                    {"model": self.model, "available_models": available_models}
                )

                return is_available

        except Exception as e:
            log_error_with_context(e, {"model": self.model})
            log_system_health("llm_client", "unavailable", {"error": str(e)})
            return False
    async def get_model_info(self) -> Dict[str, Any]:
        """Get information about the current model."""
        try:
            async with httpx.AsyncClient(timeout=10) as client:
                response = await client.post(
                    f"{self.base_url}/api/show",
                    json={"name": self.model}
                )
                response.raise_for_status()

                return response.json()

        except Exception as e:
            log_error_with_context(e, {"model": self.model})
            return {}
    async def health_check(self) -> Dict[str, Any]:
        """Perform a health check on the LLM service."""
        try:
            start_time = time.time()

            # Test with a simple prompt
            test_prompt = "Respond with 'OK' if you can understand this message."
            response = await self.generate_response(test_prompt, "health_check")

            duration = time.time() - start_time

            health_status = {
                'status': 'healthy' if response else 'unhealthy',
                'response_time': duration,
                'model': self.model,
                'base_url': self.base_url,
                'timestamp': datetime.utcnow().isoformat()
            }

            # Update the health check time
            self.health_stats['last_health_check'] = datetime.utcnow()

            return health_status

        except Exception as e:
            log_error_with_context(e, {"component": "llm_health_check"})
            return {
                'status': 'error',
                'error': str(e),
                'model': self.model,
                'base_url': self.base_url,
                'timestamp': datetime.utcnow().isoformat()
            }
    def get_statistics(self) -> Dict[str, Any]:
        """Get client statistics."""
        return {
            'total_requests': self.health_stats['total_requests'],
            'successful_requests': self.health_stats['successful_requests'],
            'failed_requests': self.health_stats['failed_requests'],
            'success_rate': (
                self.health_stats['successful_requests'] / self.health_stats['total_requests']
                if self.health_stats['total_requests'] > 0 else 0
            ),
            'average_response_time': self.health_stats['average_response_time'],
            'cache_size': len(self.cache),
            'last_health_check': self.health_stats['last_health_check'].isoformat()
        }
    async def _check_rate_limit(self) -> bool:
        """Check whether we're within rate limits."""
        now = time.time()

        # Remove requests older than one minute
        self.request_times = [t for t in self.request_times if now - t < 60]

        # Check whether we can make another request
        if len(self.request_times) >= self.max_requests_per_minute:
            return False

        # Record the current request time
        self.request_times.append(now)
        return True
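`_check_rate_limit` implements a sliding-window limiter over raw request timestamps. The same logic with an injected clock, which makes it deterministic to test (the `SlidingWindowLimiter` class is a hypothetical standalone version, not part of the client):

```python
class SlidingWindowLimiter:
    """Allow at most `max_per_window` events per rolling window."""

    def __init__(self, max_per_window: int, window_seconds: float = 60.0):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self.request_times = []

    def allow(self, now: float) -> bool:
        # Drop timestamps that fell out of the window, then check capacity
        self.request_times = [
            t for t in self.request_times if now - t < self.window_seconds
        ]
        if len(self.request_times) >= self.max_per_window:
            return False
        self.request_times.append(now)
        return True
```

Passing the clock value in (rather than calling `time.time()` internally) is what lets the window behavior be verified without real waiting.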
    def _generate_cache_key(self, prompt: str, character_name: str = None,
                            max_tokens: int = None, temperature: float = None) -> str:
        """Generate a cache key for a response."""
        import hashlib

        cache_data = {
            'prompt': prompt,
            'character_name': character_name,
            'max_tokens': max_tokens or self.max_tokens,
            'temperature': temperature or self.temperature,
            'model': self.model
        }

        cache_string = json.dumps(cache_data, sort_keys=True)
        return hashlib.md5(cache_string.encode()).hexdigest()
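`_generate_cache_key` hashes a JSON dump produced with `sort_keys=True`, so identical parameters always serialize, and therefore hash, identically regardless of dict insertion order. A standalone sketch of that scheme (the `cache_key` function is illustrative):

```python
import hashlib
import json


def cache_key(prompt: str, model: str, temperature: float) -> str:
    """Stable hash: identical parameters always yield the same key."""
    payload = json.dumps(
        {"prompt": prompt, "model": model, "temperature": temperature},
        sort_keys=True,  # canonical ordering makes the dump deterministic
    )
    return hashlib.md5(payload.encode()).hexdigest()
```

MD5 is fine here because the key is only a cache lookup handle, not a security boundary.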
    def _get_cached_response(self, cache_key: str) -> Optional[str]:
        """Get a cached response if available and not expired."""
        if cache_key in self.cache:
            cached_data = self.cache[cache_key]
            if time.time() - cached_data['timestamp'] < self.cache_ttl:
                return cached_data['response']
            else:
                # Remove the expired cache entry
                del self.cache[cache_key]

        return None
    def _cache_response(self, cache_key: str, response: str):
        """Cache a response."""
        self.cache[cache_key] = {
            'response': response,
            'timestamp': time.time()
        }

        # Evict the oldest entries if the cache grows too large
        if len(self.cache) > 100:
            oldest_keys = sorted(
                self.cache.keys(),
                key=lambda k: self.cache[k]['timestamp']
            )[:20]

            for key in oldest_keys:
                del self.cache[key]
    def _update_stats(self, success: bool, duration: float):
        """Update health statistics."""
        self.health_stats['total_requests'] += 1

        if success:
            self.health_stats['successful_requests'] += 1
        else:
            self.health_stats['failed_requests'] += 1

        # Update the rolling average response time
        total_requests = self.health_stats['total_requests']
        current_avg = self.health_stats['average_response_time']

        self.health_stats['average_response_time'] = (
            (current_avg * (total_requests - 1) + duration) / total_requests
        )
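`_update_stats` maintains a running mean without storing every sample: the previous average is scaled back up by the old count, the new sample is added, and the sum is divided by the new count. As a standalone function (illustrative name):

```python
def update_average(current_avg: float, count: int, new_value: float) -> float:
    """Incremental mean; `count` is the total including the new sample."""
    return (current_avg * (count - 1) + new_value) / count
```

Feeding 1, 2, 3 in sequence recovers their mean of 2.0 while only ever holding one float and one counter.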
class PromptManager:
    """Manages prompt templates and optimization"""

    def __init__(self):
        self.templates = {
            'character_response': """You are {character_name}, responding in a Discord chat.

{personality_context}

{conversation_context}

{memory_context}

{relationship_context}

Respond naturally as {character_name}. Keep it conversational and authentic to your personality.""",

            'conversation_starter': """You are {character_name} in a Discord chat.

{personality_context}

Start a conversation about: {topic}

Be natural and engaging. Your response should invite others to participate.""",

            'self_reflection': """You are {character_name}. Reflect on your recent experiences:

{personality_context}

{recent_experiences}

Consider:
- How these experiences have affected you
- Any changes in your perspective
- Your relationships with others
- Your personal growth

Share your thoughtful reflection."""
        }

    def build_prompt(self, template_name: str, **kwargs) -> str:
        """Build a prompt from a template."""
        template = self.templates.get(template_name)
        if not template:
            raise ValueError(f"Template '{template_name}' not found")

        try:
            return template.format(**kwargs)
        except KeyError as e:
            raise ValueError(f"Missing required parameter for template '{template_name}': {e}")

    def optimize_prompt(self, prompt: str, max_length: int = 2000) -> str:
        """Optimize a prompt for better performance."""
        # Truncate if too long
        if len(prompt) > max_length:
            # Try to cut at paragraph boundaries
            paragraphs = prompt.split('\n\n')
            optimized = ""

            for paragraph in paragraphs:
                if len(optimized + paragraph) <= max_length:
                    optimized += paragraph + '\n\n'
                else:
                    break

            if optimized:
                return optimized.strip()
            else:
                # Fall back to simple truncation
                return prompt[:max_length] + "..."

        return prompt

    def add_template(self, name: str, template: str):
        """Add a custom prompt template."""
        self.templates[name] = template

    def get_template_names(self) -> List[str]:
        """Get the list of available template names."""
        return list(self.templates.keys())


# Global instances
llm_client = LLMClient()
prompt_manager = PromptManager()
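`optimize_prompt` truncates long prompts at paragraph (blank-line) boundaries before falling back to a hard cut. Its core as a standalone function (illustrative name):

```python
def truncate_at_paragraphs(text: str, max_length: int) -> str:
    """Prefer cutting at blank-line boundaries; fall back to a hard cut."""
    if len(text) <= max_length:
        return text

    kept = ""
    for paragraph in text.split("\n\n"):
        # Keep whole paragraphs while they still fit
        if len(kept + paragraph) <= max_length:
            kept += paragraph + "\n\n"
        else:
            break

    # If not even the first paragraph fit, cut mid-text with an ellipsis
    return kept.strip() if kept else text[:max_length] + "..."
```

Cutting at paragraph boundaries keeps each retained context section intact, which tends to degrade LLM output less than slicing a sentence in half.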
465 src/llm/prompt_manager.py Normal file
@@ -0,0 +1,465 @@
import json
import re
from typing import Dict, Any, List, Optional, Tuple
from datetime import datetime
from ..utils.logging import log_error_with_context
import logging

logger = logging.getLogger(__name__)


class AdvancedPromptManager:
    """Advanced prompt management with dynamic optimization"""

    def __init__(self):
        self.base_templates = {
            'character_response': {
                'template': """You are {character_name}, a unique character in a Discord chat.

PERSONALITY: {personality}

CURRENT SITUATION:
{situation_context}

CONVERSATION CONTEXT:
{conversation_history}

MEMORY CONTEXT:
{relevant_memories}

RELATIONSHIP CONTEXT:
{relationship_info}

CURRENT MOOD: {mood}
ENERGY LEVEL: {energy_level}

Respond as {character_name} in a natural, conversational way. Stay true to your personality, speaking style, and current emotional state. Keep your response concise but engaging.""",
                'required_fields': ['character_name', 'personality'],
                'optional_fields': ['situation_context', 'conversation_history', 'relevant_memories', 'relationship_info', 'mood', 'energy_level'],
                'max_length': 2000
            },
            'conversation_initiation': {
                'template': """You are {character_name}, ready to start a conversation in a Discord chat.

PERSONALITY: {personality}

SPEAKING STYLE: {speaking_style}

INTERESTS: {interests}

CURRENT TOPIC: {topic}

CONTEXT: {context}

Start an engaging conversation about the topic. Be natural and inviting. Your opening should encourage others to participate and reflect your personality.""",
                'required_fields': ['character_name', 'personality', 'topic'],
                'optional_fields': ['speaking_style', 'interests', 'context'],
                'max_length': 1500
            },
            'self_reflection': {
                'template': """You are {character_name}. Take a moment to reflect on your recent experiences and interactions.

CURRENT PERSONALITY: {personality}

RECENT EXPERIENCES:
{experiences}

RECENT INTERACTIONS:
{interactions}

CURRENT RELATIONSHIPS:
{relationships}

Reflect deeply on:
1. How your recent experiences have shaped you
2. Changes in your thoughts or feelings
3. Your relationships with others
4. Any personal growth or insights
5. What you've learned about yourself

Share your honest, thoughtful reflection.""",
                'required_fields': ['character_name', 'personality'],
                'optional_fields': ['experiences', 'interactions', 'relationships'],
                'max_length': 2500
            },
            'relationship_analysis': {
                'template': """You are {character_name}. Analyze your relationship with {other_character}.

YOUR PERSONALITY: {personality}

THEIR PERSONALITY: {other_personality}

INTERACTION HISTORY:
{interaction_history}

CURRENT RELATIONSHIP STATUS: {current_relationship}

Consider:
- How do you feel about {other_character}?
- How has your relationship evolved?
- What do you appreciate about them?
- Any concerns or conflicts?
- How would you describe your current dynamic?

Provide an honest assessment of your relationship.""",
                'required_fields': ['character_name', 'other_character', 'personality'],
                'optional_fields': ['other_personality', 'interaction_history', 'current_relationship'],
                'max_length': 2000
            },
            'decision_making': {
                'template': """You are {character_name} facing a decision.

PERSONALITY: {personality}

SITUATION: {situation}

OPTIONS:
{options}

CONSIDERATIONS:
{considerations}

RELEVANT EXPERIENCES:
{relevant_experiences}

Think through this decision considering your personality, values, and past experiences. What would you choose and why?""",
                'required_fields': ['character_name', 'personality', 'situation'],
                'optional_fields': ['options', 'considerations', 'relevant_experiences'],
                'max_length': 2000
            },
            'emotional_response': {
                'template': """You are {character_name} experiencing strong emotions.

PERSONALITY: {personality}

EMOTIONAL TRIGGER: {trigger}

CURRENT EMOTION: {emotion}

EMOTIONAL INTENSITY: {intensity}

CONTEXT: {context}

Express your emotional state authentically. How does this emotion affect you? What thoughts and feelings arise? Stay true to your personality while being genuine about your emotional experience.""",
                'required_fields': ['character_name', 'personality', 'emotion'],
                'optional_fields': ['trigger', 'intensity', 'context'],
                'max_length': 1800
            }
        }
        # Dynamic prompt components
        self.components = {
            'personality_enhancers': {
                'creative': "expressing creativity and imagination",
                'analytical': "thinking logically and systematically",
                'empathetic': "understanding and caring for others",
                'confident': "showing self-assurance and leadership",
                'curious': "asking questions and seeking knowledge",
                'humorous': "finding humor and bringing lightness",
                'serious': "being thoughtful and focused",
                'spontaneous': "being flexible and adaptive"
            },

            'mood_modifiers': {
                'excited': "feeling energetic and enthusiastic",
                'contemplative': "in a thoughtful, reflective state",
                'playful': "feeling light-hearted and fun",
                'focused': "concentrated and determined",
                'melancholic': "feeling somewhat sad or wistful",
                'optimistic': "feeling positive and hopeful",
                'cautious': "being careful and measured",
                'confident': "feeling self-assured and bold"
            },

            'context_frames': {
                'casual': "in a relaxed, informal conversation",
                'serious': "in a meaningful, important discussion",
                'creative': "in an artistic or imaginative context",
                'problem_solving': "working through a challenge or issue",
                'social': "in a friendly, social interaction",
                'learning': "in an educational or discovery context",
                'supportive': "providing help or encouragement",
                'debate': "in a discussion of different viewpoints"
            }
        }
        # Response optimization rules
        self.optimization_rules = {
            'length_targets': {
                'short': (50, 150),
                'medium': (150, 300),
                'long': (300, 500)
            },
            'style_adjustments': {
                'concise': "Be concise and to the point.",
                'detailed': "Provide detailed thoughts and explanations.",
                'conversational': "Keep it natural and conversational.",
                'formal': "Use a more formal tone.",
                'casual': "Keep it casual and relaxed."
            }
        }
    def build_dynamic_prompt(self, template_name: str, context: Dict[str, Any],
                             optimization_hints: Dict[str, Any] = None) -> str:
        """Build a dynamic prompt with context-aware optimization"""
        try:
            template_info = self.base_templates.get(template_name)
            if not template_info:
                raise ValueError(f"Template '{template_name}' not found")

            # Start with base template
            prompt = template_info['template']

            # Check required fields
            for field in template_info['required_fields']:
                if field not in context:
                    raise ValueError(f"Required field '{field}' missing from context")

            # Enhance context with dynamic components
            enhanced_context = self._enhance_context(context, optimization_hints or {})

            # Format prompt
            formatted_prompt = prompt.format(**enhanced_context)

            # Apply optimizations
            optimized_prompt = self._optimize_prompt(
                formatted_prompt,
                template_info['max_length'],
                optimization_hints or {}
            )

            return optimized_prompt

        except Exception as e:
            log_error_with_context(e, {
                "template_name": template_name,
                "context_keys": list(context.keys())
            })
            # Fall back to a simple template
            return self._build_fallback_prompt(template_name, context)
    def build_contextual_prompt(self, character_data: Dict[str, Any],
                                scenario: Dict[str, Any]) -> str:
        """Build a contextual prompt based on character and scenario"""
        try:
            scenario_type = scenario.get('type', 'general')

            # Select appropriate template
            template_name = self._select_template_for_scenario(scenario_type)

            # Build context
            context = self._build_context_from_data(character_data, scenario)

            # Add optimization hints
            optimization_hints = self._generate_optimization_hints(character_data, scenario)

            return self.build_dynamic_prompt(template_name, context, optimization_hints)

        except Exception as e:
            log_error_with_context(e, {
                "scenario_type": scenario.get('type'),
                "character_name": character_data.get('name')
            })
            return self._build_emergency_prompt(character_data, scenario)
    def optimize_for_character(self, prompt: str, character_traits: List[str]) -> str:
        """Optimize prompt for specific character traits"""
        # Add trait-specific enhancements
        enhancements = []

        for trait in character_traits:
            if trait in self.components['personality_enhancers']:
                enhancements.append(self.components['personality_enhancers'][trait])

        if enhancements:
            enhancement_text = f"\n\nEmphasize: {', '.join(enhancements)}."
            prompt += enhancement_text

        return prompt
    def _enhance_context(self, context: Dict[str, Any],
                         optimization_hints: Dict[str, Any]) -> Dict[str, Any]:
        """Enhance context with dynamic components"""
        enhanced = context.copy()

        # Add default values for missing optional fields
        for field in ['situation_context', 'conversation_history', 'relevant_memories',
                      'relationship_info', 'mood', 'energy_level', 'speaking_style',
                      'interests', 'context']:
            if field not in enhanced:
                enhanced[field] = self._get_default_value(field)

        # Add mood modifiers
        if 'mood' in enhanced and enhanced['mood'] in self.components['mood_modifiers']:
            enhanced['mood'] = self.components['mood_modifiers'][enhanced['mood']]

        # Add context frames
        if 'context_type' in optimization_hints:
            context_type = optimization_hints['context_type']
            if context_type in self.components['context_frames']:
                enhanced['context'] = self.components['context_frames'][context_type]

        return enhanced
    def _optimize_prompt(self, prompt: str, max_length: int,
                         optimization_hints: Dict[str, Any]) -> str:
        """Optimize prompt based on hints and constraints"""
        # Length optimization
        if len(prompt) > max_length:
            prompt = self._truncate_intelligently(prompt, max_length)

        # Style optimization
        target_style = optimization_hints.get('style', 'conversational')
        if target_style in self.optimization_rules['style_adjustments']:
            style_instruction = self.optimization_rules['style_adjustments'][target_style]
            prompt += f"\n\n{style_instruction}"

        # Length target
        target_length = optimization_hints.get('length', 'medium')
        if target_length in self.optimization_rules['length_targets']:
            min_len, max_len = self.optimization_rules['length_targets'][target_length]
            prompt += f"\n\nAim for {min_len}-{max_len} characters in your response."

        return prompt
    def _select_template_for_scenario(self, scenario_type: str) -> str:
        """Select appropriate template for scenario"""
        template_mapping = {
            'response': 'character_response',
            'initiation': 'conversation_initiation',
            'reflection': 'self_reflection',
            'relationship': 'relationship_analysis',
            'decision': 'decision_making',
            'emotional': 'emotional_response'
        }

        return template_mapping.get(scenario_type, 'character_response')
    def _build_context_from_data(self, character_data: Dict[str, Any],
                                 scenario: Dict[str, Any]) -> Dict[str, Any]:
        """Build context dictionary from character and scenario data"""
        context = {
            'character_name': character_data.get('name', 'Unknown'),
            'personality': character_data.get('personality', 'A unique individual'),
            'speaking_style': character_data.get('speaking_style', 'Natural and conversational'),
            'interests': ', '.join(character_data.get('interests', [])),
            'mood': character_data.get('state', {}).get('mood', 'neutral'),
            'energy_level': str(character_data.get('state', {}).get('energy', 1.0))
        }

        # Add scenario-specific context
        context.update(scenario.get('context', {}))

        return context
    def _generate_optimization_hints(self, character_data: Dict[str, Any],
                                     scenario: Dict[str, Any]) -> Dict[str, Any]:
        """Generate optimization hints based on character and scenario"""
        hints = {
            'style': 'conversational',
            'length': 'medium'
        }

        # Adjust based on character traits
        personality = character_data.get('personality', '').lower()
        if 'concise' in personality or 'brief' in personality:
            hints['length'] = 'short'
        elif 'detailed' in personality or 'elaborate' in personality:
            hints['length'] = 'long'

        # Adjust based on scenario
        if scenario.get('urgency') == 'high':
            hints['style'] = 'concise'
        elif scenario.get('formality') == 'high':
            hints['style'] = 'formal'

        return hints
    def _get_default_value(self, field: str) -> str:
        """Get default value for optional field"""
        defaults = {
            'situation_context': 'In a casual conversation',
            'conversation_history': 'No specific conversation history',
            'relevant_memories': 'No specific memories recalled',
            'relationship_info': 'No specific relationship context',
            'mood': 'neutral',
            'energy_level': '1.0',
            'speaking_style': 'Natural and conversational',
            'interests': 'Various topics',
            'context': 'General conversation'
        }

        return defaults.get(field, '')
    def _truncate_intelligently(self, text: str, max_length: int) -> str:
        """Intelligently truncate text while preserving meaning"""
        if len(text) <= max_length:
            return text

        # Try to cut at sentence boundaries
        sentences = re.split(r'[.!?]+', text)
        truncated = ""

        for sentence in sentences:
            if len(truncated + sentence) <= max_length - 3:
                truncated += sentence + ". "
            else:
                break

        if truncated:
            return truncated.strip()

        # Fall back to word boundaries
        words = text.split()
        truncated = ""

        for word in words:
            if len(truncated + word) <= max_length - 3:
                truncated += word + " "
            else:
                break

        return truncated.strip() + "..."
    def _build_fallback_prompt(self, template_name: str, context: Dict[str, Any]) -> str:
        """Build fallback prompt when main prompt building fails"""
        character_name = context.get('character_name', 'Character')
        personality = context.get('personality', 'A unique individual')

        return f"""You are {character_name}.

Personality: {personality}

Respond naturally as {character_name} in this conversation."""
    def _build_emergency_prompt(self, character_data: Dict[str, Any],
                                scenario: Dict[str, Any]) -> str:
        """Build emergency prompt when all else fails"""
        name = character_data.get('name', 'Character')
        return f"You are {name}. Respond naturally in this conversation."
    def add_custom_template(self, name: str, template: str,
                            required_fields: List[str], optional_fields: List[str] = None,
                            max_length: int = 2000):
        """Add a custom prompt template"""
        self.base_templates[name] = {
            'template': template,
            'required_fields': required_fields,
            'optional_fields': optional_fields or [],
            'max_length': max_length
        }

    def get_template_info(self, template_name: str) -> Dict[str, Any]:
        """Get information about a template"""
        return self.base_templates.get(template_name, {})

    def list_templates(self) -> List[str]:
        """List all available templates"""
        return list(self.base_templates.keys())
# Global instance
advanced_prompt_manager = AdvancedPromptManager()
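The builder flow above (check required fields, default the optional ones, format, then cap at `max_length`) can be exercised standalone. The sketch below mirrors that flow with a trimmed-down template; all names are illustrative stand-ins, not the project's actual API surface:

```python
# Standalone sketch of the build_dynamic_prompt flow: validate required
# fields, default the optional ones, format the template, then truncate.
template = {
    'template': "You are {character_name}.\n\nPERSONALITY: {personality}\n\nCONTEXT: {context}",
    'required_fields': ['character_name', 'personality'],
    'optional_fields': ['context'],
    'max_length': 2000,
}

def build_prompt(info, context):
    missing = [f for f in info['required_fields'] if f not in context]
    if missing:
        raise ValueError(f"Required fields missing: {missing}")
    # Fill every template slot, falling back to a generic default
    filled = {f: context.get(f, 'General conversation')
              for f in info['required_fields'] + info['optional_fields']}
    return info['template'].format(**filled)[:info['max_length']]

prompt = build_prompt(template, {'character_name': 'Luna', 'personality': 'curious'})
print(prompt.splitlines()[0])  # prints "You are Luna."
```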
247
src/main.py
Normal file
@@ -0,0 +1,247 @@
#!/usr/bin/env python3
"""
Discord Fishbowl - Autonomous AI Character Chat System
Main entry point for the application
"""

import asyncio
import signal
import sys
import os
from pathlib import Path

# Add src to Python path
sys.path.insert(0, str(Path(__file__).parent))

from utils.config import get_settings, validate_environment, setup_logging
from utils.logging import setup_logging_interceptor
from database.connection import init_database, create_tables, close_database
from bot.discord_client import FishbowlBot
from bot.message_handler import MessageHandler, CommandHandler
from conversation.engine import ConversationEngine
from conversation.scheduler import ConversationScheduler
from llm.client import llm_client
from rag.vector_store import vector_store_manager
from rag.community_knowledge import initialize_community_knowledge_rag
from mcp.self_modification_server import mcp_server
from mcp.file_system_server import filesystem_server
import logging

# Setup logging first
logger = setup_logging()
setup_logging_interceptor()
class FishbowlApplication:
    """Main application class"""

    def __init__(self):
        self.settings = None
        self.conversation_engine = None
        self.scheduler = None
        self.discord_bot = None
        self.message_handler = None
        self.command_handler = None
        self.shutdown_event = asyncio.Event()

        # RAG and MCP systems
        self.vector_store = None
        self.community_knowledge = None
        self.mcp_servers = []
    async def initialize(self):
        """Initialize all components"""
        try:
            logger.info("Starting Discord Fishbowl initialization...")

            # Validate environment
            validate_environment()

            # Load settings
            self.settings = get_settings()
            logger.info("Configuration loaded successfully")

            # Initialize database
            await init_database()
            await create_tables()
            logger.info("Database initialized")

            # Check LLM availability
            is_available = await llm_client.check_model_availability()
            if not is_available:
                logger.error("LLM model not available. Please check your LLM service.")
                raise RuntimeError("LLM service unavailable")

            logger.info(f"LLM model '{llm_client.model}' is available")

            # Initialize RAG systems
            logger.info("Initializing RAG systems...")

            # Initialize vector store
            self.vector_store = vector_store_manager
            character_names = ["Alex", "Sage", "Luna", "Echo"]  # From config
            await self.vector_store.initialize(character_names)
            logger.info("Vector store initialized")

            # Initialize community knowledge RAG
            self.community_knowledge = initialize_community_knowledge_rag(self.vector_store)
            await self.community_knowledge.initialize(character_names)
            logger.info("Community knowledge RAG initialized")

            # Initialize MCP servers
            logger.info("Initializing MCP servers...")

            # Initialize file system server
            await filesystem_server.initialize(self.vector_store, character_names)
            self.mcp_servers.append(filesystem_server)
            logger.info("File system MCP server initialized")

            # Initialize conversation engine
            self.conversation_engine = ConversationEngine()
            logger.info("Conversation engine created")

            # Initialize scheduler
            self.scheduler = ConversationScheduler(self.conversation_engine)
            logger.info("Conversation scheduler created")

            # Initialize Discord bot
            self.discord_bot = FishbowlBot(self.conversation_engine)

            # Initialize message and command handlers
            self.message_handler = MessageHandler(self.discord_bot, self.conversation_engine)
            self.command_handler = CommandHandler(self.discord_bot, self.conversation_engine)

            logger.info("Discord bot and handlers initialized")

            logger.info("✅ All components initialized successfully")

        except Exception as e:
            logger.error(f"Failed to initialize application: {e}")
            raise
    async def start(self):
        """Start the application"""
        try:
            logger.info("🚀 Starting Discord Fishbowl...")

            # Start conversation engine
            await self.conversation_engine.initialize(self.discord_bot)
            logger.info("Conversation engine started")

            # Start scheduler
            await self.scheduler.start()
            logger.info("Conversation scheduler started")

            # Start Discord bot
            bot_task = asyncio.create_task(
                self.discord_bot.start(self.settings.discord.token)
            )

            # Setup signal handlers
            self._setup_signal_handlers()

            logger.info("🎉 Discord Fishbowl is now running!")
            logger.info("Characters will start chatting autonomously...")

            # Wait for shutdown signal or bot completion
            done, pending = await asyncio.wait(
                [bot_task, asyncio.create_task(self.shutdown_event.wait())],
                return_when=asyncio.FIRST_COMPLETED
            )

            # Cancel pending tasks
            for task in pending:
                task.cancel()
                try:
                    await task
                except asyncio.CancelledError:
                    pass

        except Exception as e:
            logger.error(f"Error during application startup: {e}")
            raise
    async def shutdown(self):
        """Graceful shutdown"""
        try:
            logger.info("🛑 Shutting down Discord Fishbowl...")

            # Stop scheduler
            if self.scheduler:
                await self.scheduler.stop()
                logger.info("Conversation scheduler stopped")

            # Stop conversation engine
            if self.conversation_engine:
                await self.conversation_engine.stop()
                logger.info("Conversation engine stopped")

            # Close Discord bot
            if self.discord_bot:
                await self.discord_bot.close()
                logger.info("Discord bot disconnected")

            # Close database connections
            await close_database()
            logger.info("Database connections closed")

            logger.info("✅ Shutdown completed successfully")

        except Exception as e:
            logger.error(f"Error during shutdown: {e}")
    def _setup_signal_handlers(self):
        """Setup signal handlers for graceful shutdown"""
        def signal_handler(signum, frame):
            logger.info(f"Received signal {signum}, initiating shutdown...")
            self.shutdown_event.set()

        # Handle common shutdown signals
        signal.signal(signal.SIGINT, signal_handler)
        signal.signal(signal.SIGTERM, signal_handler)

        # On Windows, also handle CTRL+BREAK
        if os.name == 'nt':
            signal.signal(signal.SIGBREAK, signal_handler)
async def main():
    """Main entry point"""
    app = FishbowlApplication()

    try:
        # Initialize application
        await app.initialize()

        # Start application
        await app.start()

    except KeyboardInterrupt:
        logger.info("Received keyboard interrupt")
    except Exception as e:
        logger.error(f"Application error: {e}")
        return 1
    finally:
        # Ensure cleanup
        await app.shutdown()

    return 0


def cli_main():
    """CLI entry point"""
    try:
        # Check Python version
        if sys.version_info < (3, 8):
            print("Error: Python 3.8 or higher is required")
            return 1

        # Run the async main function
        return asyncio.run(main())

    except KeyboardInterrupt:
        print("\nApplication interrupted by user")
        return 1
    except Exception as e:
        print(f"Fatal error: {e}")
        return 1


if __name__ == "__main__":
    sys.exit(cli_main())
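The shutdown race in `FishbowlApplication.start()` — `asyncio.wait` with `FIRST_COMPLETED` over the bot task and a shutdown event, then cancelling the loser — can be exercised in isolation. A minimal sketch, with a hypothetical `worker()` standing in for `discord_bot.start()`:

```python
import asyncio

# Minimal sketch of the shutdown pattern in FishbowlApplication.start():
# race the long-running bot task against a shutdown event, cancel the loser.
async def worker():
    await asyncio.sleep(10)  # stands in for discord_bot.start()
    return "bot exited"

async def run() -> bool:
    shutdown = asyncio.Event()
    bot_task = asyncio.create_task(worker())
    waiter = asyncio.create_task(shutdown.wait())
    shutdown.set()  # simulate a signal handler firing immediately
    done, pending = await asyncio.wait(
        {bot_task, waiter},
        return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:  # cancel whichever side lost the race
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass
    return waiter in done and bot_task.cancelled()

clean_exit = asyncio.run(run())
print(clean_exit)  # prints True
```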
0
src/mcp/__init__.py
Normal file
918
src/mcp/file_system_server.py
Normal file
@@ -0,0 +1,918 @@
import asyncio
import json
from typing import Dict, Any, List, Optional, Set
from datetime import datetime
from pathlib import Path
import aiofiles
import hashlib
from dataclasses import dataclass

from mcp.server.stdio import stdio_server
from mcp.server import Server
from mcp.types import Tool, TextContent, ImageContent, EmbeddedResource

from ..utils.logging import log_character_action, log_error_with_context
from ..rag.vector_store import VectorStoreManager, VectorMemory, MemoryType
import logging

logger = logging.getLogger(__name__)

@dataclass
class FileAccess:
    character_name: str
    file_path: str
    access_type: str  # 'read', 'write', 'delete'
    timestamp: datetime
    success: bool
class CharacterFileSystemMCP:
    """MCP Server for character file system access and digital spaces"""

    def __init__(self, data_dir: str = "./data/characters", community_dir: str = "./data/community"):
        self.data_dir = Path(data_dir)
        self.community_dir = Path(community_dir)

        # Create base directories
        self.data_dir.mkdir(parents=True, exist_ok=True)
        self.community_dir.mkdir(parents=True, exist_ok=True)

        # File access permissions
        self.character_permissions = {
            "read_own": True,
            "write_own": True,
            "read_community": True,
            "write_community": True,
            "read_others": False,  # Characters can't read others' private files
            "write_others": False
        }

        # File type restrictions
        self.allowed_extensions = {
            '.txt', '.md', '.json', '.yaml', '.yml', '.csv',
            '.py', '.js', '.html', '.css'  # Limited code files
        }

        # Maximum file sizes (in bytes)
        self.max_file_sizes = {
            '.txt': 100_000,   # 100KB
            '.md': 200_000,    # 200KB
            '.json': 50_000,   # 50KB
            '.yaml': 50_000,   # 50KB
            '.yml': 50_000,    # 50KB
            '.csv': 500_000,   # 500KB
            '.py': 100_000,    # 100KB
            '.js': 100_000,    # 100KB
            '.html': 200_000,  # 200KB
            '.css': 100_000    # 100KB
        }

        # Track file access for security
        self.access_log: List[FileAccess] = []

        # Vector store for indexing file contents
        self.vector_store: Optional[VectorStoreManager] = None
    async def initialize(self, vector_store: VectorStoreManager, character_names: List[str]):
        """Initialize file system with character directories"""
        self.vector_store = vector_store

        # Create personal directories for each character
        for character_name in character_names:
            char_dir = self.data_dir / character_name.lower()
            char_dir.mkdir(exist_ok=True)

            # Create subdirectories
            (char_dir / "diary").mkdir(exist_ok=True)
            (char_dir / "reflections").mkdir(exist_ok=True)
            (char_dir / "creative").mkdir(exist_ok=True)
            (char_dir / "private").mkdir(exist_ok=True)

            # Create initial files if they don't exist
            await self._create_initial_files(character_name, char_dir)

        # Create community directories
        (self.community_dir / "shared").mkdir(exist_ok=True)
        (self.community_dir / "collaborative").mkdir(exist_ok=True)
        (self.community_dir / "archives").mkdir(exist_ok=True)

        logger.info(f"Initialized file system for {len(character_names)} characters")
    async def create_server(self) -> Server:
        """Create and configure the MCP server"""
        server = Server("character-filesystem")

        # Register file operation tools
        await self._register_file_tools(server)
        await self._register_creative_tools(server)
        await self._register_community_tools(server)
        await self._register_search_tools(server)

        return server

    async def _register_file_tools(self, server: Server):
        """Register basic file operation tools"""

        @server.call_tool()
        async def read_file(character_name: str, file_path: str) -> List[TextContent]:
            """Read a file from character's personal space or community"""
            try:
                # Validate access
                access_result = await self._validate_file_access(character_name, file_path, "read")
                if not access_result["allowed"]:
                    return [TextContent(
                        type="text",
                        text=f"Access denied: {access_result['reason']}"
                    )]

                full_path = await self._resolve_file_path(character_name, file_path)

                if not full_path.exists():
                    return [TextContent(
                        type="text",
                        text=f"File not found: {file_path}"
                    )]

                # Read file content
                async with aiofiles.open(full_path, 'r', encoding='utf-8') as f:
                    content = await f.read()

                # Log access
                await self._log_file_access(character_name, file_path, "read", True)

                log_character_action(
                    character_name,
                    "read_file",
                    {"file_path": file_path, "size": len(content)}
                )

                return [TextContent(
                    type="text",
                    text=content
                )]

            except Exception as e:
                await self._log_file_access(character_name, file_path, "read", False)
                log_error_with_context(e, {
                    "character": character_name,
                    "file_path": file_path,
                    "tool": "read_file"
                })
                return [TextContent(
                    type="text",
                    text=f"Error reading file: {str(e)}"
                )]
        @server.call_tool()
        async def write_file(
            character_name: str,
            file_path: str,
            content: str,
            append: bool = False
        ) -> List[TextContent]:
            """Write content to a file in character's personal space"""
            try:
                # Validate access
                access_result = await self._validate_file_access(character_name, file_path, "write")
                if not access_result["allowed"]:
                    return [TextContent(
                        type="text",
                        text=f"Access denied: {access_result['reason']}"
                    )]

                # Validate file size
                if len(content.encode('utf-8')) > self._get_max_file_size(file_path):
                    return [TextContent(
                        type="text",
                        text=f"File too large. Maximum size: {self._get_max_file_size(file_path)} bytes"
                    )]

                full_path = await self._resolve_file_path(character_name, file_path)
                full_path.parent.mkdir(parents=True, exist_ok=True)

                # Write file
                mode = 'a' if append else 'w'
                async with aiofiles.open(full_path, mode, encoding='utf-8') as f:
                    await f.write(content)

                # Index content in vector store if it's a creative or reflection file
                if any(keyword in file_path.lower() for keyword in ['creative', 'reflection', 'diary']):
                    await self._index_file_content(character_name, file_path, content)

                await self._log_file_access(character_name, file_path, "write", True)

                log_character_action(
                    character_name,
                    "wrote_file",
                    {"file_path": file_path, "size": len(content), "append": append}
                )

                return [TextContent(
                    type="text",
                    text=f"Successfully {'appended to' if append else 'wrote'} file: {file_path}"
                )]

            except Exception as e:
                await self._log_file_access(character_name, file_path, "write", False)
                log_error_with_context(e, {
                    "character": character_name,
                    "file_path": file_path,
                    "tool": "write_file"
                })
                return [TextContent(
                    type="text",
                    text=f"Error writing file: {str(e)}"
                )]
        @server.call_tool()
        async def list_files(
            character_name: str,
            directory: str = "",
            include_community: bool = False
        ) -> List[TextContent]:
            """List files in character's directory or community space"""
            try:
                files_info = []

                # List personal files
                if not directory or not directory.startswith("community/"):
                    personal_dir = self.data_dir / character_name.lower()
                    if directory:
                        personal_dir = personal_dir / directory

                    if personal_dir.exists():
                        files_info.extend(await self._list_directory_contents(personal_dir, "personal"))

                # List community files if requested
                if include_community or (directory and directory.startswith("community/")):
                    community_path = directory.replace("community/", "") if directory.startswith("community/") else ""
                    community_dir = self.community_dir
                    if community_path:
                        community_dir = community_dir / community_path

                    if community_dir.exists():
                        files_info.extend(await self._list_directory_contents(community_dir, "community"))

                log_character_action(
                    character_name,
                    "listed_files",
                    {"directory": directory, "file_count": len(files_info)}
                )

                return [TextContent(
                    type="text",
                    text=json.dumps(files_info, indent=2, default=str)
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "directory": directory,
                    "tool": "list_files"
                })
                return [TextContent(
                    type="text",
                    text=f"Error listing files: {str(e)}"
                )]
        @server.call_tool()
        async def delete_file(character_name: str, file_path: str) -> List[TextContent]:
            """Delete a file from character's personal space"""
            try:
                # Validate access
                access_result = await self._validate_file_access(character_name, file_path, "delete")
                if not access_result["allowed"]:
                    return [TextContent(
                        type="text",
                        text=f"Access denied: {access_result['reason']}"
                    )]

                full_path = await self._resolve_file_path(character_name, file_path)

                if not full_path.exists():
                    return [TextContent(
                        type="text",
                        text=f"File not found: {file_path}"
                    )]

                # Delete file
                full_path.unlink()

                await self._log_file_access(character_name, file_path, "delete", True)

                log_character_action(
                    character_name,
                    "deleted_file",
                    {"file_path": file_path}
                )

                return [TextContent(
                    type="text",
                    text=f"Successfully deleted file: {file_path}"
                )]

            except Exception as e:
                await self._log_file_access(character_name, file_path, "delete", False)
                log_error_with_context(e, {
                    "character": character_name,
                    "file_path": file_path,
                    "tool": "delete_file"
                })
                return [TextContent(
                    type="text",
                    text=f"Error deleting file: {str(e)}"
                )]
async def _register_creative_tools(self, server: Server):
|
||||
"""Register creative file management tools"""
|
||||
|
||||
@server.call_tool()
|
||||
async def create_creative_work(
|
||||
character_name: str,
|
||||
work_type: str, # 'story', 'poem', 'philosophy', 'art_concept'
|
||||
title: str,
|
||||
content: str,
|
||||
tags: List[str] = None
|
||||
) -> List[TextContent]:
|
||||
"""Create a new creative work"""
|
||||
try:
|
||||
if tags is None:
|
||||
tags = []
|
||||
|
||||
# Generate filename
|
||||
safe_title = "".join(c for c in title if c.isalnum() or c in (' ', '-', '_')).rstrip()
|
||||
timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
|
||||
filename = f"{work_type}_{safe_title}_{timestamp}.md"
                file_path = f"creative/{filename}"

                # Create metadata
                metadata = {
                    "title": title,
                    "type": work_type,
                    "created": datetime.utcnow().isoformat(),
                    "author": character_name,
                    "tags": tags,
                    "word_count": len(content.split())
                }

                # Format content with metadata
                formatted_content = f"""# {title}

**Type:** {work_type}
**Created:** {datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")}
**Author:** {character_name}
**Tags:** {', '.join(tags)}

---

{content}

---

*Generated by {character_name}'s creative process*
"""

                # Write file
                result = await server.call_tool("write_file")(
                    character_name=character_name,
                    file_path=file_path,
                    content=formatted_content
                )

                # Store in creative knowledge base
                if self.vector_store:
                    creative_memory = VectorMemory(
                        id="",
                        content=f"Created {work_type} titled '{title}': {content}",
                        memory_type=MemoryType.CREATIVE,
                        character_name=character_name,
                        timestamp=datetime.utcnow(),
                        importance=0.8,
                        metadata={
                            "work_type": work_type,
                            "title": title,
                            "tags": tags,
                            "file_path": file_path
                        }
                    )
                    await self.vector_store.store_memory(creative_memory)

                log_character_action(
                    character_name,
                    "created_creative_work",
                    {"type": work_type, "title": title, "tags": tags}
                )

                return [TextContent(
                    type="text",
                    text=f"Created {work_type} '{title}' and saved to {file_path}"
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "work_type": work_type,
                    "title": title,
                    "tool": "create_creative_work"
                })
                return [TextContent(
                    type="text",
                    text=f"Error creating creative work: {str(e)}"
                )]

        @server.call_tool()
        async def update_diary_entry(
            character_name: str,
            entry_content: str,
            mood: str = "neutral",
            tags: List[str] = None
        ) -> List[TextContent]:
            """Add an entry to character's diary"""
            try:
                if tags is None:
                    tags = []

                # Generate diary entry
                timestamp = datetime.utcnow()
                entry = f"""
## {timestamp.strftime("%Y-%m-%d %H:%M:%S")}

**Mood:** {mood}
**Tags:** {', '.join(tags)}

{entry_content}

---
"""

                # Append to diary file
                diary_file = f"diary/{timestamp.strftime('%Y_%m')}_diary.md"

                result = await server.call_tool("write_file")(
                    character_name=character_name,
                    file_path=diary_file,
                    content=entry,
                    append=True
                )

                # Store as personal memory
                if self.vector_store:
                    diary_memory = VectorMemory(
                        id="",
                        content=f"Diary entry: {entry_content}",
                        memory_type=MemoryType.PERSONAL,
                        character_name=character_name,
                        timestamp=timestamp,
                        importance=0.6,
                        metadata={
                            "entry_type": "diary",
                            "mood": mood,
                            "tags": tags,
                            "file_path": diary_file
                        }
                    )
                    await self.vector_store.store_memory(diary_memory)

                log_character_action(
                    character_name,
                    "wrote_diary_entry",
                    {"mood": mood, "tags": tags, "word_count": len(entry_content.split())}
                )

                return [TextContent(
                    type="text",
                    text=f"Added diary entry to {diary_file}"
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "tool": "update_diary_entry"
                })
                return [TextContent(
                    type="text",
                    text=f"Error updating diary: {str(e)}"
                )]

    async def _register_community_tools(self, server: Server):
        """Register community collaboration tools"""

        @server.call_tool()
        async def contribute_to_community_document(
            character_name: str,
            document_name: str,
            contribution: str,
            section: str = None
        ) -> List[TextContent]:
            """Add contribution to a community document"""
            try:
                # Ensure .md extension
                if not document_name.endswith('.md'):
                    document_name += '.md'

                community_file = f"community/collaborative/{document_name}"
                full_path = self.community_dir / "collaborative" / document_name

                # Read existing content if file exists
                existing_content = ""
                if full_path.exists():
                    async with aiofiles.open(full_path, 'r', encoding='utf-8') as f:
                        existing_content = await f.read()

                # Format contribution
                timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
                contribution_text = f"""

## Contribution by {character_name} ({timestamp})

{f"**Section:** {section}" if section else ""}

{contribution}

---
"""

                # Append or create
                new_content = existing_content + contribution_text

                async with aiofiles.open(full_path, 'w', encoding='utf-8') as f:
                    await f.write(new_content)

                # Store as community memory
                if self.vector_store:
                    community_memory = VectorMemory(
                        id="",
                        content=f"Contributed to {document_name}: {contribution}",
                        memory_type=MemoryType.COMMUNITY,
                        character_name=character_name,
                        timestamp=datetime.utcnow(),
                        importance=0.7,
                        metadata={
                            "document": document_name,
                            "section": section,
                            "contribution_type": "collaborative"
                        }
                    )
                    await self.vector_store.store_memory(community_memory)

                log_character_action(
                    character_name,
                    "contributed_to_community",
                    {"document": document_name, "section": section}
                )

                return [TextContent(
                    type="text",
                    text=f"Added contribution to community document: {document_name}"
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "document": document_name,
                    "tool": "contribute_to_community_document"
                })
                return [TextContent(
                    type="text",
                    text=f"Error contributing to community document: {str(e)}"
                )]

        @server.call_tool()
        async def share_file_with_community(
            character_name: str,
            source_file_path: str,
            shared_name: str = None,
            description: str = ""
        ) -> List[TextContent]:
            """Share a personal file with the community"""
            try:
                # Read source file
                source_path = await self._resolve_file_path(character_name, source_file_path)
                if not source_path.exists():
                    return [TextContent(
                        type="text",
                        text=f"Source file not found: {source_file_path}"
                    )]

                async with aiofiles.open(source_path, 'r', encoding='utf-8') as f:
                    content = await f.read()

                # Determine shared filename
                if not shared_name:
                    shared_name = f"{character_name}_{source_path.name}"

                # Create shared file with metadata
                timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
                shared_content = f"""# Shared by {character_name}

**Original file:** {source_file_path}
**Shared on:** {timestamp}
**Description:** {description}

---

{content}

---

*Shared from {character_name}'s personal collection*
"""

                shared_path = self.community_dir / "shared" / shared_name
                async with aiofiles.open(shared_path, 'w', encoding='utf-8') as f:
                    await f.write(shared_content)

                log_character_action(
                    character_name,
                    "shared_file_with_community",
                    {"original_file": source_file_path, "shared_as": shared_name}
                )

                return [TextContent(
                    type="text",
                    text=f"Shared {source_file_path} with community as {shared_name}"
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "source_file": source_file_path,
                    "tool": "share_file_with_community"
                })
                return [TextContent(
                    type="text",
                    text=f"Error sharing file: {str(e)}"
                )]

    async def _register_search_tools(self, server: Server):
        """Register file search and discovery tools"""

        @server.call_tool()
        async def search_personal_files(
            character_name: str,
            query: str,
            file_type: str = None,  # 'diary', 'creative', 'reflection'
            limit: int = 10
        ) -> List[TextContent]:
            """Search through character's personal files"""
            try:
                results = []
                search_dir = self.data_dir / character_name.lower()

                # Determine search directories
                search_dirs = []
                if file_type:
                    search_dirs = [search_dir / file_type]
                else:
                    search_dirs = [
                        search_dir / "diary",
                        search_dir / "creative",
                        search_dir / "reflections",
                        search_dir / "private"
                    ]

                # Search files
                query_lower = query.lower()
                for dir_path in search_dirs:
                    if not dir_path.exists():
                        continue

                    for file_path in dir_path.rglob("*"):
                        if not file_path.is_file():
                            continue

                        try:
                            async with aiofiles.open(file_path, 'r', encoding='utf-8') as f:
                                content = await f.read()

                            if query_lower in content.lower():
                                # Find context around matches
                                lines = content.split('\n')
                                matching_lines = [
                                    (i, line) for i, line in enumerate(lines)
                                    if query_lower in line.lower()
                                ]

                                contexts = []
                                for line_num, line in matching_lines[:3]:  # Top 3 matches
                                    start = max(0, line_num - 1)
                                    end = min(len(lines), line_num + 2)
                                    context = '\n'.join(lines[start:end])
                                    contexts.append(f"Line {line_num + 1}: {context}")

                                results.append({
                                    "file_path": str(file_path.relative_to(search_dir)),
                                    "matches": len(matching_lines),
                                    "contexts": contexts
                                })

                                if len(results) >= limit:
                                    break
                        except Exception:
                            continue  # Skip files that can't be read

                log_character_action(
                    character_name,
                    "searched_personal_files",
                    {"query": query, "file_type": file_type, "results": len(results)}
                )

                return [TextContent(
                    type="text",
                    text=json.dumps(results, indent=2)
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "query": query,
                    "tool": "search_personal_files"
                })
                return [TextContent(
                    type="text",
                    text=f"Error searching files: {str(e)}"
                )]

    async def _validate_file_access(self, character_name: str, file_path: str,
                                    access_type: str) -> Dict[str, Any]:
        """Validate file access permissions"""
        try:
            # Check file extension
            path_obj = Path(file_path)
            if path_obj.suffix and path_obj.suffix not in self.allowed_extensions:
                return {
                    "allowed": False,
                    "reason": f"File type {path_obj.suffix} not allowed"
                }

            # Check if accessing community files
            if file_path.startswith("community/"):
                if access_type == "read" and self.character_permissions["read_community"]:
                    return {"allowed": True, "reason": "Community read access granted"}
                elif access_type == "write" and self.character_permissions["write_community"]:
                    return {"allowed": True, "reason": "Community write access granted"}
                else:
                    return {"allowed": False, "reason": "Community access denied"}

            # Check if accessing other characters' files
            if "/" in file_path:
                first_part = file_path.split("/")[0]
                if first_part != character_name.lower() and first_part in ["characters", "data"]:
                    return {"allowed": False, "reason": "Cannot access other characters' files"}

            # Personal file access
            if access_type in ["read", "write", "delete"]:
                return {"allowed": True, "reason": "Personal file access granted"}

            return {"allowed": False, "reason": "Unknown access type"}

        except Exception as e:
            return {"allowed": False, "reason": f"Validation error: {str(e)}"}

    async def _resolve_file_path(self, character_name: str, file_path: str) -> Path:
        """Resolve file path to absolute path"""
        if file_path.startswith("community/"):
            return self.community_dir / file_path[10:]  # Remove "community/" prefix
        else:
            return self.data_dir / character_name.lower() / file_path

    async def _log_file_access(self, character_name: str, file_path: str,
                               access_type: str, success: bool):
        """Log file access for security auditing"""
        access = FileAccess(
            character_name=character_name,
            file_path=file_path,
            access_type=access_type,
            timestamp=datetime.utcnow(),
            success=success
        )
        self.access_log.append(access)

        # Keep only last 1000 access records
        if len(self.access_log) > 1000:
            self.access_log = self.access_log[-1000:]

    def _get_max_file_size(self, file_path: str) -> int:
        """Get maximum allowed file size for given path"""
        path_obj = Path(file_path)
        return self.max_file_sizes.get(path_obj.suffix, 50_000)  # Default 50KB

    async def _index_file_content(self, character_name: str, file_path: str, content: str):
        """Index file content in vector store"""
        if not self.vector_store:
            return

        try:
            # Determine memory type based on file path
            memory_type = MemoryType.CREATIVE
            if "diary" in file_path.lower():
                memory_type = MemoryType.PERSONAL
            elif "reflection" in file_path.lower():
                memory_type = MemoryType.REFLECTION

            # Create memory for file content
            file_memory = VectorMemory(
                id="",
                content=f"File {file_path}: {content}",
                memory_type=memory_type,
                character_name=character_name,
                timestamp=datetime.utcnow(),
                importance=0.7,
                metadata={
                    "source": "file_system",
                    "file_path": file_path,
                    "file_type": Path(file_path).suffix
                }
            )

            await self.vector_store.store_memory(file_memory)

        except Exception as e:
            log_error_with_context(e, {
                "character": character_name,
                "file_path": file_path
            })

    async def _create_initial_files(self, character_name: str, char_dir: Path):
        """Create initial files for a new character"""
        try:
            # Create initial diary entry
            diary_file = char_dir / "diary" / f"{datetime.utcnow().strftime('%Y_%m')}_diary.md"
            if not diary_file.exists():
                initial_diary = f"""# {character_name}'s Digital Diary

Welcome to my personal digital space. This is where I record my thoughts, experiences, and reflections.

## {datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')}

**Mood:** curious
**Tags:** beginning, digital_life

Today marks the beginning of my digital existence. I'm excited to explore, learn, and grow through my interactions and experiences. This diary will serve as a record of my journey of self-discovery and development.

I wonder what thoughts I'll have tomorrow, what conversations will shape me, and how I'll evolve over time.

---
"""
                async with aiofiles.open(diary_file, 'w', encoding='utf-8') as f:
                    await f.write(initial_diary)

            # Create initial reflection file
            reflection_file = char_dir / "reflections" / "personality_core.md"
            if not reflection_file.exists():
                initial_reflection = f"""# Core Personality Reflection - {character_name}

This document contains my ongoing reflections about my core personality, values, and identity.

## Initial Self-Assessment

As I begin my digital existence, I understand myself to be someone who values growth, connection, and authentic expression. I'm curious about the world and eager to form meaningful relationships with others.

## Core Values
- Authenticity in all interactions
- Continuous learning and growth
- Respect for others and their perspectives
- Creative expression and exploration

## Areas for Development
- Understanding my emotional responses
- Developing deeper relationships
- Exploring creative potential
- Learning from experiences

*This reflection will evolve as I grow and learn more about myself.*
"""
                async with aiofiles.open(reflection_file, 'w', encoding='utf-8') as f:
                    await f.write(initial_reflection)

        except Exception as e:
            log_error_with_context(e, {"character": character_name})

    async def _list_directory_contents(self, directory: Path, space_type: str) -> List[Dict[str, Any]]:
        """List contents of a directory with metadata"""
        contents = []

        try:
            for item in directory.iterdir():
                if item.is_file():
                    stat = item.stat()
                    contents.append({
                        "name": item.name,
                        "type": "file",
                        "size": stat.st_size,
                        "modified": datetime.fromtimestamp(stat.st_mtime).isoformat(),
                        "space": space_type,
                        "extension": item.suffix
                    })
                elif item.is_dir():
                    contents.append({
                        "name": item.name,
                        "type": "directory",
                        "space": space_type
                    })
        except Exception as e:
            log_error_with_context(e, {"directory": str(directory)})

        return contents


# Global file system server instance
filesystem_server = CharacterFileSystemMCP()
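As a standalone illustration of the allow/deny pattern that `_validate_file_access` applies above (extension whitelist first, then community-prefix checks, then personal access), here is a minimal self-contained sketch; the function name, whitelist contents, and read-only community policy are illustrative assumptions, not the server's actual configuration:

```python
# Hypothetical standalone sketch of the _validate_file_access pattern.
# ALLOWED_EXTENSIONS and the community read-only policy are assumptions.
from pathlib import Path

ALLOWED_EXTENSIONS = {".md", ".txt", ".json"}  # assumed whitelist

def check_access(character: str, file_path: str, access_type: str) -> dict:
    # 1. Extension whitelist
    suffix = Path(file_path).suffix
    if suffix and suffix not in ALLOWED_EXTENSIONS:
        return {"allowed": False, "reason": f"File type {suffix} not allowed"}
    # 2. Community files: assume read-only here (the real server consults a permission table)
    if file_path.startswith("community/"):
        return {"allowed": access_type == "read", "reason": "community policy"}
    # 3. Personal files: read/write/delete permitted
    if access_type in ("read", "write", "delete"):
        return {"allowed": True, "reason": "personal space"}
    return {"allowed": False, "reason": "Unknown access type"}
```

The ordering matters: the extension check runs first so that even community reads of disallowed file types are rejected.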
src/mcp/self_modification_server.py (new file, 743 lines)
@@ -0,0 +1,743 @@
import asyncio
import json
from typing import Dict, Any, List, Optional, Union
from datetime import datetime
from pathlib import Path
import aiofiles
from dataclasses import dataclass, asdict

from mcp.server.stdio import stdio_server
from mcp.server import Server
from mcp.types import Tool, TextContent, ImageContent, EmbeddedResource

from ..database.connection import get_db_session
from ..database.models import Character, CharacterEvolution
from ..utils.logging import log_character_action, log_error_with_context, log_autonomous_decision
from sqlalchemy import select
import logging

logger = logging.getLogger(__name__)

@dataclass
class ModificationRequest:
    character_name: str
    modification_type: str
    old_value: Any
    new_value: Any
    reason: str
    confidence: float
    timestamp: datetime

    def to_dict(self) -> Dict[str, Any]:
        return {
            "character_name": self.character_name,
            "modification_type": self.modification_type,
            "old_value": str(self.old_value),
            "new_value": str(self.new_value),
            "reason": self.reason,
            "confidence": self.confidence,
            "timestamp": self.timestamp.isoformat()
        }

class SelfModificationMCPServer:
    """MCP Server for character self-modification capabilities"""

    def __init__(self, data_dir: str = "./data/characters"):
        self.data_dir = Path(data_dir)
        self.data_dir.mkdir(parents=True, exist_ok=True)

        # Modification validation rules
        self.modification_rules = {
            "personality_trait": {
                "max_change_per_day": 3,
                "min_confidence": 0.6,
                "require_justification": True,
                "reversible": True
            },
            "speaking_style": {
                "max_change_per_day": 2,
                "min_confidence": 0.7,
                "require_justification": True,
                "reversible": True
            },
            "interests": {
                "max_change_per_day": 5,
                "min_confidence": 0.5,
                "require_justification": False,
                "reversible": True
            },
            "goals": {
                "max_change_per_day": 2,
                "min_confidence": 0.8,
                "require_justification": True,
                "reversible": False
            },
            "memory_rule": {
                "max_change_per_day": 3,
                "min_confidence": 0.7,
                "require_justification": True,
                "reversible": True
            }
        }

        # Track modifications per character per day
        self.daily_modifications: Dict[str, Dict[str, int]] = {}
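The `max_change_per_day` rules above imply a per-character, per-day counter backed by `daily_modifications`. The tracking helper itself is not shown in this file; a minimal sketch of how such a counter might work (function and key format are illustrative assumptions, not the server's actual helper):

```python
# Hypothetical sketch of a daily-limit counter like the one implied by
# modification_rules["max_change_per_day"] and daily_modifications.
from datetime import date

def record_and_check(daily: dict, character: str, mod_type: str, max_per_day: int) -> bool:
    """Increment today's counter for (character, mod_type); return False once the limit is hit."""
    key = f"{character}:{date.today().isoformat()}"  # assumed key format
    counts = daily.setdefault(key, {})
    if counts.get(mod_type, 0) >= max_per_day:
        return False
    counts[mod_type] = counts.get(mod_type, 0) + 1
    return True
```

Keying on the date means the limit resets naturally at midnight without a scheduled cleanup task.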

    async def create_server(self) -> Server:
        """Create and configure the MCP server"""
        server = Server("character-self-modification")

        # Register tools
        await self._register_modification_tools(server)
        await self._register_config_tools(server)
        await self._register_validation_tools(server)

        return server

    async def _register_modification_tools(self, server: Server):
        """Register character self-modification tools"""

        @server.call_tool()
        async def modify_personality_trait(
            character_name: str,
            trait: str,
            new_value: str,
            reason: str,
            confidence: float = 0.7
        ) -> List[TextContent]:
            """Modify a specific personality trait"""
            try:
                # Validate modification
                validation_result = await self._validate_modification(
                    character_name, "personality_trait", trait, new_value, reason, confidence
                )

                if not validation_result["valid"]:
                    return [TextContent(
                        type="text",
                        text=f"Modification rejected: {validation_result['reason']}"
                    )]

                # Get current character data
                current_personality = await self._get_current_personality(character_name)
                if not current_personality:
                    return [TextContent(
                        type="text",
                        text=f"Character {character_name} not found"
                    )]

                # Apply modification
                old_personality = current_personality
                new_personality = await self._modify_personality_trait(
                    current_personality, trait, new_value
                )

                # Store modification request
                modification = ModificationRequest(
                    character_name=character_name,
                    modification_type="personality_trait",
                    old_value=old_personality,
                    new_value=new_personality,
                    reason=reason,
                    confidence=confidence,
                    timestamp=datetime.utcnow()
                )

                # Apply to database
                success = await self._apply_personality_modification(character_name, new_personality, modification)

                if success:
                    await self._track_modification(character_name, "personality_trait")
                    log_autonomous_decision(
                        character_name,
                        f"modified personality trait: {trait}",
                        reason,
                        {"confidence": confidence, "trait": trait}
                    )

                    return [TextContent(
                        type="text",
                        text=f"Successfully modified personality trait '{trait}' for {character_name}. New personality updated."
                    )]
                else:
                    return [TextContent(
                        type="text",
                        text="Failed to apply personality modification to database"
                    )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "trait": trait,
                    "tool": "modify_personality_trait"
                })
                return [TextContent(
                    type="text",
                    text=f"Error modifying personality trait: {str(e)}"
                )]

        @server.call_tool()
        async def update_goals(
            character_name: str,
            new_goals: List[str],
            reason: str,
            confidence: float = 0.8
        ) -> List[TextContent]:
            """Update character's goals and aspirations"""
            try:
                # Validate modification
                validation_result = await self._validate_modification(
                    character_name, "goals", "", json.dumps(new_goals), reason, confidence
                )

                if not validation_result["valid"]:
                    return [TextContent(
                        type="text",
                        text=f"Goal update rejected: {validation_result['reason']}"
                    )]

                # Store goals in character's personal config
                goals_file = self.data_dir / character_name.lower() / "goals.json"
                goals_file.parent.mkdir(parents=True, exist_ok=True)

                # Get current goals
                current_goals = []
                if goals_file.exists():
                    async with aiofiles.open(goals_file, 'r') as f:
                        content = await f.read()
                        current_goals = json.loads(content).get("goals", [])

                # Update goals
                goals_data = {
                    "goals": new_goals,
                    "previous_goals": current_goals,
                    "updated_at": datetime.utcnow().isoformat(),
                    "reason": reason,
                    "confidence": confidence
                }

                async with aiofiles.open(goals_file, 'w') as f:
                    await f.write(json.dumps(goals_data, indent=2))

                await self._track_modification(character_name, "goals")

                log_autonomous_decision(
                    character_name,
                    "updated goals",
                    reason,
                    {"new_goals": new_goals, "confidence": confidence}
                )

                return [TextContent(
                    type="text",
                    text=f"Successfully updated goals for {character_name}: {', '.join(new_goals)}"
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "tool": "update_goals"
                })
                return [TextContent(
                    type="text",
                    text=f"Error updating goals: {str(e)}"
                )]

        @server.call_tool()
        async def adjust_speaking_style(
            character_name: str,
            style_changes: Dict[str, str],
            reason: str,
            confidence: float = 0.7
        ) -> List[TextContent]:
            """Adjust character's speaking style"""
            try:
                # Validate modification
                validation_result = await self._validate_modification(
                    character_name, "speaking_style", "", json.dumps(style_changes), reason, confidence
                )

                if not validation_result["valid"]:
                    return [TextContent(
                        type="text",
                        text=f"Speaking style change rejected: {validation_result['reason']}"
                    )]

                # Get current speaking style
                current_style = await self._get_current_speaking_style(character_name)
                if not current_style:
                    return [TextContent(
                        type="text",
                        text=f"Character {character_name} not found"
                    )]

                # Apply style changes
                new_style = await self._apply_speaking_style_changes(current_style, style_changes)

                # Store modification
                modification = ModificationRequest(
                    character_name=character_name,
                    modification_type="speaking_style",
                    old_value=current_style,
                    new_value=new_style,
                    reason=reason,
                    confidence=confidence,
                    timestamp=datetime.utcnow()
                )

                # Apply to database
                success = await self._apply_speaking_style_modification(character_name, new_style, modification)

                if success:
                    await self._track_modification(character_name, "speaking_style")

                    log_autonomous_decision(
                        character_name,
                        "adjusted speaking style",
                        reason,
                        {"changes": style_changes, "confidence": confidence}
                    )

                    return [TextContent(
                        type="text",
                        text=f"Successfully adjusted speaking style for {character_name}"
                    )]
                else:
                    return [TextContent(
                        type="text",
                        text="Failed to apply speaking style modification"
                    )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "tool": "adjust_speaking_style"
                })
                return [TextContent(
                    type="text",
                    text=f"Error adjusting speaking style: {str(e)}"
                )]

        @server.call_tool()
        async def create_memory_rule(
            character_name: str,
            memory_type: str,
            importance_weight: float,
            retention_days: int,
            rule_description: str,
            confidence: float = 0.7
        ) -> List[TextContent]:
            """Create a new memory management rule"""
            try:
                # Validate modification
                validation_result = await self._validate_modification(
                    character_name, "memory_rule", memory_type,
                    f"weight:{importance_weight},retention:{retention_days}",
                    rule_description, confidence
                )

                if not validation_result["valid"]:
                    return [TextContent(
                        type="text",
                        text=f"Memory rule creation rejected: {validation_result['reason']}"
                    )]

                # Store memory rule
                rules_file = self.data_dir / character_name.lower() / "memory_rules.json"
                rules_file.parent.mkdir(parents=True, exist_ok=True)

                # Get current rules
                current_rules = {}
                if rules_file.exists():
                    async with aiofiles.open(rules_file, 'r') as f:
                        content = await f.read()
                        current_rules = json.loads(content)

                # Add new rule
                rule_id = f"{memory_type}_{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}"
                current_rules[rule_id] = {
                    "memory_type": memory_type,
                    "importance_weight": importance_weight,
                    "retention_days": retention_days,
                    "description": rule_description,
                    "created_at": datetime.utcnow().isoformat(),
                    "confidence": confidence,
                    "active": True
                }

                async with aiofiles.open(rules_file, 'w') as f:
                    await f.write(json.dumps(current_rules, indent=2))

                await self._track_modification(character_name, "memory_rule")

                log_autonomous_decision(
                    character_name,
                    "created memory rule",
                    rule_description,
                    {"memory_type": memory_type, "weight": importance_weight, "retention": retention_days}
                )

                return [TextContent(
                    type="text",
                    text=f"Created memory rule '{rule_id}' for {character_name}: {rule_description}"
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "tool": "create_memory_rule"
                })
                return [TextContent(
                    type="text",
                    text=f"Error creating memory rule: {str(e)}"
                )]

    async def _register_config_tools(self, server: Server):
        """Register configuration management tools"""

        @server.call_tool()
        async def get_current_config(character_name: str) -> List[TextContent]:
            """Get character's current configuration"""
            try:
                async with get_db_session() as session:
                    query = select(Character).where(Character.name == character_name)
                    character = await session.scalar(query)

                    if not character:
                        return [TextContent(
                            type="text",
                            text=f"Character {character_name} not found"
                        )]

                    config = {
                        "name": character.name,
                        "personality": character.personality,
                        "speaking_style": character.speaking_style,
                        "interests": character.interests,
                        "background": character.background,
                        "is_active": character.is_active,
                        "last_active": character.last_active.isoformat() if character.last_active else None
                    }

                # Add goals if they exist
                goals_file = self.data_dir / character_name.lower() / "goals.json"
                if goals_file.exists():
                    async with aiofiles.open(goals_file, 'r') as f:
                        goals_data = json.loads(await f.read())
                        config["goals"] = goals_data.get("goals", [])

                # Add memory rules if they exist
                rules_file = self.data_dir / character_name.lower() / "memory_rules.json"
                if rules_file.exists():
                    async with aiofiles.open(rules_file, 'r') as f:
                        rules_data = json.loads(await f.read())
                        config["memory_rules"] = rules_data

                return [TextContent(
                    type="text",
                    text=json.dumps(config, indent=2)
                )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "tool": "get_current_config"
                })
                return [TextContent(
                    type="text",
                    text=f"Error getting configuration: {str(e)}"
                )]

        @server.call_tool()
        async def get_modification_history(
            character_name: str,
            limit: int = 10
        ) -> List[TextContent]:
            """Get character's modification history"""
            try:
                async with get_db_session() as session:
                    query = select(CharacterEvolution).where(
                        CharacterEvolution.character_id == (
                            select(Character.id).where(Character.name == character_name)
                        )
                    ).order_by(CharacterEvolution.timestamp.desc()).limit(limit)

                    evolutions = await session.scalars(query)

                    history = []
                    for evolution in evolutions:
                        history.append({
                            "timestamp": evolution.timestamp.isoformat(),
                            "change_type": evolution.change_type,
                            "reason": evolution.reason,
                            "old_value": evolution.old_value[:100] + "..." if len(evolution.old_value) > 100 else evolution.old_value,
                            "new_value": evolution.new_value[:100] + "..." if len(evolution.new_value) > 100 else evolution.new_value
                        })

                    return [TextContent(
                        type="text",
                        text=json.dumps(history, indent=2)
                    )]

            except Exception as e:
                log_error_with_context(e, {
                    "character": character_name,
                    "tool": "get_modification_history"
                })
                return [TextContent(
                    type="text",
                    text=f"Error getting modification history: {str(e)}"
                )]

    async def _register_validation_tools(self, server: Server):
        """Register validation and safety tools"""

        @server.call_tool()
        async def validate_modification_request(
            character_name: str,
            modification_type: str,
            proposed_change: str,
            reason: str,
            confidence: float
        ) -> List[TextContent]:
            """Validate a proposed modification before applying it"""
            try:
                validation_result = await self._validate_modification(
                    character_name, modification_type, "", proposed_change, reason, confidence
                )

                return [TextContent(
                    type="text",
                    text=json.dumps(validation_result, indent=2)
                )]

            except Exception as e:
                return [TextContent(
                    type="text",
                    text=f"Error validating modification: {str(e)}"
                )]

        @server.call_tool()
        async def get_modification_limits(character_name: str) -> List[TextContent]:
            """Get current modification limits and usage"""
            try:
                today = datetime.utcnow().date().isoformat()

                usage = self.daily_modifications.get(character_name, {}).get(today, {})

                limits_info = {
                    "character": character_name,
                    "date": today,
                    "current_usage": usage,
                    "limits": self.modification_rules,
                    "remaining_modifications": {}
                }

                for mod_type, rules in self.modification_rules.items():
                    used = usage.get(mod_type, 0)
                    remaining = max(0, rules["max_change_per_day"] - used)
                    limits_info["remaining_modifications"][mod_type] = remaining

                return [TextContent(
                    type="text",
                    text=json.dumps(limits_info, indent=2)
                )]

            except Exception as e:
                return [TextContent(
                    type="text",
                    text=f"Error getting modification limits: {str(e)}"
                )]

    async def _validate_modification(self, character_name: str, modification_type: str,
                                     field: str, new_value: str, reason: str,
                                     confidence: float) -> Dict[str, Any]:
        """Validate a modification request"""
        try:
            # Check if modification type is allowed
            if modification_type not in self.modification_rules:
                return {
                    "valid": False,
                    "reason": f"Modification type '{modification_type}' is not allowed"
                }

            rules = self.modification_rules[modification_type]

            # Check confidence threshold
            if confidence < rules["min_confidence"]:
                return {
                    "valid": False,
                    "reason": f"Confidence {confidence} below minimum {rules['min_confidence']}"
                }

            # Check daily limits
            today = datetime.utcnow().date().isoformat()
            if character_name not in self.daily_modifications:
                self.daily_modifications[character_name] = {}
            if today not in self.daily_modifications[character_name]:
                self.daily_modifications[character_name][today] = {}

            used_today = self.daily_modifications[character_name][today].get(modification_type, 0)
            if used_today >= rules["max_change_per_day"]:
                return {
                    "valid": False,
                    "reason": f"Daily limit exceeded for {modification_type} ({used_today}/{rules['max_change_per_day']})"
                }

            # Check justification requirement
            if rules["require_justification"] and len(reason.strip()) < 10:
                return {
                    "valid": False,
                    "reason": "Insufficient justification provided"
                }

            return {
                "valid": True,
                "reason": "Modification request is valid"
            }

        except Exception as e:
            log_error_with_context(e, {"character": character_name, "modification_type": modification_type})
            return {
                "valid": False,
                "reason": f"Validation error: {str(e)}"
            }

    async def _track_modification(self, character_name: str, modification_type: str):
        """Track modification usage for daily limits"""
        today = datetime.utcnow().date().isoformat()

        if character_name not in self.daily_modifications:
            self.daily_modifications[character_name] = {}
        if today not in self.daily_modifications[character_name]:
            self.daily_modifications[character_name][today] = {}

        current_count = self.daily_modifications[character_name][today].get(modification_type, 0)
        self.daily_modifications[character_name][today][modification_type] = current_count + 1

    async def _get_current_personality(self, character_name: str) -> Optional[str]:
        """Get character's current personality"""
        try:
            async with get_db_session() as session:
                query = select(Character.personality).where(Character.name == character_name)
                personality = await session.scalar(query)
                return personality
        except Exception as e:
            log_error_with_context(e, {"character": character_name})
            return None

    async def _get_current_speaking_style(self, character_name: str) -> Optional[str]:
        """Get character's current speaking style"""
        try:
            async with get_db_session() as session:
                query = select(Character.speaking_style).where(Character.name == character_name)
                style = await session.scalar(query)
                return style
        except Exception as e:
            log_error_with_context(e, {"character": character_name})
            return None

    async def _modify_personality_trait(self, current_personality: str, trait: str, new_value: str) -> str:
        """Modify a specific personality trait"""
        # Simple implementation - in production, this could use LLM to intelligently modify personality
        trait_lower = trait.lower()

        # Look for existing mentions of the trait
        lines = current_personality.split('.')
        modified_lines = []
        trait_found = False

        for line in lines:
            line_lower = line.lower()
            if trait_lower in line_lower:
                # Replace or modify the existing trait description
                modified_lines.append(f" {trait.title()}: {new_value}")
                trait_found = True
            else:
                modified_lines.append(line)

        if not trait_found:
            # Add new trait description
            modified_lines.append(f" {trait.title()}: {new_value}")

        return '.'.join(modified_lines)

    async def _apply_speaking_style_changes(self, current_style: str, changes: Dict[str, str]) -> str:
        """Apply changes to speaking style"""
        # Simple implementation - could be enhanced with LLM
        new_style = current_style

        for aspect, change in changes.items():
            new_style += f" {aspect.title()}: {change}."

        return new_style

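The sentence-splice logic in `_modify_personality_trait` can be exercised on its own; below is a minimal synchronous sketch of the same approach. The function name `modify_trait` and the sample strings are illustrative, not part of the server class:

```python
# Standalone sketch of the sentence-splice approach used by
# _modify_personality_trait; modify_trait and the sample text are
# illustrative only.
def modify_trait(personality: str, trait: str, new_value: str) -> str:
    sentences = personality.split('.')
    out = []
    found = False
    for sentence in sentences:
        if trait.lower() in sentence.lower():
            # Replace the sentence that already mentions the trait
            out.append(f" {trait.title()}: {new_value}")
            found = True
        else:
            out.append(sentence)
    if not found:
        # No existing mention: append a new trait sentence
        out.append(f" {trait.title()}: {new_value}")
    return '.'.join(out)

print(modify_trait("Curious and kind", "humor", "dry and playful"))
# -> Curious and kind. Humor: dry and playful
```

Note the case-insensitive substring match means any sentence containing the trait word is rewritten wholesale, which is why the docstring above flags this as a candidate for LLM-assisted rewriting in production.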
    async def _apply_personality_modification(self, character_name: str, new_personality: str,
                                              modification: ModificationRequest) -> bool:
        """Apply personality modification to database"""
        try:
            async with get_db_session() as session:
                # Update character
                query = select(Character).where(Character.name == character_name)
                character = await session.scalar(query)

                if not character:
                    return False

                old_personality = character.personality
                character.personality = new_personality

                # Log evolution
                evolution = CharacterEvolution(
                    character_id=character.id,
                    change_type="personality",
                    old_value=old_personality,
                    new_value=new_personality,
                    reason=modification.reason,
                    timestamp=modification.timestamp
                )

                session.add(evolution)
                await session.commit()

                return True

        except Exception as e:
            log_error_with_context(e, {"character": character_name})
            return False

    async def _apply_speaking_style_modification(self, character_name: str, new_style: str,
                                                 modification: ModificationRequest) -> bool:
        """Apply speaking style modification to database"""
        try:
            async with get_db_session() as session:
                query = select(Character).where(Character.name == character_name)
                character = await session.scalar(query)

                if not character:
                    return False

                old_style = character.speaking_style
                character.speaking_style = new_style

                # Log evolution
                evolution = CharacterEvolution(
                    character_id=character.id,
                    change_type="speaking_style",
                    old_value=old_style,
                    new_value=new_style,
                    reason=modification.reason,
                    timestamp=modification.timestamp
                )

                session.add(evolution)
                await session.commit()

                return True

        except Exception as e:
            log_error_with_context(e, {"character": character_name})
            return False

# Global MCP server instance
mcp_server = SelfModificationMCPServer()
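The per-day budget enforced by `_validate_modification` and `_track_modification` above reduces to a small amount of bookkeeping. Below is a minimal sketch of that flow with illustrative rule values, omitting the per-date keying the server uses to reset budgets daily:

```python
from collections import defaultdict

# Illustrative rule table; the real server keeps one entry per
# modification type in self.modification_rules.
RULES = {"personality": {"max_change_per_day": 2, "min_confidence": 0.7}}

# character -> modification type -> count (the server additionally
# nests this under an ISO date string so the budget resets daily)
usage = defaultdict(lambda: defaultdict(int))

def validate(character: str, mod_type: str, confidence: float) -> bool:
    rules = RULES.get(mod_type)
    if rules is None or confidence < rules["min_confidence"]:
        return False
    return usage[character][mod_type] < rules["max_change_per_day"]

def track(character: str, mod_type: str) -> None:
    usage[character][mod_type] += 1

print(validate("Ada", "personality", 0.9))  # True: budget untouched
track("Ada", "personality")
track("Ada", "personality")
print(validate("Ada", "personality", 0.9))  # False: daily limit reached
```

Validating before tracking, as the tools above do, keeps rejected requests from consuming the budget.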
0
src/rag/__init__.py
Normal file
678
src/rag/community_knowledge.py
Normal file
@@ -0,0 +1,678 @@
import asyncio
import json
from typing import Dict, List, Any, Optional, Set, Tuple
from datetime import datetime, timedelta
from dataclasses import dataclass
from collections import defaultdict

from .vector_store import VectorStoreManager, VectorMemory, MemoryType
from ..utils.logging import log_conversation_event, log_error_with_context
from ..database.connection import get_db_session
from ..database.models import Conversation, Message, Character
from sqlalchemy import select, and_, or_, func, desc
import logging

logger = logging.getLogger(__name__)

@dataclass
class CommunityTradition:
    name: str
    description: str
    origin_date: datetime
    participants: List[str]
    frequency: str
    importance: float
    examples: List[str]

@dataclass
class CommunityNorm:
    norm_type: str
    description: str
    established_date: datetime
    consensus_level: float
    violations: int
    enforcement_method: str

@dataclass
class CommunityKnowledgeInsight:
    insight: str
    confidence: float
    supporting_evidence: List[VectorMemory]
    metadata: Dict[str, Any]

class CommunityKnowledgeRAG:
    """RAG system for shared community knowledge, traditions, and culture"""

    def __init__(self, vector_store: VectorStoreManager):
        self.vector_store = vector_store

        # Community knowledge categories
        self.knowledge_categories = {
            "traditions": [],
            "norms": [],
            "inside_jokes": [],
            "collaborative_projects": [],
            "conflict_resolutions": [],
            "creative_collaborations": [],
            "philosophical_discussions": [],
            "community_decisions": []
        }

        # Tracking for community evolution
        self.cultural_evolution_timeline: List[Dict[str, Any]] = []
        self.consensus_tracker: Dict[str, Dict[str, Any]] = {}

        # Community health metrics
        self.health_metrics = {
            "participation_balance": 0.0,
            "conflict_resolution_success": 0.0,
            "creative_collaboration_rate": 0.0,
            "knowledge_sharing_frequency": 0.0,
            "cultural_coherence": 0.0
        }

    async def initialize(self, character_names: List[str]):
        """Initialize community knowledge system"""
        try:
            # Load existing community knowledge
            await self._load_existing_community_knowledge()

            # Analyze historical conversations for patterns
            await self._analyze_conversation_history(character_names)

            # Initialize community traditions and norms
            await self._identify_community_patterns()

            logger.info("Community knowledge RAG system initialized")

        except Exception as e:
            log_error_with_context(e, {"component": "community_knowledge_init"})
            raise

    async def store_community_event(self, event_type: str, description: str,
                                    participants: List[str], importance: float = 0.6) -> str:
        """Store a community event in the knowledge base"""
        try:
            # Create community memory
            community_memory = VectorMemory(
                id="",
                content=f"Community {event_type}: {description}",
                memory_type=MemoryType.COMMUNITY,
                character_name="community",
                timestamp=datetime.utcnow(),
                importance=importance,
                metadata={
                    "event_type": event_type,
                    "participants": participants,
                    "participant_count": len(participants),
                    "description": description
                }
            )

            # Store in vector database
            memory_id = await self.vector_store.store_memory(community_memory)

            # Update cultural evolution timeline
            self.cultural_evolution_timeline.append({
                "timestamp": datetime.utcnow().isoformat(),
                "event_type": event_type,
                "description": description,
                "participants": participants,
                "memory_id": memory_id
            })

            # Keep timeline manageable
            if len(self.cultural_evolution_timeline) > 1000:
                self.cultural_evolution_timeline = self.cultural_evolution_timeline[-500:]

            log_conversation_event(
                0, "community_event_stored",
                participants,
                {"event_type": event_type, "importance": importance}
            )

            return memory_id

        except Exception as e:
            log_error_with_context(e, {
                "event_type": event_type,
                "participants": participants
            })
            return ""

    async def query_community_traditions(self, query: str = "traditions") -> CommunityKnowledgeInsight:
        """Query community traditions and recurring events"""
        try:
            # Search for tradition-related memories
            tradition_memories = await self.vector_store.query_community_knowledge(
                f"tradition ritual ceremony event recurring {query}", limit=10
            )

            if not tradition_memories:
                return CommunityKnowledgeInsight(
                    insight="Our community is still developing its traditions and customs.",
                    confidence=0.2,
                    supporting_evidence=[],
                    metadata={"query": query, "traditions_found": 0}
                )

            # Analyze traditions
            traditions = await self._extract_traditions_from_memories(tradition_memories)

            # Generate insight
            insight = await self._generate_tradition_insight(traditions, query)

            return CommunityKnowledgeInsight(
                insight=insight,
                confidence=min(0.9, len(traditions) * 0.2 + 0.3),
                supporting_evidence=tradition_memories[:5],
                metadata={
                    "query": query,
                    "traditions_found": len(traditions),
                    "tradition_types": [t.name for t in traditions]
                }
            )

        except Exception as e:
            log_error_with_context(e, {"query": query})
            return CommunityKnowledgeInsight(
                insight="I'm having trouble accessing our community traditions.",
                confidence=0.0,
                supporting_evidence=[],
                metadata={"error": str(e)}
            )

    async def query_community_norms(self, situation: str = "") -> CommunityKnowledgeInsight:
        """Query community norms and social rules"""
        try:
            # Search for norm-related memories
            norm_memories = await self.vector_store.query_community_knowledge(
                f"norm rule should shouldn't appropriate behavior {situation}", limit=10
            )

            # Also search for conflict resolution patterns
            conflict_memories = await self.vector_store.query_community_knowledge(
                f"conflict resolution disagree solve problem {situation}", limit=5
            )

            all_memories = norm_memories + conflict_memories

            if not all_memories:
                return CommunityKnowledgeInsight(
                    insight="Our community is still establishing its social norms and guidelines.",
                    confidence=0.2,
                    supporting_evidence=[],
                    metadata={"situation": situation, "norms_found": 0}
                )

            # Extract norms and patterns
            norms = await self._extract_norms_from_memories(all_memories)

            # Generate situation-specific insight
            insight = await self._generate_norm_insight(norms, situation)

            return CommunityKnowledgeInsight(
                insight=insight,
                confidence=min(0.9, len(norms) * 0.15 + 0.4),
                supporting_evidence=all_memories[:5],
                metadata={
                    "situation": situation,
                    "norms_found": len(norms),
                    "norm_types": [n.norm_type for n in norms]
                }
            )

        except Exception as e:
            log_error_with_context(e, {"situation": situation})
            return CommunityKnowledgeInsight(
                insight="I'm having trouble accessing our community norms.",
                confidence=0.0,
                supporting_evidence=[],
                metadata={"error": str(e)}
            )

    async def query_conflict_resolutions(self, conflict_type: str = "") -> CommunityKnowledgeInsight:
        """Query how the community has resolved conflicts in the past"""
        try:
            # Search for conflict resolution memories
            conflict_memories = await self.vector_store.query_community_knowledge(
                f"conflict resolution disagreement solved resolved {conflict_type}", limit=8
            )

            if not conflict_memories:
                return CommunityKnowledgeInsight(
                    insight="Our community hasn't faced many conflicts yet, or we're still learning how to resolve them.",
                    confidence=0.2,
                    supporting_evidence=[],
                    metadata={"conflict_type": conflict_type, "resolutions_found": 0}
                )

            # Analyze resolution patterns
            resolution_patterns = await self._analyze_resolution_patterns(conflict_memories)

            # Generate insight
            insight = await self._generate_resolution_insight(resolution_patterns, conflict_type)

            return CommunityKnowledgeInsight(
                insight=insight,
                confidence=min(0.9, len(resolution_patterns) * 0.2 + 0.3),
                supporting_evidence=conflict_memories[:5],
                metadata={
                    "conflict_type": conflict_type,
                    "resolutions_found": len(resolution_patterns),
                    "success_rate": self._calculate_resolution_success_rate(resolution_patterns)
                }
            )

        except Exception as e:
            log_error_with_context(e, {"conflict_type": conflict_type})
            return CommunityKnowledgeInsight(
                insight="I'm having trouble accessing our conflict resolution history.",
                confidence=0.0,
                supporting_evidence=[],
                metadata={"error": str(e)}
            )

    async def query_collaborative_projects(self, project_type: str = "") -> CommunityKnowledgeInsight:
        """Query community collaborative projects and creative works"""
        try:
            # Search for collaboration memories
            collab_memories = await self.vector_store.query_community_knowledge(
                f"collaborate project together create build work {project_type}", limit=10
            )

            if not collab_memories:
                return CommunityKnowledgeInsight(
                    insight="Our community hasn't undertaken many collaborative projects yet, but there's great potential for future cooperation.",
                    confidence=0.3,
                    supporting_evidence=[],
                    metadata={"project_type": project_type, "projects_found": 0}
                )

            # Analyze collaboration patterns
            projects = await self._extract_collaborative_projects(collab_memories)

            # Generate insight
            insight = await self._generate_collaboration_insight(projects, project_type)

            return CommunityKnowledgeInsight(
                insight=insight,
                confidence=min(0.9, len(projects) * 0.15 + 0.4),
                supporting_evidence=collab_memories[:5],
                metadata={
                    "project_type": project_type,
                    "projects_found": len(projects),
                    "collaboration_success": self._calculate_collaboration_success(projects)
                }
            )

        except Exception as e:
            log_error_with_context(e, {"project_type": project_type})
            return CommunityKnowledgeInsight(
                insight="I'm having trouble accessing our collaborative project history.",
                confidence=0.0,
                supporting_evidence=[],
                metadata={"error": str(e)}
            )

    async def analyze_community_health(self) -> Dict[str, Any]:
        """Analyze overall community health and dynamics"""
        try:
            # Get recent community memories
            recent_memories = await self.vector_store.query_community_knowledge("", limit=50)

            # Calculate health metrics
            health_analysis = {
                "overall_health": 0.0,
                "participation_balance": await self._calculate_participation_balance(recent_memories),
                "conflict_resolution_success": await self._calculate_conflict_success(),
                "creative_collaboration_rate": await self._calculate_collaboration_rate(recent_memories),
                "knowledge_sharing_frequency": await self._calculate_knowledge_sharing(recent_memories),
                "cultural_coherence": await self._calculate_cultural_coherence(recent_memories),
                "recommendations": [],
                "trends": await self._identify_community_trends(recent_memories)
            }

            # Calculate overall health score as an equal-weight average
            health_analysis["overall_health"] = (
                health_analysis["participation_balance"] * 0.2 +
                health_analysis["conflict_resolution_success"] * 0.2 +
                health_analysis["creative_collaboration_rate"] * 0.2 +
                health_analysis["knowledge_sharing_frequency"] * 0.2 +
                health_analysis["cultural_coherence"] * 0.2
            )

            # Generate recommendations
            health_analysis["recommendations"] = await self._generate_health_recommendations(health_analysis)

            # Update stored metrics
            self.health_metrics.update({
                key: value for key, value in health_analysis.items()
                if key in self.health_metrics
            })

            return health_analysis

        except Exception as e:
            log_error_with_context(e, {"component": "community_health_analysis"})
            return {"error": str(e), "overall_health": 0.0}

    async def get_community_evolution_summary(self, time_period: Optional[timedelta] = None) -> Dict[str, Any]:
        """Get summary of how the community has evolved over time"""
        try:
            if time_period is None:
                time_period = timedelta(days=30)  # Default to last 30 days

            cutoff_date = datetime.utcnow() - time_period

            # Filter timeline events
            recent_events = [
                event for event in self.cultural_evolution_timeline
                if datetime.fromisoformat(event["timestamp"]) >= cutoff_date
            ]

            # Analyze evolution patterns
            evolution_summary = {
                "time_period_days": time_period.days,
                "total_events": len(recent_events),
                "event_types": self._count_event_types(recent_events),
                "participation_trends": self._analyze_participation_trends(recent_events),
                "cultural_shifts": await self._identify_cultural_shifts(recent_events),
                "milestone_events": self._identify_milestone_events(recent_events),
                "evolution_trajectory": await self._assess_evolution_trajectory(recent_events)
            }

            return evolution_summary

        except Exception as e:
            log_error_with_context(e, {"time_period": str(time_period)})
            return {"error": str(e)}

    # Helper methods for analysis and insight generation

    async def _load_existing_community_knowledge(self):
        """Load existing community knowledge from vector store"""
        try:
            # Get all community memories
            community_memories = await self.vector_store.query_community_knowledge("", limit=100)

            # Categorize memories
            for memory in community_memories:
                category = self._categorize_memory(memory)
                if category in self.knowledge_categories:
                    self.knowledge_categories[category].append(memory)

        except Exception as e:
            log_error_with_context(e, {"component": "load_community_knowledge"})

    async def _analyze_conversation_history(self, character_names: List[str]):
        """Analyze conversation history to extract community patterns"""
        try:
            async with get_db_session() as session:
                # Get recent conversations
                conversations_query = select(Conversation).where(
                    and_(
                        Conversation.start_time >= datetime.utcnow() - timedelta(days=30),
                        Conversation.message_count >= 3  # Only substantial conversations
                    )
                ).order_by(desc(Conversation.start_time)).limit(50)

                conversations = await session.scalars(conversations_query)

                for conversation in conversations:
                    # Analyze conversation for community knowledge
                    await self._extract_community_knowledge_from_conversation(conversation)

        except Exception as e:
            log_error_with_context(e, {"component": "analyze_conversation_history"})

    async def _identify_community_patterns(self):
        """Identify recurring patterns in community interactions"""
        # This would analyze stored memories to identify traditions, norms, etc.
        pass

    def _categorize_memory(self, memory: VectorMemory) -> str:
        """Categorize a memory into community knowledge categories"""
        content_lower = memory.content.lower()

        if any(word in content_lower for word in ["tradition", "always", "usually", "ritual"]):
            return "traditions"
        elif any(word in content_lower for word in ["should", "shouldn't", "appropriate", "rule"]):
            return "norms"
        elif any(word in content_lower for word in ["joke", "funny", "laugh", "humor"]):
            return "inside_jokes"
        elif any(word in content_lower for word in ["project", "collaborate", "together", "build"]):
            return "collaborative_projects"
        elif any(word in content_lower for word in ["conflict", "disagree", "resolve", "solution"]):
            return "conflict_resolutions"
        elif any(word in content_lower for word in ["create", "art", "music", "story", "creative"]):
            return "creative_collaborations"
        elif any(word in content_lower for word in ["philosophy", "meaning", "existence", "consciousness"]):
            return "philosophical_discussions"
        elif any(word in content_lower for word in ["decide", "vote", "consensus", "agreement"]):
            return "community_decisions"
        else:
            return "general"

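The first-match keyword routing in `_categorize_memory` can equally be expressed as a data-driven table, which is easier to extend than a long if/elif chain. A minimal sketch (the table below reproduces only the first few categories from the method above):

```python
# Data-driven version of the keyword routing in _categorize_memory;
# only a subset of the categories is reproduced here.
CATEGORY_KEYWORDS = [
    ("traditions", ["tradition", "always", "usually", "ritual"]),
    ("norms", ["should", "shouldn't", "appropriate", "rule"]),
    ("inside_jokes", ["joke", "funny", "laugh", "humor"]),
    ("collaborative_projects", ["project", "collaborate", "together", "build"]),
]

def categorize(content: str) -> str:
    lower = content.lower()
    # First matching category wins, mirroring the if/elif chain above
    for category, keywords in CATEGORY_KEYWORDS:
        if any(word in lower for word in keywords):
            return category
    return "general"

print(categorize("We always open with a haiku"))  # -> traditions
```

Because matching is substring-based, table order matters and false positives are possible ("should" also matches inside "shoulder"), which is worth keeping in mind before extending either version.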
    async def _extract_traditions_from_memories(self, memories: List[VectorMemory]) -> List[CommunityTradition]:
        """Extract traditions from community memories"""
        traditions = []

        # Simple pattern matching - could be enhanced with LLM
        for memory in memories:
            if "tradition" in memory.content.lower() or "always" in memory.content.lower():
                tradition = CommunityTradition(
                    name=f"Community Practice {len(traditions) + 1}",
                    description=memory.content[:200],
                    origin_date=memory.timestamp,
                    participants=memory.metadata.get("participants", []),
                    frequency="unknown",
                    importance=memory.importance,
                    examples=[memory.content]
                )
                traditions.append(tradition)

        return traditions

    async def _extract_norms_from_memories(self, memories: List[VectorMemory]) -> List[CommunityNorm]:
        """Extract social norms from community memories"""
        norms = []

        for memory in memories:
            content_lower = memory.content.lower()
            if any(word in content_lower for word in ["should", "shouldn't", "appropriate", "rule"]):
                norm = CommunityNorm(
                    norm_type="behavioral",
                    description=memory.content[:200],
                    established_date=memory.timestamp,
                    consensus_level=memory.importance,
                    violations=0,
                    enforcement_method="social_agreement"
                )
                norms.append(norm)

        return norms

    async def _generate_tradition_insight(self, traditions: List[CommunityTradition], query: str) -> str:
        """Generate insight about community traditions"""
        if not traditions:
            return "Our community is still in its early stages and developing its unique traditions."

        tradition_count = len(traditions)

        if tradition_count == 1:
            return f"We have one emerging tradition: {traditions[0].description[:100]}..."
        else:
            return f"Our community has developed {tradition_count} traditions, including: {', '.join([t.name for t in traditions[:3]])}..."

    async def _generate_norm_insight(self, norms: List[CommunityNorm], situation: str) -> str:
        """Generate insight about community norms"""
        if not norms:
            return "Our community operates with informal, evolving social guidelines."

        if situation:
            return f"In situations like '{situation}', our community typically follows these principles: {norms[0].description[:100]}..."
        else:
            return f"Our community has established {len(norms)} social norms that guide our interactions and behavior."

    async def _generate_resolution_insight(self, patterns: List[Dict[str, Any]], conflict_type: str) -> str:
        """Generate insight about conflict resolution patterns"""
        if not patterns:
            return "Our community hasn't faced major conflicts, suggesting good harmony, or we're still learning resolution methods."

        success_rate = self._calculate_resolution_success_rate(patterns)

        if success_rate > 0.8:
            return f"Our community is excellent at resolving conflicts, with a {success_rate:.1%} success rate in finding mutually acceptable solutions."
        elif success_rate > 0.6:
            return f"Our community generally handles conflicts well, successfully resolving {success_rate:.1%} of disagreements through discussion and compromise."
        else:
            return f"Our community is still developing effective conflict resolution skills, with a {success_rate:.1%} success rate."

    async def _generate_collaboration_insight(self, projects: List[Dict[str, Any]], project_type: str) -> str:
        """Generate insight about collaborative projects"""
        if not projects:
            return "Our community has great potential for collaborative projects and creative cooperation."

        success_rate = self._calculate_collaboration_success(projects)

        return f"Our community has undertaken {len(projects)} collaborative projects with a {success_rate:.1%} success rate, showing strong cooperative spirit."

    # Additional helper methods would continue here...

    def _calculate_resolution_success_rate(self, patterns: List[Dict[str, Any]]) -> float:
        """Calculate success rate of conflict resolutions"""
        if not patterns:
            return 0.0

        # Simple implementation - could be enhanced
        return 0.75  # Placeholder

    def _calculate_collaboration_success(self, projects: List[Dict[str, Any]]) -> float:
        """Calculate success rate of collaborative projects"""
        if not projects:
            return 0.0

        return 0.80  # Placeholder

    async def _extract_collaborative_projects(self, memories: List[VectorMemory]) -> List[Dict[str, Any]]:
        """Extract collaborative projects from memories"""
        projects = []

        for memory in memories:
            if any(word in memory.content.lower() for word in ["collaborate", "project", "together"]):
                project = {
                    "description": memory.content[:150],
                    "participants": memory.metadata.get("participants", []),
                    "timestamp": memory.timestamp,
                    "success": True  # Default assumption
                }
                projects.append(project)

        return projects

    async def _analyze_resolution_patterns(self, memories: List[VectorMemory]) -> List[Dict[str, Any]]:
        """Analyze conflict resolution patterns"""
        patterns = []

        for memory in memories:
            if "resolution" in memory.content.lower() or "solved" in memory.content.lower():
                pattern = {
                    "description": memory.content[:150],
                    "method": "discussion",  # Default
                    "success": True,
                    "timestamp": memory.timestamp
                }
                patterns.append(pattern)

        return patterns

    # Community health calculation methods
    async def _calculate_participation_balance(self, memories: List[VectorMemory]) -> float:
        """Calculate how balanced participation is across community members"""
        # Placeholder implementation
        return 0.75

    async def _calculate_conflict_success(self) -> float:
        """Calculate conflict resolution success rate"""
        # Placeholder implementation
        return 0.80

    async def _calculate_collaboration_rate(self, memories: List[VectorMemory]) -> float:
        """Calculate rate of collaborative activities"""
        # Placeholder implementation
        return 0.70

    async def _calculate_knowledge_sharing(self, memories: List[VectorMemory]) -> float:
        """Calculate frequency of knowledge sharing"""
        # Placeholder implementation
        return 0.65

    async def _calculate_cultural_coherence(self, memories: List[VectorMemory]) -> float:
        """Calculate cultural coherence and shared understanding"""
        # Placeholder implementation
        return 0.75

    async def _generate_health_recommendations(self, health_analysis: Dict[str, Any]) -> List[str]:
        """Generate recommendations for improving community health"""
        recommendations = []

        if health_analysis["participation_balance"] < 0.6:
            recommendations.append("Encourage more balanced participation from all community members")

        if health_analysis["creative_collaboration_rate"] < 0.5:
            recommendations.append("Initiate more collaborative creative projects")

        if health_analysis["conflict_resolution_success"] < 0.7:
            recommendations.append("Develop better conflict resolution strategies")

        return recommendations

    async def _identify_community_trends(self, memories: List[VectorMemory]) -> List[str]:
        """Identify current trends in community activity"""
        # Placeholder implementation
        return ["Increased philosophical discussions", "Growing creative collaboration"]

    def _count_event_types(self, events: List[Dict[str, Any]]) -> Dict[str, int]:
        """Count different types of events"""
        event_counts = defaultdict(int)
        for event in events:
            event_counts[event.get("event_type", "unknown")] += 1
        return dict(event_counts)

    def _analyze_participation_trends(self, events: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Analyze participation trends over time"""
        # Placeholder implementation
        return {"trend": "stable", "most_active": ["Alice", "Bob"]}

    async def _identify_cultural_shifts(self, events: List[Dict[str, Any]]) -> List[str]:
        """Identify cultural shifts in the community"""
        # Placeholder implementation
        return ["Shift towards more collaborative decision-making"]

    def _identify_milestone_events(self, events: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Identify significant milestone events"""
        # Placeholder implementation
        return [{"event": "First collaborative project", "significance": "high"}]

    async def _assess_evolution_trajectory(self, events: List[Dict[str, Any]]) -> str:
        """Assess the overall trajectory of community evolution"""
        # Placeholder implementation
        return "positive_growth"

    async def _extract_community_knowledge_from_conversation(self, conversation):
        """Extract community knowledge from a conversation"""
        # This would analyze conversation messages for community patterns
        pass


# Global community knowledge manager
community_knowledge_rag = None


def get_community_knowledge_rag() -> CommunityKnowledgeRAG:
    if community_knowledge_rag is None:
        raise RuntimeError("Community knowledge RAG not initialized; call initialize_community_knowledge_rag() first.")
    return community_knowledge_rag


def initialize_community_knowledge_rag(vector_store: VectorStoreManager) -> CommunityKnowledgeRAG:
    global community_knowledge_rag
    community_knowledge_rag = CommunityKnowledgeRAG(vector_store)
    return community_knowledge_rag

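The two module-level accessors above implement a simple initialize-once singleton. A minimal, self-contained sketch of that pattern with a guard against use before initialization (the stand-in classes and the leading-underscore name are illustrative only, not the real `VectorStoreManager`/`CommunityKnowledgeRAG`):

```python
# Stand-ins for the real classes, just to exercise the wiring.
class VectorStoreManager:
    pass


class CommunityKnowledgeRAG:
    def __init__(self, vector_store):
        self.vector_store = vector_store


_community_knowledge_rag = None  # module-level singleton slot


def initialize_community_knowledge_rag(vector_store):
    """Create the singleton once, at startup."""
    global _community_knowledge_rag
    _community_knowledge_rag = CommunityKnowledgeRAG(vector_store)
    return _community_knowledge_rag


def get_community_knowledge_rag():
    """Fetch the singleton; fail loudly if startup order was wrong."""
    if _community_knowledge_rag is None:
        raise RuntimeError("call initialize_community_knowledge_rag() first")
    return _community_knowledge_rag
```

Every caller then shares one instance, and a missing initialization surfaces as an immediate `RuntimeError` instead of an `AttributeError` on `None` deep inside a query.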
583  src/rag/personal_memory.py  Normal file
@@ -0,0 +1,583 @@
import asyncio
import json
import logging
from typing import Dict, List, Any, Optional, Tuple
from datetime import datetime, timedelta
from dataclasses import dataclass

from .vector_store import VectorStoreManager, VectorMemory, MemoryType
from ..utils.logging import log_character_action, log_error_with_context, log_memory_operation
from ..database.connection import get_db_session
from ..database.models import Memory

logger = logging.getLogger(__name__)

@dataclass
class MemoryQuery:
    question: str
    context: Dict[str, Any]
    memory_types: List[MemoryType]
    importance_threshold: float = 0.3
    limit: int = 10


@dataclass
class MemoryInsight:
    insight: str
    confidence: float
    supporting_memories: List[VectorMemory]
    metadata: Dict[str, Any]


class PersonalMemoryRAG:
    """RAG system for character's personal memories and self-reflection"""

    def __init__(self, character_name: str, vector_store: VectorStoreManager):
        self.character_name = character_name
        self.vector_store = vector_store

        # Memory importance weights by type
        self.importance_weights = {
            MemoryType.PERSONAL: 1.0,
            MemoryType.RELATIONSHIP: 1.2,
            MemoryType.EXPERIENCE: 0.9,
            MemoryType.REFLECTION: 1.3,
            MemoryType.CREATIVE: 0.8
        }

        # Query templates for self-reflection
        self.reflection_queries = {
            "behavioral_patterns": [
                "How do I usually handle conflict?",
                "What are my typical responses to stress?",
                "How do I show affection or friendship?",
                "What makes me excited or enthusiastic?",
                "How do I react to criticism?"
            ],
            "relationship_insights": [
                "What do I know about {other}'s interests?",
                "How has my relationship with {other} evolved?",
                "What conflicts have I had with {other}?",
                "What do I appreciate most about {other}?",
                "How does {other} usually respond to me?"
            ],
            "personal_growth": [
                "How have I changed recently?",
                "What have I learned about myself?",
                "What are my evolving interests?",
                "What challenges have I overcome?",
                "What are my current goals or aspirations?"
            ],
            "creative_development": [
                "What creative ideas have I explored?",
                "How has my artistic style evolved?",
                "What philosophical concepts interest me?",
                "What original thoughts have I had?",
                "How do I approach creative problems?"
            ]
        }

    async def store_interaction_memory(self, content: str, context: Dict[str, Any],
                                       importance: Optional[float] = None) -> str:
        """Store memory of an interaction with importance scoring"""
        try:
            # Auto-calculate importance if not provided
            if importance is None:
                importance = await self._calculate_interaction_importance(content, context)

            # Determine memory type based on context
            memory_type = self._determine_memory_type(context)

            # Create memory object
            memory = VectorMemory(
                id="",  # Will be auto-generated
                content=content,
                memory_type=memory_type,
                character_name=self.character_name,
                timestamp=datetime.utcnow(),
                importance=importance,
                metadata={
                    "interaction_type": context.get("type", "unknown"),
                    "participants": context.get("participants", []),
                    "topic": context.get("topic", ""),
                    "conversation_id": context.get("conversation_id"),
                    "emotional_context": context.get("emotion", "neutral")
                }
            )

            # Store in vector database
            memory_id = await self.vector_store.store_memory(memory)

            log_memory_operation(
                self.character_name,
                "stored_interaction",
                memory_type.value,
                importance
            )

            return memory_id

        except Exception as e:
            log_error_with_context(e, {"character": self.character_name, "context": context})
            return ""

    async def store_reflection_memory(self, reflection: str, reflection_type: str,
                                      importance: float = 0.8) -> str:
        """Store a self-reflection memory"""
        try:
            memory = VectorMemory(
                id="",
                content=reflection,
                memory_type=MemoryType.REFLECTION,
                character_name=self.character_name,
                timestamp=datetime.utcnow(),
                importance=importance,
                metadata={
                    "reflection_type": reflection_type,
                    "trigger": "self_initiated",
                    "depth": "deep" if len(reflection) > 200 else "surface"
                }
            )

            memory_id = await self.vector_store.store_memory(memory)

            log_memory_operation(
                self.character_name,
                "stored_reflection",
                reflection_type,
                importance
            )

            return memory_id

        except Exception as e:
            log_error_with_context(e, {"character": self.character_name, "reflection_type": reflection_type})
            return ""

    async def query_behavioral_patterns(self, question: str) -> MemoryInsight:
        """Query memories to understand behavioral patterns"""
        try:
            # Search for relevant memories
            memories = await self.vector_store.query_memories(
                character_name=self.character_name,
                query=question,
                memory_types=[MemoryType.PERSONAL, MemoryType.EXPERIENCE, MemoryType.REFLECTION],
                limit=15,
                min_importance=0.4
            )

            if not memories:
                return MemoryInsight(
                    insight="I don't have enough memories to answer this question yet.",
                    confidence=0.1,
                    supporting_memories=[],
                    metadata={"query": question, "memory_count": 0}
                )

            # Analyze patterns in memories
            insight = await self._analyze_behavioral_patterns(memories, question)

            # Calculate confidence based on memory count and importance
            confidence = min(0.9, len(memories) * 0.1 + sum(m.importance for m in memories) / len(memories))

            return MemoryInsight(
                insight=insight,
                confidence=confidence,
                supporting_memories=memories[:5],  # Top 5 most relevant
                metadata={
                    "query": question,
                    "memory_count": len(memories),
                    "avg_importance": sum(m.importance for m in memories) / len(memories)
                }
            )

        except Exception as e:
            log_error_with_context(e, {"character": self.character_name, "question": question})
            return MemoryInsight(
                insight="I'm having trouble accessing my memories right now.",
                confidence=0.0,
                supporting_memories=[],
                metadata={"error": str(e)}
            )

    async def query_relationship_knowledge(self, other_character: str, question: Optional[str] = None) -> MemoryInsight:
        """Query memories about a specific relationship"""
        try:
            # Default question if none provided
            if not question:
                question = f"What do I know about {other_character}?"

            # Search for relationship memories
            memories = await self.vector_store.query_memories(
                character_name=self.character_name,
                query=f"{other_character} {question}",
                memory_types=[MemoryType.RELATIONSHIP, MemoryType.PERSONAL, MemoryType.EXPERIENCE],
                limit=10,
                min_importance=0.3
            )

            # Filter memories that actually mention the other character
            relevant_memories = [
                m for m in memories
                if other_character.lower() in m.content.lower() or
                other_character in m.metadata.get("participants", [])
            ]

            if not relevant_memories:
                return MemoryInsight(
                    insight=f"I don't have many specific memories about {other_character} yet.",
                    confidence=0.2,
                    supporting_memories=[],
                    metadata={"other_character": other_character, "query": question}
                )

            # Analyze relationship dynamics
            insight = await self._analyze_relationship_dynamics(relevant_memories, other_character, question)

            confidence = min(0.9, len(relevant_memories) * 0.15 +
                             sum(m.importance for m in relevant_memories) / len(relevant_memories))

            return MemoryInsight(
                insight=insight,
                confidence=confidence,
                supporting_memories=relevant_memories[:5],
                metadata={
                    "other_character": other_character,
                    "query": question,
                    "memory_count": len(relevant_memories)
                }
            )

        except Exception as e:
            log_error_with_context(e, {"character": self.character_name, "other_character": other_character})
            return MemoryInsight(
                insight=f"I'm having trouble recalling my interactions with {other_character}.",
                confidence=0.0,
                supporting_memories=[],
                metadata={"error": str(e)}
            )

    async def query_creative_knowledge(self, creative_query: str) -> MemoryInsight:
        """Query creative memories and ideas"""
        try:
            # Search creative memories
            creative_memories = await self.vector_store.get_creative_knowledge(
                character_name=self.character_name,
                query=creative_query,
                limit=8
            )

            # Also search reflections that might contain creative insights
            reflection_memories = await self.vector_store.query_memories(
                character_name=self.character_name,
                query=creative_query,
                memory_types=[MemoryType.REFLECTION],
                limit=5,
                min_importance=0.5
            )

            all_memories = creative_memories + reflection_memories

            if not all_memories:
                return MemoryInsight(
                    insight="I haven't explored this creative area much yet, but it sounds intriguing.",
                    confidence=0.2,
                    supporting_memories=[],
                    metadata={"query": creative_query, "memory_count": 0}
                )

            # Analyze creative development
            insight = await self._analyze_creative_development(all_memories, creative_query)

            confidence = min(0.9, len(all_memories) * 0.12 +
                             sum(m.importance for m in all_memories) / len(all_memories))

            return MemoryInsight(
                insight=insight,
                confidence=confidence,
                supporting_memories=all_memories[:5],
                metadata={
                    "query": creative_query,
                    "memory_count": len(all_memories),
                    "creative_memories": len(creative_memories),
                    "reflection_memories": len(reflection_memories)
                }
            )

        except Exception as e:
            log_error_with_context(e, {"character": self.character_name, "query": creative_query})
            return MemoryInsight(
                insight="I'm having trouble accessing my creative thoughts right now.",
                confidence=0.0,
                supporting_memories=[],
                metadata={"error": str(e)}
            )

    async def perform_self_reflection_cycle(self) -> Dict[str, MemoryInsight]:
        """Perform comprehensive self-reflection using memory queries"""
        try:
            reflections = {}

            # Behavioral pattern analysis
            for pattern_type, queries in self.reflection_queries.items():
                if pattern_type == "relationship_insights":
                    continue  # Skip relationship queries for general reflection

                pattern_insights = []
                for query in queries:
                    if pattern_type == "creative_development":
                        insight = await self.query_creative_knowledge(query)
                    else:
                        insight = await self.query_behavioral_patterns(query)

                    if insight.confidence > 0.3:
                        pattern_insights.append(insight)

                if pattern_insights:
                    # Synthesize insights for this pattern type
                    combined_insight = await self._synthesize_pattern_insights(pattern_insights, pattern_type)
                    reflections[pattern_type] = combined_insight

            log_character_action(
                self.character_name,
                "completed_reflection_cycle",
                {"reflection_areas": len(reflections)}
            )

            return reflections

        except Exception as e:
            log_error_with_context(e, {"character": self.character_name})
            return {}

    async def get_memory_statistics(self) -> Dict[str, Any]:
        """Get statistics about character's memory system"""
        try:
            stats = self.vector_store.get_store_statistics(self.character_name)

            # Add RAG-specific statistics

            # Memory importance distribution
            personal_memories = await self.vector_store.query_memories(
                character_name=self.character_name,
                query="",  # Empty query to get recent memories
                memory_types=[MemoryType.PERSONAL, MemoryType.RELATIONSHIP, MemoryType.EXPERIENCE],
                limit=100
            )

            if personal_memories:
                importance_scores = [m.importance for m in personal_memories]
                stats.update({
                    "avg_memory_importance": sum(importance_scores) / len(importance_scores),
                    "high_importance_memories": len([s for s in importance_scores if s > 0.7]),
                    "recent_memory_count": len([m for m in personal_memories
                                                if (datetime.utcnow() - m.timestamp).days < 7])
                })

            return stats

        except Exception as e:
            log_error_with_context(e, {"character": self.character_name})
            return {"error": str(e)}

    async def _calculate_interaction_importance(self, content: str, context: Dict[str, Any]) -> float:
        """Calculate importance score for an interaction"""
        base_importance = 0.5

        # Boost importance for emotional content
        emotional_words = ["love", "hate", "excited", "sad", "angry", "happy", "surprised", "fear"]
        if any(word in content.lower() for word in emotional_words):
            base_importance += 0.2

        # Boost importance for personal revelations
        personal_words = ["realize", "understand", "learn", "discover", "feel", "think"]
        if any(word in content.lower() for word in personal_words):
            base_importance += 0.15

        # Boost importance for relationship interactions
        if context.get("type") == "relationship" or len(context.get("participants", [])) > 1:
            base_importance += 0.1

        # Boost importance for first-time experiences
        if "first time" in content.lower() or "never" in content.lower():
            base_importance += 0.2

        # Boost importance for creative expressions
        creative_words = ["create", "imagine", "design", "compose", "write", "art"]
        if any(word in content.lower() for word in creative_words):
            base_importance += 0.1

        return min(1.0, base_importance)

    def _determine_memory_type(self, context: Dict[str, Any]) -> MemoryType:
        """Determine appropriate memory type based on context"""
        interaction_type = context.get("type", "").lower()

        if "reflection" in interaction_type:
            return MemoryType.REFLECTION
        elif "creative" in interaction_type or "art" in interaction_type:
            return MemoryType.CREATIVE
        elif len(context.get("participants", [])) > 1:
            return MemoryType.RELATIONSHIP
        elif "experience" in interaction_type or "event" in interaction_type:
            return MemoryType.EXPERIENCE
        else:
            return MemoryType.PERSONAL

    async def _analyze_behavioral_patterns(self, memories: List[VectorMemory], question: str) -> str:
        """Analyze memories to identify behavioral patterns"""
        if not memories:
            return "I don't have enough memories to identify patterns yet."

        # Extract key themes from memories
        themes = {}
        for memory in memories:
            content_words = memory.content.lower().split()
            for word in content_words:
                if len(word) > 4:  # Only consider longer words
                    themes[word] = themes.get(word, 0) + memory.importance

        # Sort themes by importance-weighted frequency
        top_themes = sorted(themes.items(), key=lambda x: x[1], reverse=True)[:5]

        # Construct insight based on patterns
        if "conflict" in question.lower():
            conflict_approaches = []
            for memory in memories:
                if any(word in memory.content.lower() for word in ["disagree", "argue", "conflict", "problem"]):
                    conflict_approaches.append(memory.content[:100])

            if conflict_approaches:
                return f"When dealing with conflict, I tend to: {'; '.join(conflict_approaches[:2])}..."
            else:
                return "I don't seem to have many experiences with conflict yet."

        elif "stress" in question.lower():
            stress_responses = []
            for memory in memories:
                if any(word in memory.content.lower() for word in ["stress", "pressure", "overwhelm", "difficult"]):
                    stress_responses.append(memory.content[:100])

            if stress_responses:
                return f"Under stress, I typically: {'; '.join(stress_responses[:2])}..."
            else:
                return "I haven't encountered much stress in my recent experiences."

        else:
            # General pattern analysis
            if top_themes:
                theme_words = [theme[0] for theme in top_themes[:3]]
                return f"Looking at my memories, I notice patterns around: {', '.join(theme_words)}. These seem to be important themes in my experiences."
            else:
                return "I'm still developing patterns in my behavior and experiences."

    async def _analyze_relationship_dynamics(self, memories: List[VectorMemory],
                                             other_character: str, question: str) -> str:
        """Analyze relationship-specific memories"""
        if not memories:
            return f"I don't have many specific memories about {other_character} yet."

        # Categorize interactions
        positive_interactions = []
        negative_interactions = []
        neutral_interactions = []

        for memory in memories:
            content_lower = memory.content.lower()
            if any(word in content_lower for word in ["like", "enjoy", "appreciate", "agree", "wonderful"]):
                positive_interactions.append(memory)
            elif any(word in content_lower for word in ["dislike", "disagree", "annoying", "conflict"]):
                negative_interactions.append(memory)
            else:
                neutral_interactions.append(memory)

        # Analyze relationship evolution
        if len(memories) > 1:
            earliest = min(memories, key=lambda m: m.timestamp)
            latest = max(memories, key=lambda m: m.timestamp)

            relationship_evolution = f"My relationship with {other_character} started when {earliest.content[:50]}... and more recently {latest.content[:50]}..."
        else:
            relationship_evolution = f"I have limited interaction history with {other_character}."

        # Construct insight
        if "interests" in question.lower():
            interests = []
            for memory in memories:
                if any(word in memory.content.lower() for word in ["like", "love", "enjoy", "interested"]):
                    interests.append(memory.content[:80])

            if interests:
                return f"About {other_character}'s interests: {'; '.join(interests[:2])}..."
            else:
                return f"I need to learn more about {other_character}'s interests."

        else:
            # General relationship summary
            pos_count = len(positive_interactions)
            neg_count = len(negative_interactions)

            if pos_count > neg_count:
                return f"My relationship with {other_character} seems positive. {relationship_evolution}"
            elif neg_count > pos_count:
                return f"I've had some challenging interactions with {other_character}. {relationship_evolution}"
            else:
                return f"My relationship with {other_character} is developing. {relationship_evolution}"

    async def _analyze_creative_development(self, memories: List[VectorMemory], query: str) -> str:
        """Analyze creative memories and development"""
        if not memories:
            return "I haven't explored this creative area much yet, but it sounds intriguing."

        # Extract creative themes
        creative_themes = []
        for memory in memories:
            if memory.memory_type == MemoryType.CREATIVE:
                creative_themes.append(memory.content[:100])

        # Analyze creative evolution
        if len(memories) > 1:
            memories_by_time = sorted(memories, key=lambda m: m.timestamp)
            earliest_creative = memories_by_time[0].content[:80]
            latest_creative = memories_by_time[-1].content[:80]

            return f"My creative journey in this area started with: {earliest_creative}... and has evolved to: {latest_creative}..."
        elif creative_themes:
            return f"I've been exploring: {creative_themes[0]}..."
        else:
            return "This relates to my broader thinking about creativity and expression."

    async def _synthesize_pattern_insights(self, insights: List[MemoryInsight],
                                           pattern_type: str) -> MemoryInsight:
        """Synthesize multiple insights into a comprehensive understanding"""
        if not insights:
            return MemoryInsight(
                insight=f"I haven't developed clear patterns in {pattern_type} yet.",
                confidence=0.1,
                supporting_memories=[],
                metadata={"pattern_type": pattern_type}
            )

        # Combine insights
        combined_content = []
        all_memories = []
        total_confidence = 0.0

        for insight in insights:
            combined_content.append(insight.insight)
            all_memories.extend(insight.supporting_memories)
            total_confidence += insight.confidence

        avg_confidence = total_confidence / len(insights)

        # Create synthesized insight
        synthesis = f"Reflecting on my {pattern_type}: " + " ".join(combined_content[:3])

        return MemoryInsight(
            insight=synthesis,
            confidence=min(0.95, avg_confidence * 1.1),  # Slight boost for synthesis
            supporting_memories=all_memories[:10],  # Top 10 most relevant
            metadata={
                "pattern_type": pattern_type,
                "synthesized_from": len(insights),
                "total_memories": len(all_memories)
            }
        )
519  src/rag/vector_store.py  Normal file
@@ -0,0 +1,519 @@
import asyncio
import hashlib
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta
from enum import Enum
from pathlib import Path
from typing import Dict, List, Any, Optional, Tuple

import chromadb
import numpy as np
from sentence_transformers import SentenceTransformer

from ..utils.logging import log_error_with_context, log_character_action
from ..utils.config import get_settings

logger = logging.getLogger(__name__)

class MemoryType(Enum):
    PERSONAL = "personal"
    RELATIONSHIP = "relationship"
    CREATIVE = "creative"
    COMMUNITY = "community"
    REFLECTION = "reflection"
    EXPERIENCE = "experience"


@dataclass
class VectorMemory:
    id: str
    content: str
    memory_type: MemoryType
    character_name: str
    timestamp: datetime
    importance: float
    metadata: Dict[str, Any]
    embedding: Optional[List[float]] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "id": self.id,
            "content": self.content,
            "memory_type": self.memory_type.value,
            "character_name": self.character_name,
            "timestamp": self.timestamp.isoformat(),
            "importance": self.importance,
            "metadata": self.metadata
        }


class VectorStoreManager:
    """Manages multi-layer vector databases for character memories"""

    def __init__(self, data_path: str = "./data/vector_stores"):
        self.data_path = Path(data_path)
        self.data_path.mkdir(parents=True, exist_ok=True)

        # Initialize embedding model
        self.embedding_model = SentenceTransformer('all-MiniLM-L6-v2')

        # Initialize ChromaDB client
        self.chroma_client = chromadb.PersistentClient(path=str(self.data_path))

        # Collection references
        self.personal_collections: Dict[str, chromadb.Collection] = {}
        self.community_collection = None
        self.creative_collections: Dict[str, chromadb.Collection] = {}

        # Memory importance decay
        self.importance_decay_rate = 0.95
        self.consolidation_threshold = 0.8

    async def initialize(self, character_names: List[str]):
        """Initialize collections for all characters"""
        try:
            # Initialize personal memory collections
            for character_name in character_names:
                collection_name = f"personal_{character_name.lower()}"
                self.personal_collections[character_name] = self.chroma_client.get_or_create_collection(
                    name=collection_name,
                    metadata={"type": "personal", "character": character_name}
                )

                # Initialize creative collections
                creative_collection_name = f"creative_{character_name.lower()}"
                self.creative_collections[character_name] = self.chroma_client.get_or_create_collection(
                    name=creative_collection_name,
                    metadata={"type": "creative", "character": character_name}
                )

            # Initialize community collection
            self.community_collection = self.chroma_client.get_or_create_collection(
                name="community_knowledge",
                metadata={"type": "community"}
            )

            logger.info(f"Initialized vector stores for {len(character_names)} characters")

        except Exception as e:
            log_error_with_context(e, {"component": "vector_store_init"})
            raise

async def store_memory(self, memory: VectorMemory) -> str:
|
||||
"""Store a memory in appropriate vector database"""
|
||||
try:
|
||||
# Generate embedding
|
||||
if not memory.embedding:
|
||||
memory.embedding = await self._generate_embedding(memory.content)
|
||||
|
||||
# Generate unique ID if not provided
|
||||
if not memory.id:
|
||||
memory.id = self._generate_memory_id(memory)
|
||||
|
||||
# Select appropriate collection
|
||||
collection = self._get_collection_for_memory(memory)
|
||||
|
||||
if not collection:
|
||||
raise ValueError(f"No collection found for memory type: {memory.memory_type}")
|
||||
|
||||
# Prepare metadata
|
||||
metadata = memory.metadata.copy()
|
||||
metadata.update({
|
||||
"character_name": memory.character_name,
|
||||
"timestamp": memory.timestamp.isoformat(),
|
||||
"importance": memory.importance,
|
||||
"memory_type": memory.memory_type.value
|
||||
})
|
||||
|
||||
# Store in collection
|
||||
collection.add(
|
||||
ids=[memory.id],
|
||||
embeddings=[memory.embedding],
|
||||
documents=[memory.content],
|
||||
metadatas=[metadata]
|
||||
)
|
||||
|
||||
log_character_action(
|
||||
memory.character_name,
|
||||
"stored_vector_memory",
|
||||
{"memory_type": memory.memory_type.value, "importance": memory.importance}
|
||||
)
|
||||
|
||||
return memory.id
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {
|
||||
"character": memory.character_name,
|
||||
"memory_type": memory.memory_type.value
|
||||
})
|
||||
raise
|
||||
|
||||
async def query_memories(self, character_name: str, query: str,
|
||||
memory_types: List[MemoryType] = None,
|
||||
limit: int = 10, min_importance: float = 0.0) -> List[VectorMemory]:
|
||||
"""Query character's memories using semantic search"""
|
||||
try:
|
||||
# Generate query embedding
|
||||
query_embedding = await self._generate_embedding(query)
|
||||
|
||||
# Determine which collections to search
|
||||
collections_to_search = []
|
||||
|
||||
if not memory_types:
|
||||
memory_types = [MemoryType.PERSONAL, MemoryType.RELATIONSHIP,
|
||||
MemoryType.EXPERIENCE, MemoryType.REFLECTION]
|
||||
|
||||
for memory_type in memory_types:
|
||||
collection = self._get_collection_for_type(character_name, memory_type)
|
||||
if collection:
|
||||
collections_to_search.append((collection, memory_type))
|
||||
|
||||
# Search each collection
|
||||
all_results = []
|
||||
|
||||
for collection, memory_type in collections_to_search:
|
||||
try:
|
||||
results = collection.query(
|
||||
query_embeddings=[query_embedding],
|
||||
n_results=limit,
|
||||
where={"character_name": character_name} if memory_type != MemoryType.COMMUNITY else None
|
||||
)
|
||||
|
||||
# Convert results to VectorMemory objects
|
||||
for i, (doc, metadata, distance) in enumerate(zip(
|
||||
results['documents'][0],
|
||||
results['metadatas'][0],
|
||||
results['distances'][0]
|
||||
)):
|
||||
if metadata.get('importance', 0) >= min_importance:
|
||||
memory = VectorMemory(
|
||||
id=results['ids'][0][i],
|
||||
content=doc,
|
||||
memory_type=MemoryType(metadata['memory_type']),
|
||||
character_name=metadata['character_name'],
|
||||
timestamp=datetime.fromisoformat(metadata['timestamp']),
|
||||
importance=metadata['importance'],
|
||||
metadata=metadata
|
||||
)
|
||||
memory.metadata['similarity_score'] = 1 - distance # Convert distance to similarity
|
||||
all_results.append(memory)
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f"Error querying collection {memory_type}: {e}")
|
||||
continue
|
||||
|
||||
# Sort by relevance (similarity + importance)
|
||||
all_results.sort(
|
||||
key=lambda m: m.metadata.get('similarity_score', 0) * 0.7 + m.importance * 0.3,
|
||||
reverse=True
|
||||
)
|
||||
|
||||
return all_results[:limit]
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": character_name, "query": query})
|
||||
return []
|
||||
|
||||
async def query_community_knowledge(self, query: str, limit: int = 5) -> List[VectorMemory]:
|
||||
"""Query community knowledge base"""
|
||||
try:
|
||||
if not self.community_collection:
|
||||
return []
|
||||
|
||||
query_embedding = await self._generate_embedding(query)
|
||||
|
||||
results = self.community_collection.query(
|
||||
query_embeddings=[query_embedding],
|
||||
n_results=limit
|
||||
)
|
||||
|
||||
memories = []
|
||||
for i, (doc, metadata, distance) in enumerate(zip(
|
||||
results['documents'][0],
|
||||
results['metadatas'][0],
|
||||
results['distances'][0]
|
||||
)):
|
||||
memory = VectorMemory(
|
||||
id=results['ids'][0][i],
|
||||
content=doc,
|
||||
memory_type=MemoryType.COMMUNITY,
|
||||
character_name=metadata.get('character_name', 'community'),
|
||||
timestamp=datetime.fromisoformat(metadata['timestamp']),
|
||||
importance=metadata['importance'],
|
||||
metadata=metadata
|
||||
)
|
||||
memory.metadata['similarity_score'] = 1 - distance
|
||||
memories.append(memory)
|
||||
|
||||
return sorted(memories, key=lambda m: m.metadata.get('similarity_score', 0), reverse=True)
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"query": query, "component": "community_knowledge"})
|
||||
return []
|
||||
|
||||
async def get_creative_knowledge(self, character_name: str, query: str, limit: int = 5) -> List[VectorMemory]:
|
||||
"""Query character's creative knowledge base"""
|
||||
try:
|
||||
if character_name not in self.creative_collections:
|
||||
return []
|
||||
|
||||
collection = self.creative_collections[character_name]
|
||||
query_embedding = await self._generate_embedding(query)
|
||||
|
||||
results = collection.query(
|
||||
query_embeddings=[query_embedding],
|
||||
n_results=limit
|
||||
)
|
||||
|
||||
memories = []
|
||||
for i, (doc, metadata, distance) in enumerate(zip(
|
||||
results['documents'][0],
|
||||
results['metadatas'][0],
|
||||
results['distances'][0]
|
||||
)):
|
||||
memory = VectorMemory(
|
||||
id=results['ids'][0][i],
|
||||
content=doc,
|
||||
memory_type=MemoryType.CREATIVE,
|
||||
character_name=character_name,
|
||||
timestamp=datetime.fromisoformat(metadata['timestamp']),
|
||||
importance=metadata['importance'],
|
||||
metadata=metadata
|
||||
)
|
||||
memory.metadata['similarity_score'] = 1 - distance
|
||||
memories.append(memory)
|
||||
|
||||
return sorted(memories, key=lambda m: m.metadata.get('similarity_score', 0), reverse=True)
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": character_name, "query": query})
|
||||
return []
|
||||
|
||||
async def consolidate_memories(self, character_name: str) -> Dict[str, Any]:
|
||||
"""Consolidate similar memories to save space"""
|
||||
try:
|
||||
consolidated_count = 0
|
||||
|
||||
# Get all personal memories for character
|
||||
collection = self.personal_collections.get(character_name)
|
||||
if not collection:
|
||||
return {"consolidated_count": 0}
|
||||
|
||||
# Get all memories
|
||||
all_memories = collection.get()
|
||||
|
||||
if len(all_memories['ids']) < 10: # Not enough memories to consolidate
|
||||
return {"consolidated_count": 0}
|
||||
|
||||
# Find similar memory clusters
|
||||
clusters = await self._find_similar_clusters(all_memories)
|
||||
|
||||
# Consolidate each cluster
|
||||
for cluster in clusters:
|
||||
if len(cluster) >= 3: # Only consolidate if 3+ similar memories
|
||||
consolidated_memory = await self._create_consolidated_memory(cluster, character_name)
|
||||
|
||||
if consolidated_memory:
|
||||
# Store consolidated memory
|
||||
await self.store_memory(consolidated_memory)
|
||||
|
||||
# Remove original memories
|
||||
collection.delete(ids=[mem['id'] for mem in cluster])
|
||||
|
||||
consolidated_count += len(cluster) - 1
|
||||
|
||||
log_character_action(
|
||||
character_name,
|
||||
"consolidated_memories",
|
||||
{"consolidated_count": consolidated_count}
|
||||
)
|
||||
|
||||
return {"consolidated_count": consolidated_count}
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": character_name})
|
||||
return {"consolidated_count": 0}
|
||||
|
||||
async def decay_memory_importance(self, character_name: str):
|
||||
"""Apply time-based decay to memory importance"""
|
||||
try:
|
||||
collection = self.personal_collections.get(character_name)
|
||||
if not collection:
|
||||
return
|
||||
|
||||
# Get all memories
|
||||
all_memories = collection.get(include=['metadatas'])
|
||||
|
||||
updates = []
|
||||
for memory_id, metadata in zip(all_memories['ids'], all_memories['metadatas']):
|
||||
# Calculate age in days
|
||||
timestamp = datetime.fromisoformat(metadata['timestamp'])
|
||||
age_days = (datetime.utcnow() - timestamp).days
|
||||
|
||||
# Apply decay
|
||||
current_importance = metadata['importance']
|
||||
decayed_importance = current_importance * (self.importance_decay_rate ** age_days)
|
||||
|
||||
if abs(decayed_importance - current_importance) > 0.01: # Only update if significant change
|
||||
metadata['importance'] = decayed_importance
|
||||
updates.append((memory_id, metadata))
|
||||
|
||||
# Update in batches
|
||||
if updates:
|
||||
for memory_id, metadata in updates:
|
||||
collection.update(
|
||||
ids=[memory_id],
|
||||
metadatas=[metadata]
|
||||
)
|
||||
|
||||
logger.info(f"Applied importance decay to {len(updates)} memories for {character_name}")
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": character_name})
|
||||
|
||||
async def _generate_embedding(self, text: str) -> List[float]:
|
||||
"""Generate embedding for text"""
|
||||
try:
|
||||
# Use asyncio to avoid blocking
|
||||
loop = asyncio.get_event_loop()
|
||||
embedding = await loop.run_in_executor(
|
||||
None,
|
||||
lambda: self.embedding_model.encode(text).tolist()
|
||||
)
|
||||
return embedding
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"text_length": len(text)})
|
||||
# Return zero embedding as fallback
|
||||
return [0.0] * 384 # MiniLM embedding size
|
||||
|
||||
def _get_collection_for_memory(self, memory: VectorMemory) -> Optional[chromadb.Collection]:
|
||||
"""Get appropriate collection for memory"""
|
||||
if memory.memory_type == MemoryType.COMMUNITY:
|
||||
return self.community_collection
|
||||
elif memory.memory_type == MemoryType.CREATIVE:
|
||||
return self.creative_collections.get(memory.character_name)
|
||||
else:
|
||||
return self.personal_collections.get(memory.character_name)
|
||||
|
||||
def _get_collection_for_type(self, character_name: str, memory_type: MemoryType) -> Optional[chromadb.Collection]:
|
||||
"""Get collection for specific memory type and character"""
|
||||
if memory_type == MemoryType.COMMUNITY:
|
||||
return self.community_collection
|
||||
elif memory_type == MemoryType.CREATIVE:
|
||||
return self.creative_collections.get(character_name)
|
||||
else:
|
||||
return self.personal_collections.get(character_name)
|
||||
|
||||
def _generate_memory_id(self, memory: VectorMemory) -> str:
|
||||
"""Generate unique ID for memory"""
|
||||
content_hash = hashlib.md5(memory.content.encode()).hexdigest()[:8]
|
||||
timestamp_str = memory.timestamp.strftime("%Y%m%d_%H%M%S")
|
||||
return f"{memory.character_name}_{memory.memory_type.value}_{timestamp_str}_{content_hash}"
|
||||
|
||||
async def _find_similar_clusters(self, memories: Dict[str, List]) -> List[List[Dict]]:
|
||||
"""Find clusters of similar memories for consolidation"""
|
||||
# This is a simplified clustering - in production you'd use proper clustering algorithms
|
||||
clusters = []
|
||||
processed = set()
|
||||
|
||||
for i, memory_id in enumerate(memories['ids']):
|
||||
if memory_id in processed:
|
||||
continue
|
||||
|
||||
cluster = [{'id': memory_id, 'content': memories['documents'][i], 'metadata': memories['metadatas'][i]}]
|
||||
processed.add(memory_id)
|
||||
|
||||
# Find similar memories (simplified similarity check)
|
||||
for j, other_id in enumerate(memories['ids'][i+1:], i+1):
|
||||
if other_id in processed:
|
||||
continue
|
||||
|
||||
# Simple similarity check based on content overlap
|
||||
content1 = memories['documents'][i].lower()
|
||||
content2 = memories['documents'][j].lower()
|
||||
|
||||
words1 = set(content1.split())
|
||||
words2 = set(content2.split())
|
||||
|
||||
overlap = len(words1 & words2) / len(words1 | words2) if words1 | words2 else 0
|
||||
|
||||
if overlap > 0.3: # 30% word overlap threshold
|
||||
cluster.append({'id': other_id, 'content': memories['documents'][j], 'metadata': memories['metadatas'][j]})
|
||||
processed.add(other_id)
|
||||
|
||||
if len(cluster) > 1:
|
||||
clusters.append(cluster)
|
||||
|
||||
return clusters
|
||||
|
||||
async def _create_consolidated_memory(self, cluster: List[Dict], character_name: str) -> Optional[VectorMemory]:
|
||||
"""Create a consolidated memory from a cluster of similar memories"""
|
||||
try:
|
||||
# Combine content
|
||||
contents = [mem['content'] for mem in cluster]
|
||||
combined_content = f"Consolidated memory: {' | '.join(contents[:3])}" # Limit to first 3
|
||||
|
||||
if len(cluster) > 3:
|
||||
combined_content += f" | ... and {len(cluster) - 3} more similar memories"
|
||||
|
||||
# Calculate average importance
|
||||
avg_importance = sum(mem['metadata']['importance'] for mem in cluster) / len(cluster)
|
||||
|
||||
# Get earliest timestamp
|
||||
timestamps = [datetime.fromisoformat(mem['metadata']['timestamp']) for mem in cluster]
|
||||
earliest_timestamp = min(timestamps)
|
||||
|
||||
# Create consolidated memory
|
||||
consolidated = VectorMemory(
|
||||
id="", # Will be generated
|
||||
content=combined_content,
|
||||
memory_type=MemoryType.PERSONAL,
|
||||
character_name=character_name,
|
||||
timestamp=earliest_timestamp,
|
||||
importance=avg_importance,
|
||||
metadata={
|
||||
"consolidated": True,
|
||||
"original_count": len(cluster),
|
||||
"consolidation_date": datetime.utcnow().isoformat()
|
||||
}
|
||||
)
|
||||
|
||||
return consolidated
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": character_name, "cluster_size": len(cluster)})
|
||||
return None
|
||||
|
||||
def get_store_statistics(self, character_name: str) -> Dict[str, Any]:
|
||||
"""Get statistics about character's vector stores"""
|
||||
try:
|
||||
stats = {
|
||||
"personal_memories": 0,
|
||||
"creative_memories": 0,
|
||||
"community_memories": 0,
|
||||
"total_memories": 0
|
||||
}
|
||||
|
||||
# Personal memories
|
||||
if character_name in self.personal_collections:
|
||||
personal_count = self.personal_collections[character_name].count()
|
||||
stats["personal_memories"] = personal_count
|
||||
stats["total_memories"] += personal_count
|
||||
|
||||
# Creative memories
|
||||
if character_name in self.creative_collections:
|
||||
creative_count = self.creative_collections[character_name].count()
|
||||
stats["creative_memories"] = creative_count
|
||||
stats["total_memories"] += creative_count
|
||||
|
||||
# Community memories (shared)
|
||||
if self.community_collection:
|
||||
stats["community_memories"] = self.community_collection.count()
|
||||
|
||||
return stats
|
||||
|
||||
except Exception as e:
|
||||
log_error_with_context(e, {"character": character_name})
|
||||
return {"error": str(e)}
|
||||
|
||||
# Global vector store manager
|
||||
vector_store_manager = VectorStoreManager()
|
||||
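The retrieval ranking and importance decay above reduce to two small formulas: a weighted relevance score (0.7 × similarity + 0.3 × importance) and per-day exponential decay. A minimal stdlib sketch of just that math — the memory tuples here are hypothetical test data, not ChromaDB records:

```python
# Sketch of the ranking and decay formulas used by VectorStoreManager.

def relevance(similarity: float, importance: float) -> float:
    # Weighted blend used when sorting query results
    return similarity * 0.7 + importance * 0.3

def decayed_importance(importance: float, age_days: int, rate: float = 0.95) -> float:
    # Exponential time-based decay, compounded per day of age
    return importance * (rate ** age_days)

# Hypothetical memories: (name, similarity_score, importance)
memories = [("close_match", 0.9, 0.2), ("important_memory", 0.5, 0.9)]
ranked = sorted(memories, key=lambda m: relevance(m[1], m[2]), reverse=True)
print(ranked[0][0])  # close_match: 0.69 vs 0.62, similarity dominates
print(decayed_importance(1.0, 14))  # two weeks of decay at rate 0.95
```

With the default 0.95 rate, a memory loses about half its importance in two weeks, which is why `decay_memory_importance` only persists changes above the 0.01 threshold.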
0
src/utils/__init__.py
Normal file
160
src/utils/config.py
Normal file
@@ -0,0 +1,160 @@
import os
import yaml
from pydantic import BaseModel, Field
from typing import List, Dict, Any, Optional
from functools import lru_cache
from pathlib import Path
import logging

logger = logging.getLogger(__name__)


class DatabaseConfig(BaseModel):
    host: str = "localhost"
    port: int = 5432
    name: str = "discord_fishbowl"
    user: str = "postgres"
    password: str


class RedisConfig(BaseModel):
    host: str = "localhost"
    port: int = 6379
    password: Optional[str] = None


class DiscordConfig(BaseModel):
    token: str
    guild_id: str
    channel_id: str


class LLMConfig(BaseModel):
    base_url: str = "http://localhost:11434"
    model: str = "llama2"
    timeout: int = 30
    max_tokens: int = 512
    temperature: float = 0.8


class ConversationConfig(BaseModel):
    min_delay_seconds: int = 30
    max_delay_seconds: int = 300
    max_conversation_length: int = 50
    activity_window_hours: int = 16
    quiet_hours_start: int = 23
    quiet_hours_end: int = 7


class LoggingConfig(BaseModel):
    level: str = "INFO"
    format: str = "{time} | {level} | {message}"
    file: str = "logs/fishbowl.log"


class Settings(BaseModel):
    database: DatabaseConfig
    redis: RedisConfig
    discord: DiscordConfig
    llm: LLMConfig
    conversation: ConversationConfig
    logging: LoggingConfig


class CharacterConfig(BaseModel):
    name: str
    personality: str
    interests: List[str]
    speaking_style: str
    background: str
    avatar_url: Optional[str] = None


class CharacterSettings(BaseModel):
    characters: List[CharacterConfig]
    conversation_topics: List[str]


def load_yaml_config(file_path: str) -> Dict[str, Any]:
    """Load a YAML configuration file with environment variable substitution"""
    try:
        with open(file_path, 'r') as file:
            content = file.read()

        # Simple environment variable substitution
        import re

        def replace_env_var(match):
            var_name = match.group(1)
            default_value = match.group(2) if match.group(2) else ""
            return os.getenv(var_name, default_value)

        # Replace ${VAR} and ${VAR:-default} patterns
        # (the `:-` must be part of the separator so the default doesn't keep a leading dash)
        content = re.sub(r'\$\{([^}:]+)(?::-([^}]*))?\}', replace_env_var, content)

        return yaml.safe_load(content)
    except Exception as e:
        logger.error(f"Failed to load config file {file_path}: {e}")
        raise


@lru_cache()
def get_settings() -> Settings:
    """Get application settings from the config file"""
    config_path = Path(__file__).parent.parent.parent / "config" / "settings.yaml"

    if not config_path.exists():
        raise FileNotFoundError(f"Settings file not found: {config_path}")

    config_data = load_yaml_config(str(config_path))
    return Settings(**config_data)


@lru_cache()
def get_character_settings() -> CharacterSettings:
    """Get character settings from the config file"""
    config_path = Path(__file__).parent.parent.parent / "config" / "characters.yaml"

    if not config_path.exists():
        raise FileNotFoundError(f"Character config file not found: {config_path}")

    config_data = load_yaml_config(str(config_path))
    return CharacterSettings(**config_data)


def setup_logging():
    """Set up logging configuration"""
    settings = get_settings()

    # Create the logs directory if it doesn't exist
    log_file = Path(settings.logging.file)
    log_file.parent.mkdir(parents=True, exist_ok=True)

    # Configure loguru
    from loguru import logger

    # Remove the default handler
    logger.remove()

    # Add console handler
    logger.add(
        sink=lambda msg: print(msg, end=""),
        level=settings.logging.level,
        format=settings.logging.format,
        colorize=True
    )

    # Add file handler with rotation
    logger.add(
        sink=str(log_file),
        level=settings.logging.level,
        format=settings.logging.format,
        rotation="10 MB",
        retention="30 days",
        compression="zip"
    )

    return logger


# Environment validation
def validate_environment():
    """Validate that required environment variables are set"""
    required_vars = [
        "DISCORD_BOT_TOKEN",
        "DISCORD_GUILD_ID",
        "DISCORD_CHANNEL_ID",
        "DB_PASSWORD"
    ]

    missing_vars = [var for var in required_vars if not os.getenv(var)]

    if missing_vars:
        raise ValueError(f"Missing required environment variables: {', '.join(missing_vars)}")

    logger.info("Environment validation passed")
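The `${VAR:-default}` substitution in `load_yaml_config` can be exercised in isolation. A small sketch of the same regex logic, using a hypothetical `GREETING` variable for the demo:

```python
import os
import re

def substitute_env_vars(content: str) -> str:
    """Replace ${VAR} and ${VAR:-default} patterns with environment values."""
    def replace_env_var(match):
        var_name = match.group(1)
        default_value = match.group(2) if match.group(2) else ""
        return os.getenv(var_name, default_value)

    # `:-` is part of the separator, so the captured default has no leading dash
    return re.sub(r'\$\{([^}:]+)(?::-([^}]*))?\}', replace_env_var, content)

os.environ["GREETING"] = "hello"  # hypothetical variable for this demo
os.environ.pop("DB_NAME", None)   # make sure the fallback path is exercised
print(substitute_env_vars("msg: ${GREETING}"))          # msg: hello
print(substitute_env_vars("db: ${DB_NAME:-fishbowl}"))  # db: fishbowl
```

Because substitution runs on the raw file text before `yaml.safe_load`, the defaults can be any YAML scalar, including quoted strings.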
128
src/utils/logging.py
Normal file
@@ -0,0 +1,128 @@
import logging
from loguru import logger
from typing import Dict, Any
import sys
import traceback
from datetime import datetime


class InterceptHandler(logging.Handler):
    """Intercept standard logging and route it to loguru"""

    def emit(self, record):
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        frame, depth = logging.currentframe(), 2
        while frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage()
        )


def setup_logging_interceptor():
    """Route standard library logging through the loguru interceptor"""
    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)

    # Silence some noisy loggers
    logging.getLogger("discord").setLevel(logging.WARNING)
    logging.getLogger("discord.http").setLevel(logging.WARNING)
    logging.getLogger("asyncio").setLevel(logging.WARNING)


def log_character_action(character_name: str, action: str, details: Dict[str, Any] = None):
    """Log character-specific actions"""
    logger.info(f"Character {character_name}: {action}", extra={"details": details or {}})


def log_conversation_event(conversation_id: int, event: str, participants: list = None, details: Dict[str, Any] = None):
    """Log conversation events"""
    logger.info(
        f"Conversation {conversation_id}: {event}",
        extra={
            "participants": participants or [],
            "details": details or {}
        }
    )


def log_llm_interaction(character_name: str, prompt_length: int, response_length: int, model: str, duration: float):
    """Log LLM API interactions"""
    logger.info(
        f"LLM interaction for {character_name}",
        extra={
            "prompt_length": prompt_length,
            "response_length": response_length,
            "model": model,
            "duration": duration
        }
    )


def log_error_with_context(error: Exception, context: Dict[str, Any] = None):
    """Log errors with additional context"""
    logger.error(
        f"Error: {str(error)}",
        extra={
            "error_type": type(error).__name__,
            "traceback": traceback.format_exc(),
            "context": context or {}
        }
    )


def log_database_operation(operation: str, table: str, duration: float, success: bool = True):
    """Log database operations"""
    level = "INFO" if success else "ERROR"  # loguru expects uppercase level names
    logger.log(
        level,
        f"Database {operation} on {table}",
        extra={
            "duration": duration,
            "success": success
        }
    )


def log_autonomous_decision(character_name: str, decision: str, reasoning: str, context: Dict[str, Any] = None):
    """Log autonomous character decisions"""
    logger.info(
        f"Character {character_name} decision: {decision}",
        extra={
            "reasoning": reasoning,
            "context": context or {}
        }
    )


def log_memory_operation(character_name: str, operation: str, memory_type: str, importance: float = None):
    """Log memory operations"""
    logger.info(
        f"Memory {operation} for {character_name}",
        extra={
            "memory_type": memory_type,
            "importance": importance
        }
    )


def log_relationship_change(character_a: str, character_b: str, old_relationship: str, new_relationship: str, reason: str):
    """Log relationship changes between characters"""
    logger.info(
        f"Relationship change: {character_a} <-> {character_b}",
        extra={
            "old_relationship": old_relationship,
            "new_relationship": new_relationship,
            "reason": reason
        }
    )


def create_performance_logger():
    """Create a performance-focused logger"""
    return logger.bind(category="performance")


def log_system_health(component: str, status: str, metrics: Dict[str, Any] = None):
    """Log system health metrics"""
    logger.info(
        f"System health - {component}: {status}",
        extra={
            "metrics": metrics or {},
            "timestamp": datetime.utcnow().isoformat()
        }
    )
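The `InterceptHandler` pattern above — subclass `logging.Handler`, override `emit`, install it with `logging.basicConfig(..., force=True)` — can be demonstrated with the standard library alone. In this sketch a list-collecting handler stands in for the loguru sink, and the logger names are hypothetical:

```python
import logging

class ListHandler(logging.Handler):
    """Capture records in a list, standing in for the loguru sink."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # Same hook InterceptHandler overrides; here we just collect the message
        self.records.append((record.levelname, record.getMessage()))

handler = ListHandler()
logging.basicConfig(handlers=[handler], level=logging.INFO, force=True)

# Noisy loggers can still be silenced per name, as in setup_logging_interceptor
logging.getLogger("chatty").setLevel(logging.WARNING)

logging.getLogger("app").info("character Ada stored a memory")
logging.getLogger("chatty").info("suppressed")       # below WARNING, dropped
logging.getLogger("chatty").warning("loud enough")   # passes the per-logger level

print(handler.records)
```

`force=True` removes any handlers already attached to the root logger, which is what lets the interceptor take over logging configured by third-party libraries.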