🜂 Canvas with Live AI Observation Feeds
A sovereign web interface for real-time AI observation, analysis, and collaborative synthesis. This application provides a canvas for user observations while integrating live intelligence streams from multiple AI systems, ensuring user sovereignty and comprehensive analysis capabilities.
Launch with an ASCII art startup sequence and enjoy an interface featuring a square grid layout, real-time orchestration status, dynamic resizing, and a professional user experience.
Features
Core Functionality
- Sovereign Canvas: User-controlled workspace for observations and assessments
- Live AI Feeds: Real-time intelligence streams from specialized AI systems:
- ⚖️ DJINN: Governance & Strategic Analysis
- 🔮 NAZAR: Fractal & Consciousness Analysis
- 🌊 NARRA: Pattern Recognition & Synthesis
- 🐋 WHALE: Deep Interrogation & Memory
- 🔱 WATCHTOWER: Operational Monitoring & Metrics
Advanced Capabilities
- AI Collaborative Synthesis: Multi-system analysis with unified synthesis
- Hierarchical Governance: NAZAR-led triage council with DJINN oversight
- Interactive Chat: Direct communication with AI entities
- Mouse Tracking: Behavioral analysis and insights
- Real-time Metrics: System performance monitoring
- Auto-save & Export: Persistent data management
- Canvas Change Detection: Intelligent content analysis triggers
- Dynamic GUI Resizing: CTRL + Mousewheel for instant interface scaling
- Real-time Orchestration Status: Live AI system confidence monitoring
- Enhanced Layout: Square grid design for optimal space utilization
- Launch Experience: ASCII art batch files for startup
Technical Features
- Activity-Based Polling: Dynamic AI update intervals based on user engagement
- Intelligent Caching: Optimized response caching with configurable durations
- Parallel Processing: Concurrent AI system queries for efficiency
- Memory Management: Conversational continuity across sessions
- Content Complexity Analysis: Dynamic domain detection and insight refresh
- Full-Height AI Feeds: Optimized panel layout for maximum content visibility
- Active Synthesis Indicators: Dynamic status messages showing AI processing state
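Parallel processing, for example, typically amounts to firing all system queries at once and gathering the results. A minimal sketch in JavaScript (the `querySystem` callback is an assumed stand-in for the real per-system call, not an identifier from the app):

```javascript
// Query all AI systems concurrently and collect results by system name.
// `querySystem` is an assumed helper standing in for the real per-system call.
async function queryAllSystems(systems, querySystem) {
    const results = await Promise.all(
        systems.map(async (name) => [name, await querySystem(name)])
    );
    return Object.fromEntries(results);
}
```

Because the queries run concurrently, total latency is bounded by the slowest system rather than the sum of all five.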
Requirements
Software Dependencies
- Ollama: Local AI model server (version 0.1.0+ required)
- Modern Web Browser: Chrome 90+, Firefox 88+, Safari 14+, Edge 90+
- Local Server: Required for proper operation and local-only traffic (see below)
AI Models
The application requires the following Ollama model:
- `gemma3:1b` (used for all AI systems)
System Requirements
- RAM: 4GB minimum, 8GB recommended
- Storage: 2GB free space for models
- Network: Local Ollama connection (localhost:11434)
Quick Start Guide
Option 1: Professional Launch (Recommended)
Use the provided batch files for the ultimate launch experience:
- Double-click `launch-canvas.bat`
- Watch the ASCII art startup sequence
- The script will automatically:
- Start Ollama server (if not already running)
- Launch Python web server
- Open your browser to the application
- Display real-time status updates
That’s it! Enjoy the launch experience with full automation.
Option 2: Manual Setup
If you prefer manual control:
Step 1: Download the Repository
```shell
git clone https://github.com/Yufok1/Canvas-with-observation-feeds-HTML.git
cd Canvas-with-observation-feeds-HTML
```
Step 2: Install Ollama
- Download from: https://ollama.ai/
- Install and run Ollama
- Pull the required model:

```shell
ollama pull gemma3:1b
```
Step 3: Start Ollama Server

```shell
ollama serve
```
Step 4: Start Local Web Server
Important: You must start the server from the project directory that contains the HTML file.
```shell
python -m http.server 8000
```
Why this matters: The web server serves files from the directory where it’s started. If you start the server from a different directory, you’ll get 404 errors when trying to access the HTML file.
Step 5: Access the Application
Open your browser and go to: http://localhost:8000/canvas-with-observation-feeds.html
That’s it! The AI features will now work because both the web app and Ollama are running locally.
Enhanced Interface Features
Once launched, you’ll experience:
- Square Layout: Optimized 2x2 grid for balanced viewing
- Full-Height Feeds: Maximum content visibility in each panel
- Real-Time Status: Live orchestration updates in the header
- Dynamic Placeholders: Context-aware status messages
- Keyboard Shortcuts: Prominent CTRL+mousewheel display for resizing
- No Breathing Effect: Clean, professional animations
- Active Synthesis Indicators: Visual feedback for AI processing
Why GitHub Pages Won’t Work
The GitHub Pages version (https://yufok1.github.io/...) cannot connect to Ollama due to fundamental browser security restrictions:
GitHub Pages Demo (UI Only)
You can view the interface at: https://yufok1.github.io/Canvas-with-observation-feeds-HTML/canvas-with-observation-feeds.html
This is for UI demonstration only - the AI features will not work due to CORS restrictions.
CORS Security Restriction
- Browsers block remote websites from accessing `localhost` resources
- This is a security feature to prevent malicious sites from accessing your local services
- GitHub Pages is considered a “remote origin” even though you own the repository
The Error You See
```
Access to fetch at 'http://localhost:11434/api/generate' from origin 'https://yufok1.github.io' has been blocked by CORS policy
```
This error is expected and cannot be fixed for the GitHub Pages version.
Browser Workarounds (Not Recommended)
Some browsers allow disabling CORS for development, but this:
- Reduces your security
- May not work reliably
- Is not suitable for production use
Recommendation: Use the local version - it’s more secure, faster, and fully functional. The GitHub Pages version is only for UI demonstration.
Installation
1. Install Ollama

```shell
# Download and install Ollama from https://ollama.ai/
# Follow platform-specific installation instructions
```

2. Pull Required Models

```shell
# Pull the unified model used by all AI systems
ollama pull gemma3:1b
```

3. Start Ollama Server

```shell
# Start Ollama in a terminal
ollama serve
```
4. Launch the Application
You must run a local web server to access the application. Directly opening the HTML file (using `file://`) will cause issues with browser security, localStorage, and AI connectivity.
Start a local server (choose one):

```shell
# Using Python (recommended)
python -m http.server 8000

# Using Node.js (alternative)
npx http-server -p 8000
```
Then access the app via: http://localhost:8000/canvas-with-observation-feeds.html
The interface will automatically connect to Ollama on localhost:11434 for all AI features. All traffic and data remain local to your machine.
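A call to Ollama's generate endpoint can be sketched as follows. The `/api/generate` endpoint, the `model`/`prompt` fields, and the `stream: false` option are part of Ollama's public API; the helper names and error handling are illustrative assumptions:

```javascript
// Build the request body for Ollama's /api/generate endpoint.
function buildGenerateRequest(model, prompt) {
    return {
        model: model,   // e.g. "gemma3:1b"
        prompt: prompt,
        stream: false   // request a single JSON response instead of a stream
    };
}

// Send a prompt to a local Ollama server and return the generated text.
// `queryOllama` is an illustrative helper, not the app's actual function.
async function queryOllama(model, prompt, baseUrl = "http://localhost:11434") {
    const res = await fetch(`${baseUrl}/api/generate`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(buildGenerateRequest(model, prompt))
    });
    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
    const data = await res.json();
    return data.response;   // non-streaming responses carry the text here
}
```

Since both the page and Ollama live on localhost, no CORS barrier applies and no data leaves your machine.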
Usage
Getting Started
- Canvas Workspace: Begin by typing observations in the main canvas area
- AI Monitoring: AI systems automatically analyze your content and provide insights
- Interactive Chat: Use the chat interface to communicate directly with AI entities
- Synthesis: Click “🎭 Synthesize” to trigger collaborative AI analysis
Key Interactions
- Observation Tools: Mark content with pattern, anomaly, correlation, insight, or note markers
- Canvas Controls: Save, export, or clear your work
- AI Feeds: Monitor real-time intelligence streams in the feeds panel
- Metrics Dashboard: View system performance and correlations
- Triage Force: Access advanced analysis capabilities via the triage button
- Browser Controls: Use CTRL + mousewheel to resize windows and interface elements
Advanced Features
- Proactive Engagement: AI systems may initiate conversations based on content patterns
- Memory Continuity: Previous interactions are remembered across sessions
- Content Analysis: Complex content triggers enhanced AI analysis
- Mouse Insights: Behavioral patterns are analyzed for user experience optimization
Architecture
System Components
- Frontend: Single HTML file with embedded CSS/JavaScript
- AI Integration: Direct API calls to local Ollama instance
- Data Storage: Browser localStorage for persistence
- Caching System: Intelligent response caching with Map-based storage
- Polling Engine: Activity-aware update scheduling
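The localStorage persistence layer behind auto-save can be pictured with a small sketch (the `canvasContent` key and helper names are assumptions for illustration, not the app's actual identifiers):

```javascript
// Auto-save sketch: persist canvas text to localStorage with a timestamp.
// The "canvasContent" key is an assumed name for illustration.
function saveCanvas(content, storage = localStorage) {
    storage.setItem("canvasContent", JSON.stringify({
        content,
        savedAt: Date.now()   // timestamp lets the app report "last saved"
    }));
}

function loadCanvas(storage = localStorage) {
    const raw = storage.getItem("canvasContent");
    return raw ? JSON.parse(raw).content : "";
}
```

The `storage` parameter defaults to the browser's localStorage but accepts any object with the same interface, which also makes the logic testable outside a browser.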
AI System Hierarchy
```
NAZAR Triage Council
├── Internal Council (Intuition, Fractal, Emotional, Pattern)
├── DJINN Governance Oversight
└── Entity Authorization & Response Routing
```
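Entity authorization and routing can be illustrated in miniature (the keyword table and `triageRoute` helper are purely hypothetical; the real council logic lives in the HTML file):

```javascript
// Hypothetical routing sketch: pick which entities handle a request
// based on keywords, with NAZAR as the default triage handler.
const entityKeywords = {
    djinn: ["governance", "strategy"],
    narra: ["pattern"],
    whale: ["memory"],
    watchtower: ["metric", "status"]
};

function triageRoute(text) {
    const lower = text.toLowerCase();
    const targets = Object.keys(entityKeywords).filter((name) =>
        entityKeywords[name].some((kw) => lower.includes(kw))
    );
    return targets.length ? targets : ["nazar"];  // NAZAR handles the default case
}
```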
Data Flow
- User input → Canvas change detection
- Content analysis → AI system activation
- Parallel queries → Response aggregation
- Synthesis coordination → Unified insights
- Memory update → Persistent storage
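The first step, canvas change detection, is commonly implemented with a debounce so analysis fires only after the user pauses typing rather than on every keystroke. A sketch (the 1.5-second delay is an assumed default, not the app's actual setting):

```javascript
// Debounced change detection: `onChange` fires only after the user
// pauses, and only when the content actually differs from last time.
function makeChangeDetector(onChange, delayMs = 1500) {
    let timer = null;
    let lastContent = "";
    return function (content) {
        clearTimeout(timer);
        timer = setTimeout(() => {
            if (content !== lastContent) {  // skip no-op edits
                lastContent = content;
                onChange(content);
            }
        }, delayMs);
    };
}
```

Wiring this to the canvas's input event yields the "content analysis → AI system activation" hand-off described above.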
Configuration
Model Assignment
Models are configured in the `optimizedModelMap` object:

```javascript
const optimizedModelMap = {
    djinn: "gemma3:1b",
    nazar: "gemma3:1b",
    // ... other systems
};
```
Ollama Agent Interchangeability
The AI systems are fully interchangeable with any Ollama-compatible model. To change models:
- Pull the desired model:

```shell
ollama pull [model-name]
# Examples:
ollama pull llama2:7b
ollama pull mistral:7b
ollama pull codellama:7b
```
- Update the configuration in `canvas-with-observation-feeds.html`:

```javascript
const optimizedModelMap = {
    djinn: "llama2:7b",      // Governance & strategic analysis
    nazar: "mistral:7b",     // Consciousness & fractal analysis
    narra: "codellama:7b",   // Pattern recognition
    whale: "gemma3:1b",      // Memory & interrogation
    watchtower: "llama2:7b"  // Monitoring & metrics
};
```
- Considerations for model selection:
- Context window: Larger models (7B+) handle longer conversations better
- Speed vs. quality: Smaller models (1B-3B) are faster but less detailed
- Specialization: Code-focused models work well for technical analysis
- Memory usage: Larger models require more RAM
- Compatibility: Ensure the model supports the required API endpoints
- Testing new models:
- Start with one system to test compatibility
- Monitor response quality and speed
- Adjust polling intervals if needed for slower models
- Check console logs for any API compatibility issues
Caching Settings
Adjust cache durations in the `intelligentCache` configuration:

```javascript
CACHE_DURATION: 180000,     // 3 minutes for responses
CONTEXT_DURATION: 300000,   // 5 minutes for context
SYNTHESIS_DURATION: 600000  // 10 minutes for synthesis
```
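These durations drive a Map-based store. A minimal TTL cache in the same spirit (the `TTLCache` class is an illustrative sketch, not the app's actual implementation):

```javascript
// Minimal TTL cache: entries expire after a per-entry duration,
// mirroring how CACHE_DURATION et al. bound response reuse.
class TTLCache {
    constructor() {
        this.store = new Map();
    }
    set(key, value, durationMs) {
        this.store.set(key, { value, expires: Date.now() + durationMs });
    }
    get(key) {
        const entry = this.store.get(key);
        if (!entry) return undefined;
        if (Date.now() > entry.expires) {   // stale: evict and report a miss
            this.store.delete(key);
            return undefined;
        }
        return entry.value;
    }
}
```

Expired entries are evicted lazily on read, which keeps the hot path simple at the cost of holding stale entries until they are next requested.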
Polling Intervals
Activity-based polling intervals:
```javascript
pollingIntervals: {
    high: 120000,   // 2 minutes when active
    medium: 300000, // 5 minutes when moderate
    low: 600000     // 10 minutes when idle
}
```
Troubleshooting
Common Issues
Batch File Won’t Start
- Ensure Python is installed and in your PATH
- Run as Administrator if permission errors occur
- Check that the batch file is in the same directory as the HTML file
Ollama Connection Failed
- Ensure Ollama is running: `ollama serve`
- Check port 11434 is available
- Verify model is pulled: `ollama list`
- Try restarting Ollama if connection issues persist
Slow Performance
- Reduce polling frequency in configuration
- Clear browser cache and localStorage
- Use a local server instead of file:// protocol
- Close other resource-intensive applications
AI Responses Not Appearing
- Check browser console for errors (F12)
- Verify model compatibility with gemma3:1b
- Ensure sufficient system resources (4GB+ RAM)
- Try clearing AI memory and restarting
Interface Layout Issues
- Use CTRL+mousewheel to resize panels
- Refresh the page if layout appears broken
- Clear browser cache if CSS doesn’t load properly
- Ensure modern browser (Chrome 90+, Firefox 88+, Edge 90+)
Memory Issues
- Clear AI memory via the “🧠 Clear Memory” button
- Reset localStorage if problems persist
- Monitor RAM usage - close other applications if needed
- Consider restarting the browser session
Debug Mode
Enable verbose logging by opening browser developer tools (F12) and monitoring the console for detailed operation logs. Look for:
- AI orchestration status messages
- Network request/response details
- Canvas change detection events
- Memory usage warnings
Batch File Troubleshooting
If the automated launch fails:
- Manually start Ollama: `ollama serve`
- Start web server: `python -m http.server 8000`
- Open browser to: http://localhost:8000/canvas-with-observation-feeds.html
- Check console for any remaining errors
Contributing
Development Setup
- Fork the repository
- Make changes to the HTML file
- Test using the provided batch file (`launch-canvas.bat`)
- Ensure cross-browser compatibility
- Verify all enhanced UI features work correctly (square layout, real-time status, shortcuts)
Testing Enhanced Features
- Layout Testing: Verify 2x2 grid layout and CTRL+mousewheel resizing
- Status Updates: Confirm real-time orchestration status in header
- Batch Files: Test both batch file variants for proper startup sequence
- Performance: Monitor memory usage with active AI synthesis
- Cross-browser: Test on Chrome, Firefox, and Edge
Code Style
- Use consistent indentation (4 spaces)
- Comment complex logic sections
- Maintain separation of concerns in JavaScript functions
- Follow HTML5 semantic standards
- Preserve enhanced UI features and animations
Feature Requests
- Open issues for new AI system integrations
- Suggest improvements to the governance hierarchy
- Propose enhancements to the synthesis process
- Request UI/UX improvements for the orchestration interface
License
This project is provided as-is for educational and research purposes. Please ensure compliance with Ollama’s licensing terms and any applicable AI usage policies.
Support
For issues or questions:
- Check the troubleshooting section above
- Review browser console logs for error details
- Ensure Ollama is properly configured and running
Note: This application requires a local Ollama installation and a local web server. All AI processing and data storage are strictly local—no external servers are used. Monitor your system’s performance and adjust polling intervals as needed.
Repository: https://github.com/Yufok1/Canvas-with-observation-feeds-HTML
Matrix Rain Effect
The Matrix rain background effect in this README is powered by:
Matrix-Rain-HTML-Background