
🜂 Canvas with Live AI Observation Feeds

A sovereign web interface for real-time AI observation, analysis, and collaborative synthesis. This application provides a canvas for user observations while integrating live intelligence streams from multiple AI systems, ensuring user sovereignty and comprehensive analysis capabilities.

The launcher opens with an ASCII art startup sequence, and the interface features a square grid layout, real-time orchestration status, dynamic resizing, and a polished user experience.

Features

Core Functionality

Advanced Capabilities

Technical Features

Requirements

Software Dependencies

AI Models

The application uses a single unified Ollama model, gemma3:1b, shared by all AI systems (see Configuration for swapping in other models).

System Requirements

Quick Start Guide

Option 1: Automated Launch

Use the provided batch file for a fully automated launch:

  1. Double-click launch-canvas.bat
  2. Watch the ASCII art startup sequence
  3. The script will automatically:
    • Start Ollama server (if not already running)
    • Launch Python web server
    • Open your browser to the application
    • Display real-time status updates

That’s it! Enjoy the launch experience with full automation.

Option 2: Manual Setup

If you prefer manual control:

Step 1: Download the Repository

git clone https://github.com/Yufok1/Canvas-with-observation-feeds-HTML.git
cd Canvas-with-observation-feeds-HTML

Step 2: Install Ollama

Step 3: Start Ollama Server

ollama serve

Step 4: Start Local Web Server

Important: You must start the server from the project directory that contains the HTML file.

python -m http.server 8000

Why this matters: The web server serves files from the directory where it’s started. If you start the server from a different directory, you’ll get 404 errors when trying to access the HTML file.

Step 5: Access the Application

Open your browser and go to: http://localhost:8000/canvas-with-observation-feeds.html

That’s it! The AI features will now work because both the web app and Ollama are running locally.

Enhanced Interface Features

Once launched, you'll experience the enhanced interface features: square grid layout, real-time orchestration status, dynamic resizing, and keyboard shortcuts.

Why GitHub Pages Won’t Work

The GitHub Pages version (https://yufok1.github.io/...) cannot connect to Ollama because of a fundamental browser security restriction: CORS.

GitHub Pages Demo (UI Only)

You can view the interface at: https://yufok1.github.io/Canvas-with-observation-feeds-HTML/canvas-with-observation-feeds.html

This is for UI demonstration only - the AI features will not work due to CORS restrictions.

CORS Security Restriction

The Error You See

Access to fetch at 'http://localhost:11434/api/generate' from origin 'https://yufok1.github.io' has been blocked by CORS policy

This error is expected and cannot be fixed for the GitHub Pages version.

Some browsers allow disabling CORS for development, but doing so weakens your browser's security and is not recommended.

Recommendation: Use the local version - it’s more secure, faster, and fully functional. The GitHub Pages version is only for UI demonstration.

1. Install Ollama

# Download and install Ollama from https://ollama.ai/
# Follow the platform-specific installation instructions

2. Pull Required Models

# Pull the unified model used by all AI systems
ollama pull gemma3:1b

3. Start Ollama Server

# Start Ollama in a terminal
ollama serve

4. Launch the Application

You must run a local web server to access the application. Directly opening the HTML file (using file://) will cause issues with browser security, localStorage, and AI connectivity.

Start a local server (choose one):

# Using Python (recommended)
python -m http.server 8000

# Using Node.js (alternative)
npx http-server -p 8000

Then access the app via: http://localhost:8000/canvas-with-observation-feeds.html

The interface will automatically connect to Ollama on localhost:11434 for all AI features. All traffic and data remain local to your machine.
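As a rough sketch of what that connection looks like, the snippet below builds a request against Ollama's standard /api/generate endpoint. The helper names (`buildGenerateRequest`, `queryOllama`) are hypothetical illustrations, not the application's actual internals.

```javascript
// Hypothetical helper showing how a browser app can call Ollama's
// /api/generate endpoint running on the default local port.
const OLLAMA_URL = "http://localhost:11434/api/generate";

function buildGenerateRequest(model, prompt) {
  // Ollama expects a JSON body with model, prompt, and stream fields.
  return {
    url: OLLAMA_URL,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

async function queryOllama(model, prompt) {
  const { url, options } = buildGenerateRequest(model, prompt);
  const res = await fetch(url, options); // succeeds only when Ollama runs locally
  const data = await res.json();
  return data.response;                  // the model's generated text
}
```

Because both the page and Ollama are served from localhost, this fetch is same-machine traffic and never leaves your computer.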

Usage

Getting Started

  1. Canvas Workspace: Begin by typing observations in the main canvas area
  2. AI Monitoring: AI systems automatically analyze your content and provide insights
  3. Interactive Chat: Use the chat interface to communicate directly with AI entities
  4. Synthesis: Click “🎭 Synthesize” to trigger collaborative AI analysis

Key Interactions

Advanced Features

Architecture

System Components

AI System Hierarchy

NAZAR Triage Council
├── Internal Council (Intuition, Fractal, Emotional, Pattern)
├── DJINN Governance Oversight
└── Entity Authorization & Response Routing
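The hierarchy above can be pictured as a triage step that authorizes and routes content to an entity. The sketch below is purely illustrative: the keyword rules and routing logic are invented for demonstration and are not the application's actual code.

```javascript
// Illustrative triage routing: inspect content, then route to an
// entity from the hierarchy above. Keywords are hypothetical.
function triageRoute(content) {
  const text = content.toLowerCase();
  if (/govern|polic|strateg/.test(text)) return "djinn"; // governance oversight
  if (/pattern|structure/.test(text)) return "narra";    // pattern recognition
  if (/memory|recall/.test(text)) return "whale";        // memory systems
  return "nazar";                                        // default: triage council
}
```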

Data Flow

  1. User input → Canvas change detection
  2. Content analysis → AI system activation
  3. Parallel queries → Response aggregation
  4. Synthesis coordination → Unified insights
  5. Memory update → Persistent storage
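The steps above can be sketched as a few small functions. All names here are hypothetical stand-ins for the application's internals, shown only to make the pipeline concrete.

```javascript
// 1-2. Change detection: only re-analyze when canvas content changes.
function hasChanged(previous, current) {
  return previous !== current;
}

// 3-4. Aggregate parallel AI responses into one synthesis payload.
function aggregateResponses(responses) {
  // responses: [{ system: "djinn", text: "..." }, ...]
  return responses
    .filter(r => r.text && r.text.trim().length > 0)
    .map(r => `[${r.system}] ${r.text}`)
    .join("\n");
}

// 5. Persist the unified insight (localStorage-backed in the browser).
function persistInsight(store, insight) {
  store.insights = (store.insights || []).concat(insight);
  return store;
}
```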

Configuration

Model Assignment

Models are configured in the optimizedModelMap object:

const optimizedModelMap = {
    djinn: "gemma3:1b",
    nazar: "gemma3:1b",
    // ... other systems
};

Ollama Agent Interchangeability

The AI systems are fully interchangeable with any Ollama-compatible model. To change models:

  1. Pull the desired model:
    ollama pull [model-name]
    # Examples:
    ollama pull llama2:7b
    ollama pull mistral:7b
    ollama pull codellama:7b
    
  2. Update the configuration in canvas-with-observation-feeds.html:
    const optimizedModelMap = {
        djinn: "llama2:7b",        // Governance & strategic analysis
        nazar: "mistral:7b",       // Consciousness & fractal analysis
        narra: "codellama:7b",     // Pattern recognition
        whale: "gemma3:1b",        // Memory & interrogation
        watchtower: "llama2:7b"    // Monitoring & metrics
    };
    
  3. Considerations for model selection:
    • Context window: Larger models (7B+) handle longer conversations better
    • Speed vs. quality: Smaller models (1B-3B) are faster but less detailed
    • Specialization: Code-focused models work well for technical analysis
    • Memory usage: Larger models require more RAM
    • Compatibility: Ensure the model supports the required API endpoints
  4. Testing new models:
    • Start with one system to test compatibility
    • Monitor response quality and speed
    • Adjust polling intervals if needed for slower models
    • Check console logs for any API compatibility issues
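When testing a new configuration, it helps to confirm every configured model has actually been pulled. The installed names would come from Ollama's GET /api/tags endpoint; `findMissingModels` itself is a hypothetical helper, not part of the application.

```javascript
// Sketch: compare the models configured in optimizedModelMap against
// the models Ollama reports as installed (from GET /api/tags).
function findMissingModels(modelMap, installedModels) {
  // installedModels: model names, e.g. ["gemma3:1b", "llama2:7b"]
  const required = new Set(Object.values(modelMap));
  return [...required].filter(name => !installedModels.includes(name));
}
```

Any name returned here needs an `ollama pull [model-name]` before the corresponding AI system will respond.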

Caching Settings

Adjust cache durations in the intelligentCache configuration:

CACHE_DURATION: 180000,    // 3 minutes for responses
CONTEXT_DURATION: 300000,  // 5 minutes for context
SYNTHESIS_DURATION: 600000 // 10 minutes for synthesis
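A duration-based cache like this typically works by timestamping each entry and treating anything older than the configured duration as stale. The helper below is an illustrative sketch, assuming that shape; it mirrors the constant names above but is not the application's actual cache code.

```javascript
// Sketch of duration-based cache expiry, assuming entries are
// stored as { value, timestamp } keyed by request.
const CACHE_DURATION = 180000; // 3 minutes, in milliseconds

function getCached(cache, key, now, duration = CACHE_DURATION) {
  const entry = cache[key];
  if (!entry) return null;                           // never cached
  if (now - entry.timestamp > duration) return null; // stale, force refetch
  return entry.value;                                // fresh cache hit
}
```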

Polling Intervals

Activity-based polling intervals:

pollingIntervals: {
    high: 120000,    // 2 minutes when active
    medium: 300000,  // 5 minutes when moderate
    low: 600000      // 10 minutes when idle
}
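Activity-based polling means the longer the user is idle, the less often the AI systems are queried. The selection logic below is a sketch under that assumption; the idle thresholds are illustrative, only the interval values come from the configuration above.

```javascript
// Sketch: choose a polling interval from recent user activity.
const pollingIntervals = { high: 120000, medium: 300000, low: 600000 };

function pickPollingInterval(lastActivity, now) {
  const idle = now - lastActivity;
  if (idle < 60000) return pollingIntervals.high;    // active within the last minute
  if (idle < 300000) return pollingIntervals.medium; // moderately active
  return pollingIntervals.low;                       // idle
}
```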

Troubleshooting

Common Issues

Batch File Won’t Start

Ollama Connection Failed

Slow Performance

AI Responses Not Appearing

Interface Layout Issues

Memory Issues

Debug Mode

Enable verbose logging by opening the browser developer tools (F12) and monitoring the console for detailed operation logs.

Batch File Troubleshooting

If the automated launch fails:

  1. Manually start Ollama: ollama serve
  2. Start web server: python -m http.server 8000
  3. Open browser to: http://localhost:8000/canvas-with-observation-feeds.html
  4. Check console for any remaining errors

Contributing

Development Setup

  1. Fork the repository
  2. Make changes to the HTML file
  3. Test using the provided batch file (launch-canvas.bat)
  4. Ensure cross-browser compatibility
  5. Verify all enhanced UI features work correctly (square layout, real-time status, shortcuts)

Testing Enhanced Features

Code Style

Feature Requests

License

This project is provided as-is for educational and research purposes. Please ensure compliance with Ollama’s licensing terms and any applicable AI usage policies.

Support

For issues or questions:


Note: This application requires a local Ollama installation and a local web server. All AI processing and data storage are strictly local—no external servers are used. Monitor your system’s performance and adjust polling intervals as needed.

Repository: https://github.com/Yufok1/Canvas-with-observation-feeds-HTML

Matrix Rain Effect

The Matrix rain background effect in this README is powered by:
Matrix-Rain-HTML-Background