The Chrome Extension That Knows What You're Reading
See It In Action
This video shows the voice interface in action. The Quake-style console and more features are coming soon.
Your AI Co-Pilot for the Entire Web
Reading an article? Watching a YouTube video? Browsing GitHub?
Press ` (backtick) and your AI assistant drops down with the page context available. The extension captures what you’re looking at and shares it with your AI - whether you’re typing in the chat or using voice.
The context flows between all interfaces. Voice knows what you’re reading. Chat knows what tab you’re on.
The Problem With Creating Content
You Find Gold, Then Lose It
- Read amazing article → Switch to X → Forget key points
- Watch insightful video → Open new tab → Lost the moment
- See brilliant code → Copy link → Never share insights
- Find perfect quote → Screenshot → Buried in photos
Context Switching Kills Creativity
Every tab switch is a creativity leak. By the time you open X, the insight is gone.
Enter: Context-Aware Chrome Extension
Press ` for Context-Aware Chat
The Quake-style console drops down. The extension has captured:
- What article you’re reading
- Which tweet you’re viewing
- What video is playing
- Which code you’re examining
This context is available to your AI assistant. You can ask questions about the page, request summaries, or create content inspired by what you’re viewing. The AI has the context as background knowledge.
How Context Awareness Works
```mermaid
graph LR
    subgraph "Page Context"
        A[X/Twitter] -->|Detect| E[Tweet Info]
        B[YouTube] -->|Extract| F[Video Data]
        C[Articles] -->|Capture| G[Article Text]
        D[GitHub] -->|Parse| H[Code Context]
    end
    subgraph "Context Available To"
        E --> I[Chat Interface]
        F --> I
        G --> I
        H --> I
        E --> J[Voice Interface]
        F --> J
        G --> J
        H --> J
    end
    subgraph "User Actions"
        I --> K[Ask Questions]
        J --> K
        I --> L[Create Content]
        J --> L
    end
    style I fill:#2d4a3a,stroke:#2e2a3d,stroke-width:2px,color:#fff
    style J fill:#2d4a3a,stroke:#2e2a3d,stroke-width:2px,color:#fff
    style A fill:#1d5a8c,stroke:#2e2a3d,stroke-width:2px,color:#fff
    style B fill:#663333,stroke:#2e2a3d,stroke-width:2px,color:#fff
    style D fill:#29273d,stroke:#2e2a3d,stroke-width:2px,color:#fff
```
What Context Gets Captured
On X/Twitter:
- Current tweet, author, metrics
- Available for: Creating quote tweets, replies, or related content
On YouTube:
- Video title, current timestamp
- Available for: Discussing the video, creating summaries
On Articles:
- Headline, author, selected text
- Available for: Writing commentary, extracting insights
On GitHub:
- Repository, file name, code language
- Available for: Explaining code, creating tutorials
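As a rough sketch, the per-site capture above could be normalized into one common context shape. The field names below (`site`, `title`, `details`) are illustrative assumptions, not the extension's actual schema:

```javascript
// Illustrative sketch: normalize per-site page data into one context shape.
// Field names are assumptions, not the extension's real schema.
function buildContext(site, page) {
  switch (site) {
    case "twitter":
      return { site, title: `@${page.author}`, details: { text: page.tweetText, metrics: page.metrics } };
    case "youtube":
      return { site, title: page.videoTitle, details: { timestamp: page.currentTime } };
    case "github":
      return { site, title: `${page.repo}/${page.file}`, details: { language: page.language } };
    default: // everything else is treated as an article
      return { site: "article", title: page.headline, details: { author: page.author, selection: page.selectedText } };
  }
}

const ctx = buildContext("youtube", { videoTitle: "Intro to WebRTC", currentTime: 342 });
console.log(ctx.title); // "Intro to WebRTC"
```

Whatever interface consumes the context (chat or voice) only needs to understand this one shape.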
The UI That Feels Native
Quake Console Design
- Activation: ` (backtick) or Alt+T
- Position: Slides down from top
- Height: 50% of viewport
- Animation: Smooth 200ms
- Backdrop: Subtle blur
Terminal Aesthetic
- Font: Monospace for that hacker feel
- Background: Semi-transparent dark overlay
- Border: Signature lavender accent
- Shadow: Deep shadow for depth
Context Display Bar
Shows what context is available to the AI:
```
[Twitter: @siimh]       → Tweet context loaded
[YouTube: Video Title]  → Video context available
[Article: Headline]     → Article context captured
```
This context enriches your conversations - the AI understands what you’re referring to without you having to explain.
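That status label could be produced by a small formatter. This helper is hypothetical, including the site-name map:

```javascript
// Hypothetical helper that renders the context display bar's label.
function formatContextLabel(ctx) {
  const names = { twitter: "Twitter", youtube: "YouTube", article: "Article", github: "GitHub" };
  return `[${names[ctx.site] || "Page"}: ${ctx.title}]`;
}

console.log(formatContextLabel({ site: "twitter", title: "@siimh" })); // [Twitter: @siimh]
```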
How Context Sharing Works
Between Chat and Voice
The magic happens when you switch between interfaces:
- Reading an article in your browser
- Open the extension chat - context is there
- Switch to voice mode - same context available
- The AI knows what you’re looking at regardless of input method
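One way to keep chat and voice in sync is a single store that both interfaces subscribe to. A minimal sketch, assuming a simple pub/sub design (the `setContext`/`subscribe` API is our assumption, not the real code):

```javascript
// Minimal shared-context store: chat and voice read the same state.
function createContextStore() {
  let context = null;
  const listeners = new Set();
  return {
    setContext(next) {
      context = next;
      listeners.forEach((fn) => fn(context)); // notify every interface
    },
    getContext: () => context,
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}

const store = createContextStore();
const seen = [];
store.subscribe((ctx) => seen.push(`chat: ${ctx.title}`));  // chat interface
store.subscribe((ctx) => seen.push(`voice: ${ctx.title}`)); // voice interface
store.setContext({ site: "article", title: "Some Headline" });
console.log(seen); // both interfaces saw the same context update
```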
Example Workflows
Article Commentary:
- Read interesting article
- Press ` to open console
- “What’s the main argument here?”
- AI references the article context
- “Write a thread disagreeing with this”
- Creates counterargument based on the article
Video Discussion:
- Watching YouTube video
- Alt+Space for voice mode
- “Summarize what I just watched”
- AI uses video context to respond
- “Turn this into a tweet thread”
- Creates thread from video insights
Code Explanation:
- Browsing GitHub code
- Open extension console
- “Explain this pattern”
- AI sees the code context
- “Write a beginner tutorial about this”
- Creates tutorial from the code
The Technical Architecture
```mermaid
sequenceDiagram
    participant User
    participant Page
    participant Extension
    participant AI
    participant X
    User->>Page: Browse content
    Page->>Extension: Auto-detect context
    User->>Extension: Press backtick
    Extension->>Extension: Extract context
    Extension->>AI: Send context + prompt
    AI->>Extension: Return enhanced post
    Extension->>User: Show in console
    User->>Extension: Click post
    Extension->>X: Publish
```
How Context Detection Works
```mermaid
graph TD
    A[Page Loads] --> B{Which Site?}
    B -->|Twitter| C[Extract Tweet Data]
    B -->|YouTube| D[Extract Video Info]
    B -->|GitHub| E[Extract Code Context]
    B -->|Other| F[Extract Article Data]
    C --> G[Format Context]
    D --> G
    E --> G
    F --> G
    G --> H[Send to AI]
    H --> I[Context-Aware Response]
    style B fill:#4a3a5c,stroke:#2e2a3d,stroke-width:2px,color:#fff
    style G fill:#2d4a3a,stroke:#2e2a3d,stroke-width:2px,color:#fff
```
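The "Which Site?" branch can be a simple hostname dispatch. A sketch, assuming detection keys off `location.hostname`:

```javascript
// Sketch of the "Which Site?" branch: map a hostname to an extractor kind.
function detectSite(hostname) {
  if (/(^|\.)(x|twitter)\.com$/.test(hostname)) return "twitter";
  if (/(^|\.)youtube\.com$/.test(hostname)) return "youtube";
  if (/(^|\.)github\.com$/.test(hostname)) return "github";
  return "article"; // everything else falls back to article extraction
}

console.log(detectSite("www.youtube.com"));  // "youtube"
console.log(detectSite("news.example.com")); // "article"
```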
Communication Flow
- Content Script: Runs on every page, extracts context
- Background Script: Coordinates between page and AI
- Console UI: Displays interface, handles user input
- Message Passing: Chrome APIs connect all parts
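These parts communicate over Chrome's message-passing API (`chrome.runtime.sendMessage` / `chrome.runtime.onMessage`). The message type names below are illustrative; a background-script router might look roughly like this, written as a pure function so it can run outside the browser:

```javascript
// Illustrative background-script router. Message type names are assumptions.
// In the real extension this would be registered via
// chrome.runtime.onMessage.addListener(...).
function handleMessage(message, state) {
  switch (message.type) {
    case "CONTEXT_UPDATED": // sent by the content script
      state.context = message.context;
      return { ok: true };
    case "GET_CONTEXT": // sent by the console UI
      return { ok: true, context: state.context };
    default:
      return { ok: false, error: `unknown message type: ${message.type}` };
  }
}

const state = {};
handleMessage({ type: "CONTEXT_UPDATED", context: { site: "github", title: "repo/file.ts" } }, state);
console.log(handleMessage({ type: "GET_CONTEXT" }, state).context.title); // "repo/file.ts"
```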
Keyboard Shortcuts
Global Shortcuts
- ` (backtick) - Toggle console
- Alt+T - Alternative toggle
- Esc - Close console
- Cmd/Ctrl+Enter - Post immediately
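The shortcut table above maps onto a single keydown handler. A sketch with illustrative action names:

```javascript
// Map a keydown event to a console action. Action names are illustrative.
function shortcutToAction(e) {
  if (e.key === "`" && !e.altKey && !e.ctrlKey && !e.metaKey) return "toggle";
  if (e.key === "t" && e.altKey) return "toggle"; // Alt+T alternative
  if (e.key === "Escape") return "close";
  if (e.key === "Enter" && (e.ctrlKey || e.metaKey)) return "post";
  return null; // let the page handle everything else
}

console.log(shortcutToAction({ key: "`", altKey: false, ctrlKey: false, metaKey: false })); // "toggle"
console.log(shortcutToAction({ key: "Enter", ctrlKey: true })); // "post"
```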
Privacy & Security
What We Track
- Current page URL (for context)
- Selected text (if any)
- Page title and meta
- Never passwords or personal data
Local Processing
- Context extraction happens locally
- Only final content sent to server
- No background tracking
- No data collection
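A sketch of that boundary: keep only an explicit allow-list of fields before anything leaves the browser. The field names are assumptions about the payload shape, not the real implementation:

```javascript
// Only an explicit allow-list of context fields ever leaves the browser;
// everything else (form values, passwords, etc.) is dropped locally.
// The field names are assumptions about the payload shape.
const ALLOWED_FIELDS = ["url", "title", "meta", "selectedText"];

function sanitizeContext(raw) {
  const clean = {};
  for (const key of ALLOWED_FIELDS) {
    if (raw[key] !== undefined) clean[key] = raw[key];
  }
  return clean;
}

const page = { url: "https://example.com", title: "Example", password: "hunter2" };
console.log(sanitizeContext(page)); // only url and title survive; no password
```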
Permissions Explained
- activeTab: Read current tab only
- storage: Save preferences
- scripting: Inject context reader
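Those three permissions correspond to a manifest roughly like the following, shown here as a JS object mirroring `manifest.json`. The file paths and command names are placeholders, not the real ones:

```javascript
// Rough shape of the extension's manifest.json, expressed as a JS object.
// File paths and command names here are placeholders.
const manifest = {
  manifest_version: 3,
  name: "X11 Extension",
  permissions: ["activeTab", "storage", "scripting"],
  background: { service_worker: "background.js" },
  content_scripts: [{ matches: ["<all_urls>"], js: ["content.js"] }],
  commands: {
    "toggle-console": { suggested_key: { default: "Alt+T" }, description: "Toggle the console" },
  },
};

console.log(manifest.permissions.join(", ")); // activeTab, storage, scripting
```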
Installation
The extension is currently in development. We’re polishing the experience before the Chrome Web Store launch. Stay tuned!
Advanced Features
Voice Mode in Browser
Alt+Space activates floating voice widget:
- Draggable interface
- Records your voice
- Includes page context
- Creates contextual posts
Multi-Tab Synthesis
Open multiple related articles, then ask: “Synthesize insights from all tabs.” The AI reads the context from every tab and finds the connections.
Visual Context
On image-heavy pages, ask: “Describe and create a post about this design.” The AI analyzes visuals too.
Code Understanding
On GitHub or code blocks, ask: “Explain this pattern for beginners.” Technical content made accessible.
What’s Next
We’re shipping fast. The voice interface is live, and we’re working on:
- The full Quake-style console interface
- Firefox and Safari versions
- Enhanced context detection
- More seamless integration with the main app
For Developers
The extension is partially open source:
- Context extraction logic
- Keyboard handling
- Component architecture
- Chrome extension patterns
Check our GitHub: @x11social
The Competition
Pocket/Instapaper
Save for later (never). We help you share now.
Grammarly
Fixes writing. We eliminate writing.
Buffer Extension
Schedule from anywhere. We create from anywhere.
X11 Extension
Read anywhere. Create anywhere. Post anywhere.
The Bottom Line
The web is full of inspiration. But it’s trapped in tabs.
Our extension sets it free. Every page becomes postable.
Stop switching tabs. Start shipping thoughts.
Questions? @x11_social