In-context feedback for AI content generation
Domain experts train AI reviewers. A Google Docs-style annotation sidebar that embeds in your app. Capture feedback where content lives, export to your eval pipeline.
LLM observability tools are built for engineers
Helicone, Langfuse, Braintrust - great tools, but domain experts won't use them. Your subject matter experts shouldn't need to learn a new platform just to give feedback on AI output.
Embed feedback UX directly in your app
Experts annotate content in context - right where they already work. Data exports to your eval pipeline automatically. Better feedback, better AI.
Text Selection & Annotation
Highlight any text in AI-generated content and add comments. Just like Google Docs.
Threaded Discussions
Reply to annotations, collaborate with your team. Keep context where it belongs.
Real-time Updates
Live sync via Supabase Realtime. Multiple reviewers, instant updates.
Severity & Categories
Classify feedback by type (accuracy, tone, formatting) and severity for analysis. A rough data shape is sketched after this feature list.
Braintrust Export
Format annotations for your eval pipeline. Domain feedback becomes training data.
AI Participants
AI can join annotation threads. Build human-AI review workflows.
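To make these features concrete, here's a rough sketch of the annotation shape they imply. Every field name here is an illustrative assumption, not the package's actual API:

// Illustrative shape only; field names are assumptions.
type Severity = 'low' | 'medium' | 'high'
type Category = 'accuracy' | 'tone' | 'formatting'

interface Annotation {
  id: string
  documentId: string
  range: { start: number; end: number } // offsets of the selected text
  comment: string
  category: Category
  severity: Severity
  parentId?: string // set on threaded replies
  authorId: string  // a human reviewer or an AI participant
  createdAt: string
}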
Add Database Tables
Export the Drizzle schema and run your migration.
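A hedged sketch of this step: assuming the package exposes its Drizzle schema from a subpath such as '@uphiller/core/schema' (illustrative, not confirmed), re-export it where drizzle-kit can see it and run your migration:

// schema.ts: re-export the package's tables next to your own.
// The subpath below is an assumption for illustration.
export * from '@uphiller/core/schema'

// Then generate and apply the migration:
//   npx drizzle-kit generate
//   npx drizzle-kit migrate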
Create Store
Initialize the Drizzle adapter with optional Supabase realtime.
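A minimal sketch of the store step, assuming a factory named createDrizzleStore (a hypothetical name) that takes a Drizzle database and an optional Supabase client for realtime:

// store.ts: a sketch under stated assumptions; the factory and its
// options are illustrative, not confirmed API.
import { Pool } from 'pg'
import { drizzle } from 'drizzle-orm/node-postgres'
import { createClient } from '@supabase/supabase-js'
import { createDrizzleStore } from '@uphiller/core' // hypothetical import

const pool = new Pool({ connectionString: process.env.DATABASE_URL })
const db = drizzle(pool)

// Optional: pass a Supabase client to enable live sync between reviewers.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!)

export const store = createDrizzleStore({ db, realtime: supabase })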
Add Provider & Sidebar
Wrap your content and add the annotation sidebar component.
Export to Braintrust
Use the export utilities to format data for your eval pipeline.
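For the export step, a hedged sketch of pushing annotations into a Braintrust dataset. The Braintrust calls come from its SDK; store.listAnnotations and the field names stand in for the package's export utilities, which aren't shown here:

// export.ts: illustrative only
import { initDataset } from 'braintrust'
import { store } from './store'

// Hypothetical accessor; swap in the package's actual export utility.
const annotations = await store.listAnnotations({ documentId: 'doc-123' })
const dataset = initDataset('content-review', { dataset: 'expert-feedback' })

for (const a of annotations) {
  dataset.insert({
    input: a.highlightedText,   // the text the expert selected
    expected: a.comment,        // the expert's correction
    metadata: { category: a.category, severity: a.severity },
  })
}

And here is the provider and sidebar wiring from the "Add Provider & Sidebar" step, in full: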
import { UphillerProvider, AnnotationSidebar, HighlightableContent } from '@uphiller/core/react'
// `store` is the adapter created in the "Create Store" step;
// HighlightableContent is assumed to ship alongside the provider.
import { store } from './store'

export function ContentReview({ content, user }) {
  return (
    <UphillerProvider store={store} user={user}>
      <div className="flex">
        <main className="flex-1">
          <HighlightableContent>{content}</HighlightableContent>
        </main>
        <AnnotationSidebar />
      </div>
    </UphillerProvider>
  )
}
Build self-improving AI systems
The sidebar is just the beginning. Every correction your domain experts make in annotation threads trains an AI reviewer. Over time, the AI catches issues automatically - experts only review edge cases.
The result? A flywheel where your AI gets better with every interaction. Works with Claude Code and modern LLM tooling.
1. Experts annotate AI output in the sidebar
2. Corrections train AI reviewers via Braintrust
3. AI catches known issues automatically
4. Experts focus on edge cases only
AI improves itself with every cycle
Build your self-improving flywheel
The open source sidebar is just the start. We help teams build complete AI improvement systems - from integration to training AI reviewers that learn from your domain experts.
Deep Integration
Custom sidebar integration with your app and data
Eval Pipeline
Connect to Braintrust or your eval system
AI Reviewers
Train AI to review using expert corrections
Start with the sidebar, build the flywheel
Uphiller is open source and free to use. Capture feedback today, build self-improving AI tomorrow.