Open Source

In-context feedback for AI content generation

Domain experts train AI reviewers. A Google Docs-style annotation sidebar that embeds in your app. Capture feedback where content lives, export to your eval pipeline.

$ npm install @uphiller/core

The Problem

LLM observability tools are built for engineers

Helicone, Langfuse, Braintrust - great tools, but domain experts won't use them. Your subject matter experts shouldn't need to learn a new platform just to give feedback on AI output.

The Solution

Embed feedback UX directly in your app

Experts annotate content in context - right where they already work. Data exports to your eval pipeline automatically. Better feedback, better AI.

Your App + Uphiller → Domain Expert Feedback → Braintrust / Evals → Better AI

Text Selection & Annotation

Highlight any text in AI-generated content and add comments. Just like Google Docs.

Threaded Discussions

Reply to annotations, collaborate with your team. Keep context where it belongs.

Real-time Updates

Live sync via Supabase Realtime. Multiple reviewers, instant updates.

Severity & Categories

Classify feedback by type (accuracy, tone, formatting) and severity for analysis.
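As a sketch of what that classification might look like in code (the field names `category` and `severity` here are illustrative, not the library's actual types):

```typescript
// Illustrative classification metadata for an annotation.
// These names are assumptions, not @uphiller/core's real API.
type AnnotationCategory = 'accuracy' | 'tone' | 'formatting'
type AnnotationSeverity = 'low' | 'medium' | 'high'

interface AnnotationLabel {
  category: AnnotationCategory
  severity: AnnotationSeverity
}

// Tally feedback by category for downstream analysis.
function tallyByCategory(labels: AnnotationLabel[]): Record<AnnotationCategory, number> {
  const counts: Record<AnnotationCategory, number> = { accuracy: 0, tone: 0, formatting: 0 }
  for (const label of labels) counts[label.category] += 1
  return counts
}
```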

Braintrust Export

Format annotations for your eval pipeline. Domain feedback becomes training data.

AI Participants

AI can join annotation threads. Build human-AI review workflows.

1. Add Database Tables

Export the Drizzle schema and run your migration.
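Assuming the package ships its tables as a Drizzle schema export (the import path and table names below are illustrative), this step could be a re-export plus a drizzle-kit migration:

```typescript
// db/schema.ts — hypothetical: re-export Uphiller's tables alongside
// your own so drizzle-kit includes them when generating migrations.
export { annotations, annotationReplies } from '@uphiller/core/drizzle'
export * from './app-schema'

// Then generate and apply the migration:
//   npx drizzle-kit generate
//   npx drizzle-kit migrate
```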

2. Create Store

Initialize the Drizzle adapter with optional Supabase realtime.
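A minimal sketch of store creation — `createDrizzleStore` and its `realtime` option are assumed names, and the actual exports may differ; the Supabase client setup is standard supabase-js:

```typescript
// store.ts — hypothetical adapter API; hedge: names are assumptions.
import { createClient } from '@supabase/supabase-js'
import { drizzle } from 'drizzle-orm/postgres-js'
import { createDrizzleStore } from '@uphiller/core/drizzle' // assumed export

const db = drizzle(process.env.DATABASE_URL!)
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!)

// Realtime is optional — omit it if you don't need live multi-reviewer sync.
export const store = createDrizzleStore({ db, realtime: supabase })
```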

3. Add Provider & Sidebar

Wrap your content and add the annotation sidebar component.

4. Export to Braintrust

Use the export utilities to format data for your eval pipeline.

content-review.tsx
import { UphillerProvider, AnnotationSidebar, HighlightableContent } from '@uphiller/core/react'
import { store } from './store' // the Drizzle-backed store created in step 2

// `user` identifies the reviewer so annotations are attributed correctly.
export function ContentReview({ content, user }: { content: string; user: { id: string; name: string } }) {
  return (
    <UphillerProvider store={store} user={user}>
      <div className="flex">
        <main className="flex-1">
          <HighlightableContent>{content}</HighlightableContent>
        </main>
        <AnnotationSidebar />
      </div>
    </UphillerProvider>
  )
}
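Step 4 boils down to mapping annotations onto eval records. A minimal sketch, assuming annotation fields like `quotedText` and `comment` (the actual export utilities in @uphiller/core may shape this differently):

```typescript
// Shape assumptions for illustration — not the library's actual types.
interface Annotation {
  quotedText: string // the highlighted span of AI output
  comment: string    // the expert's feedback on that span
  category: string
  severity: string
}

// input / expected / metadata is the record shape Braintrust datasets accept.
interface BraintrustRow {
  input: string
  expected: string
  metadata: { category: string; severity: string }
}

function toBraintrustRows(annotations: Annotation[]): BraintrustRow[] {
  return annotations.map((a) => ({
    input: a.quotedText,
    expected: a.comment,
    metadata: { category: a.category, severity: a.severity },
  }))
}
```

Each row pairs the highlighted text with the expert's correction, so the same records can seed both eval scoring and fine-tuning datasets.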

Build self-improving AI systems

The sidebar is just the beginning. Every correction your domain experts make in annotation threads trains an AI reviewer. Over time, the AI catches issues automatically - experts only review edge cases.

The result? A flywheel where your AI gets better with every interaction. Works with Claude Code and modern LLM tooling.

  1. Experts annotate AI output in the sidebar
  2. Corrections train AI reviewers via Braintrust
  3. AI catches known issues automatically
  4. Experts focus on edge cases only

Self-Improving AI: Expert Feedback → Train Evals → AI Reviews → Catch Issues → repeat. AI improves itself with every cycle.

Pro Services

Build your self-improving flywheel

The open source sidebar is just the start. We help teams build complete AI improvement systems - from integration to training AI reviewers that learn from your domain experts.

1. Deep Integration

Custom sidebar integration with your app and data

2. Eval Pipeline

Connect to Braintrust or your eval system

3. AI Reviewers

Train AI to review using expert corrections

Get in touch

Start with the sidebar, build the flywheel

Uphiller is open source and free to use. Capture feedback today, build self-improving AI tomorrow.