
Turn Your Notion Docs into an AI Support Widget

LaunchChat Team · 6 min read

Before You Start

You'll need three things:

  • A Notion workspace with your support or help documentation
  • A LaunchChat account — the free tier includes everything you need to get started
  • Your website where you want the chat widget to appear

If your docs live somewhere other than Notion, LaunchChat also supports file uploads and website crawling — but this guide focuses on the Notion integration since it's the most popular path.

Step 1: Connect Notion via OAuth

After signing up, head to the Setup page in your dashboard and click "Connect Notion." This initiates a standard OAuth flow — Notion will ask you to authorize LaunchChat to read your selected pages.

A few important details:

  • We only request read access. LaunchChat never modifies, deletes, or creates content in your Notion workspace.
  • You choose exactly which pages to share. You can grant access to individual pages, entire databases, or your full workspace.
  • The connection can be revoked at any time from your Notion settings.

This is the same OAuth flow used by tools like Slack, Zapier, and other Notion integrations. Your Notion password never touches LaunchChat; access is granted through the official API tokens Notion issues during authorization.

Step 2: Select Your Knowledge Base Pages

Once connected, you'll see a list of your Notion pages. Select the ones that contain your support documentation — FAQ pages, help articles, getting started guides, API docs, troubleshooting pages, etc.

Tips for selecting pages:

  • Include pages that answer common customer questions
  • Skip internal-only pages (meeting notes, roadmaps, etc.)
  • You can always add or remove pages later — changes sync automatically

LaunchChat will automatically parse all content types: headings, paragraphs, bulleted and numbered lists, tables, code blocks, callouts, and toggle blocks. Images are noted as context but not embedded in responses.

Step 3: Automatic Ingestion Pipeline

LaunchChat RAG Pipeline — ingestion and query flow

Once you confirm your page selection, LaunchChat's RAG pipeline kicks in automatically:

Parsing

Each Notion page is fetched via the Notion API and converted from Notion's block format to clean, structured text. Heading hierarchy is preserved — this is critical for retrieval quality because it gives the AI context about which section a piece of information belongs to.
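
To make the parsing step concrete, here is a minimal sketch of converting Notion-style blocks to text while keeping heading levels. The block shapes are simplified stand-ins for the real Notion API payloads, and the function names are illustrative, not LaunchChat's actual code:

```python
# Sketch: flatten simplified Notion blocks to text, preserving heading hierarchy.
# Block dicts below are reduced versions of real Notion API block objects.

def block_text(block):
    """Concatenate the plain text of a block's rich_text array."""
    payload = block[block["type"]]
    return "".join(rt["plain_text"] for rt in payload.get("rich_text", []))

def blocks_to_text(blocks):
    lines = []
    for block in blocks:
        kind = block["type"]
        text = block_text(block)
        if kind == "heading_1":
            lines.append(f"# {text}")
        elif kind == "heading_2":
            lines.append(f"## {text}")
        elif kind == "heading_3":
            lines.append(f"### {text}")
        elif kind == "bulleted_list_item":
            lines.append(f"- {text}")
        else:  # paragraphs, callouts, etc. fall through as plain text
            lines.append(text)
    return "\n".join(lines)

blocks = [
    {"type": "heading_2", "heading_2": {"rich_text": [{"plain_text": "Reset your password"}]}},
    {"type": "paragraph", "paragraph": {"rich_text": [{"plain_text": "Click Forgot password on the login page."}]}},
]
print(blocks_to_text(blocks))
```

Keeping the `#`/`##` markers in the flattened text is what lets a later retrieval step know which section a sentence came from.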

Chunking

The parsed text is split into segments of approximately 400 tokens each, with overlap between chunks to prevent context loss at boundaries. For example, if a paragraph spans two chunks, both chunks will contain the overlapping portion so the AI has full context.
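
The sliding-window idea can be sketched in a few lines. Real pipelines count model tokens; here whitespace-separated words stand in for tokens, and the sizes are scaled down from the ~400-token chunks described above:

```python
# Sketch: fixed-size chunking with overlap so no boundary loses context.
# Words stand in for tokens; real chunkers use a tokenizer's token counts.

def chunk(words, size=8, overlap=2):
    """Split a word list into windows of `size` that share `overlap` words."""
    step = size - overlap  # assumes size > overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

words = [f"w{i}" for i in range(20)]
for c in chunk(words):
    print(c)
```

Each chunk repeats the last two "tokens" of the previous one, so a sentence straddling a boundary is fully present in at least one chunk.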

Embedding

Each chunk is converted to a 1536-dimensional vector using OpenAI's text-embedding-3-small model. These vectors capture the semantic meaning of the text — so "How do I change my password?" and "Reset login credentials" are recognized as similar queries even though they use different words.

Storage

Vectors are stored in PostgreSQL with the pgvector extension, enabling fast cosine similarity search across your entire knowledge base. For a typical 50-page Notion workspace, the entire ingestion process completes in under 2 minutes.
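
For readers who know Postgres, a pgvector setup for this looks roughly like the following. The table and column names are hypothetical, not LaunchChat's actual schema:

```sql
-- Hypothetical schema; names are illustrative only.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE chunks (
  id        bigserial PRIMARY KEY,
  page_title text,
  content    text,
  embedding  vector(1536)
);

-- <=> is pgvector's cosine-distance operator; smaller means more similar.
SELECT content, 1 - (embedding <=> $1) AS similarity
FROM chunks
ORDER BY embedding <=> $1
LIMIT 5;
```

The `ORDER BY embedding <=> $1 LIMIT 5` pattern is what makes the top-k retrieval a single indexed query rather than a scan in application code.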

Step 4: Configure Your Widget

Before embedding, customize the widget to match your brand and support style:

Appearance settings:

  • Primary color — match your brand palette
  • Widget position — bottom-right or bottom-left
  • Custom bot avatar — upload your logo or mascot
  • Welcome message — set the initial greeting users see

Behavior settings:

  • Confidence threshold — the minimum similarity score required before the AI answers. Set this higher (e.g., 0.75) for strict accuracy, or lower (e.g., 0.5) for broader coverage.
  • Refusal message — what to show when confidence is too low. Example: "I couldn't find a clear answer in our docs. Would you like to contact support?"
  • [Auto-escalation](/features) — when enabled, low-confidence responses automatically show a contact form so users can reach your team.
  • Max output tokens — control response length to keep answers concise.
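
The confidence threshold, refusal message, and auto-escalation settings combine into one simple gate. Here is a hedged sketch of that logic; the field names are illustrative, not LaunchChat's actual config schema:

```python
# Sketch: gate the AI answer on retrieval confidence.
# Below the threshold, show the refusal message and (optionally) a contact form.

REFUSAL = "I couldn't find a clear answer in our docs. Would you like to contact support?"

def respond(best_similarity, answer, threshold=0.75, auto_escalate=True):
    """Return the AI answer only when retrieval confidence clears the threshold."""
    if best_similarity >= threshold:
        return {"text": answer, "escalate": False}
    return {"text": REFUSAL, "escalate": auto_escalate}

print(respond(0.82, "Go to Settings → Billing → Cancel."))
print(respond(0.41, "(low-confidence draft, suppressed)"))
```

Raising `threshold` trades coverage for accuracy: more questions fall through to the refusal path, but the answers that do go out are better grounded.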

AI model settings:

  • Choose your preferred model (Claude 3 Haiku is the default — fast and cost-effective)
  • Or bring your own API key for OpenAI or Anthropic — you pay them directly with zero markup from LaunchChat

Step 5: Embed the Widget

Add two lines to your website's HTML, just before the closing </body> tag:

```html
<script>
  window.NotionSupportConfig = { widgetId: 'your-widget-id' };
</script>
<script src="https://launchchat.dev/widget.js"></script>
```

That's it. The widget loads asynchronously (won't slow down your page), renders a chat bubble in the corner, and is ready to answer questions.

Framework-specific notes:

  • React/Next.js: Add the script to your layout component or use a <Script> tag with strategy="afterInteractive"
  • WordPress: Paste the snippet in your theme's footer or use a "Header/Footer Scripts" plugin
  • Webflow: Add to the custom code section in Project Settings → Custom Code → Footer Code
  • Static HTML: Paste directly before </body>

What Happens When Users Ask Questions

Here's the flow when a visitor types a question in the widget:

  1. The question is sent to LaunchChat's API
  2. The question is embedded into a vector using the same model used during ingestion
  3. The vector is compared against all chunks in your knowledge base using cosine similarity
  4. The top matching chunks (typically 3-5) are retrieved
  5. These chunks are passed to the LLM along with the question and system instructions
  6. The LLM generates an answer with [Source N] citations referencing the retrieved chunks
  7. If confidence is below your threshold, the escalation form is shown instead

The entire process takes 1-3 seconds, depending on the model and response length.
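
Steps 2 through 4 above can be sketched with toy vectors. Real embeddings come from text-embedding-3-small and have 1536 dimensions; the 3-dimensional vectors here are stand-ins, and the chunk texts are invented for illustration:

```python
import math

# Sketch: score every chunk by cosine similarity to the query, keep the top k.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

knowledge_base = [
    ("Reset your password from the login page.", [0.9, 0.1, 0.0]),
    ("Invoices are emailed monthly.",            [0.1, 0.9, 0.2]),
    ("Contact support via the widget.",          [0.2, 0.3, 0.9]),
]

def retrieve(query_vector, k=2):
    scored = [(cosine(query_vector, vec), text) for text, vec in knowledge_base]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

# A password-shaped query vector should surface the password chunk first.
print(retrieve([0.95, 0.05, 0.1]))
```

In production the top chunks are then stuffed into the LLM prompt (step 5), which is why good chunking upstream directly determines answer quality here.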

After Launch: The Knowledge Gap Feedback Loop

Knowledge Gap Feedback Loop — detect, draft, review, improve

The most valuable feature isn't the chatbot itself — it's what happens when it can't answer. When a user asks something not covered in your docs, LaunchChat:

  1. Detects the gap — the question didn't match any chunks above the confidence threshold
  2. Logs it — the question appears in your Knowledge Gaps dashboard with frequency data
  3. Drafts a suggestion — AI generates a draft article based on the question pattern
  4. Notifies you — you review, edit, and publish the new content to Notion

This creates a virtuous cycle: every unanswered question improves your knowledge base, which reduces future support load. Teams using this feedback loop typically see their deflection rate improve by 10-15% per month.

Tips for Better Answers

  1. Structure your Notion docs with clear headings. The chunker preserves heading hierarchy, so well-structured docs produce better retrieval. Use H1 for page titles, H2 for sections, H3 for subsections.
  2. Keep pages focused. One topic per page beats a mega-page covering everything. "How to reset your password" should be its own page, not buried in a "General FAQ" page.
  3. Write for questions, not just statements. Include the actual questions users ask as headings. "How do I cancel my subscription?" is better than "Subscription Management" for retrieval.
  4. Update regularly. Changes sync automatically — no manual re-indexing needed. When you update a Notion page, the new content is re-chunked and re-embedded within minutes.
  5. Monitor your [analytics dashboard](/features). Track which questions get answered well, which trigger escalations, and which reveal knowledge gaps. Use this data to prioritize documentation improvements.