Engineering
June 17, 2025

Integrating Ragie Connect in Your AI App: A Step-by-Step Guide for Fast RAG Deployment

Bob Remeika, Co-founder & CEO

Introduction

AI assistants are getting better at answering questions, but they still struggle when it comes to context that lives outside the model, like internal documentation, support content, or company knowledge stored across tools like Notion, Google Drive, Slack, Confluence, Jira, Salesforce, or Dropbox. That’s where retrieval-augmented generation (RAG) comes in. It allows AI systems to ground their answers in external data sources, making responses more accurate and actually useful in real-world scenarios.

In this blog post, we’ll walk through an example using Notion as the data source. We’ll show how to connect a Notion knowledge base to an existing AI tool using Ragie Connect, and how documents can be synced and queried in just a few steps.

What is Ragie Connect and why use it?

Ragie Connect is a fully managed connector system that lets you easily integrate external data sources into your AI applications. It takes care of authentication, data ingestion, and syncing, so your app can pull content from tools like Notion, Google Drive, Slack, Confluence, Jira, Salesforce, Dropbox, Gmail, and others without the usual friction.

What makes Ragie Connect especially useful is that it’s designed to be embedded directly into your app. This means you can give your users the ability to connect their own data securely and with minimal setup, without ever leaving your interface. Ragie handles the heavy lifting in the background: parsing, chunking, indexing, and preparing the data for retrieval through its RAG engine. You stay in control throughout: you can set limits on how much content is ingested, customize the look and feel of the connector to match your product, and receive real-time updates through webhooks.
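To make the webhook idea concrete, here is a hypothetical sketch of reacting to a sync notification. The payload shape and event name are illustrative only, not Ragie's actual schema; consult Ragie's webhook documentation for the real fields.

```typescript
// Hypothetical webhook payload — field names are illustrative, not Ragie's schema.
interface SyncEvent {
  type: string; // e.g. "document_status_updated" (assumed name)
  documentId: string;
  status: "processing" | "ready" | "failed";
}

// React to a sync event: mark the document ready in your own app state
// once indexing finishes, ignore intermediate states.
function handleSyncEvent(
  event: SyncEvent,
  markReady: (id: string) => void
): string {
  if (event.status === "ready") {
    markReady(event.documentId);
    return `document ${event.documentId} is ready`;
  }
  return `ignoring ${event.type} (status: ${event.status})`;
}
```

Your actual handler would live behind an HTTP endpoint and verify the webhook signature before trusting the payload.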

Instead of building a full ingestion pipeline or managing a bunch of third-party API integrations, this gives you everything you need out of the box. It’s ideal if you’re building AI tools that need access to live business data, whether it’s for internal assistants, support bots, or smart knowledge retrieval in your product.

It’s especially powerful for teams building internal tools, customer-facing chatbots, or any application where grounding answers in private knowledge is essential.

Overview of the demo app

To demonstrate how Ragie Connect works in a real application, we’ll be using Base Chat, an open-source demo platform we built. It comes with a built-in integration for Ragie Connect and supports multi-tenant AI chat experiences out of the box.

Before we jump into the app, we first need to configure a few things on the Ragie dashboard to get our environment ready.

Part 1: Setting up in the Ragie dashboard

Step 1: Generate an API key

To connect your application to Ragie, you’ll need an API key. Head over to the API Keys section in the Ragie dashboard.

  1. Click “Generate new key”
  2. Give your key a name (e.g. yourproject-name-key)
  3. Click “Create” to generate the key

Copy this key; you’ll use it in your app’s configuration to connect to Ragie.
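As a concrete example, the key typically ends up in an environment variable your app reads at startup (the variable name below is illustrative; use whatever your configuration expects):

```
# .env
RAGIE_API_KEY=<your generated key>
```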

Step 2: Enable Notion in embedded connectors

Next, go to the Connectors section and click on “Manage embedded connectors”. From the list of available services, enable Notion (or any other third-party service you want to use in your app). This allows your app to support document syncing through that integration.

Once that’s done, you’re ready to integrate Ragie Connect into your AI app. You can use our SDK for rapid development, and our documentation explains how the connector works and which documents are synced. Because Ragie is already integrated into our Base Chat project, once you’ve set it up and run it (on your own server or locally), you can start connecting a knowledge base through the interface we’ve built. Remember, this is your user connecting their own data source: the AI is grounded in their documents as its knowledge base, which makes it much harder for the model to hallucinate. For steps on setting up Base Chat, check the README in the repo.

Part 2: Connecting the knowledge base via Ragie Connect inside Base Chat

Now that we’ve generated our API key and enabled Notion in the Ragie dashboard, the next step is to connect the data source directly from within the app.

We’re going to use Ragie Connect inside Base Chat to link a Notion workspace. This is where the end-user (your customer, teammate, or client) will authenticate with their Notion account and allow your app to sync content securely. For this to work, make sure you’ve added the API key from earlier to your .env file. This is what gives your app access to Ragie’s backend services.

Connecting Notion in Base Chat

  1. Navigate to the Data section inside Base Chat.
  2. Click “Add Connection”. You’ll see options to upload a file or connect an app. In this case, we’re using Notion as the sync source.
  3. Click Notion.

A popup will appear prompting you to connect your Notion account. If you’re not already logged in, you’ll be asked to do so.

After logging in, you’ll see a screen saying “Ragie is requesting access to [your_name]’s Notion”.

Select the Notion workspace or specific pages you want to sync, then click “Allow access”.

Part 3: Using the Base Chat interface to interact with the knowledge base

Once your data source is connected and the documents have finished syncing (you can confirm this in the Documents tab of your Ragie dashboard, they’ll be marked as “Ready”), your users can begin interacting with their content through the chat interface.

To do this, simply navigate to the chat section inside Base Chat. Here, users can ask questions based on the synced Notion documents, whether it’s company policies, product documentation, internal notes, or anything else stored in the workspace.

Behind the scenes, Ragie retrieves the most relevant chunks from the indexed content and returns a response grounded in that data. The result feels like a typical AI assistant, but it’s answering based on your actual documents.

This step requires no extra setup. As long as the connector is active and the documents are synced, the chat is ready to use.

To test how well the model is using the connected knowledge base, let’s ask a more specific question:

“How much did we spend on rent and on dinner with Rachel?”

This type of query forces the model to reference actual details from the synced documents, not just general knowledge.

Here is the original document:

As shown above, the model correctly pulls information from the Notion document and even points to the source of the answer. This confirms that the retrieval pipeline is working as expected and that responses are grounded in the synced content.

Behind the scenes: How Ragie handles retrieval

Under the hood, Base Chat integrates with Ragie using their SDK and API to handle document ingestion, chunking, indexing, and semantic retrieval. This section breaks down how that process works from both the developer’s perspective and what the user sees in the UI.

At a glance, here’s how everything fits together:

  • Base Chat (Next.js/TypeScript; this can be any technology you choose)
    Handles user login, chat UI, routing, and connection management per tenant.
  • Ragie client SDK
    Used on the server to handle document upload, retrieval, polling, and more.
  • Ragie API (see Ragie’s API documentation)
    Performs document chunking, semantic search, reranking, recency bias, and more.
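The Base Chat snippets in this section rely on a helper, getRagieClientAndPartition, that scopes every Ragie call to a single tenant. Here is a hypothetical sketch of the underlying idea; the partition naming scheme is assumed for illustration, and the real helper also constructs an SDK client:

```typescript
// Each tenant maps to its own Ragie partition so documents and retrievals
// stay isolated per tenant. The naming convention below is illustrative.
interface RagieContext {
  apiKey: string;
  partition: string;
}

function getRagieContext(apiKey: string, tenantId: string): RagieContext {
  return {
    apiKey, // in Base Chat this comes from the environment
    partition: `tenant-${tenantId}`,
  };
}
```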

1. Document upload, chunking & indexing

When a user connects a data source (like Notion), Ragie ingests the selected pages or files. Documents are parsed, chunked into passages, and embedded for semantic search.
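Conceptually, chunking splits a long document into overlapping windows so each piece fits an embedding model’s context. Ragie’s real chunker is more sophisticated (it respects document structure), but a minimal word-window sketch captures the idea:

```typescript
// Split text into overlapping word windows. `size` and `overlap` are in
// words; overlap keeps context that spans a chunk boundary retrievable.
function chunkWords(text: string, size = 200, overlap = 50): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += size - overlap) {
    chunks.push(words.slice(start, start + size).join(" "));
    if (start + size >= words.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and indexed so retrieval can score it against a query.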

The app polls for document status via this route:

import { NextRequest, NextResponse } from "next/server";

export async function GET(request: NextRequest) {
  // `tenant` is resolved from the authenticated session elsewhere in Base Chat
  const { client, partition } = await getRagieClientAndPartition(tenant.id);

  // List up to 50 documents in this tenant's partition, resuming from `cursor`
  const res = await client.documents.list({
    partition,
    pageSize: 50,
    cursor,
  });

  return NextResponse.json({
    documents: res.result.documents,
    nextCursor: res.result.pagination.nextCursor,
    totalCount: res.result.pagination.totalCount,
  });
}

path: app/api/tenants/current/documents/route.ts

On the frontend, polling updates the UI with file status:

const checkFilesStatus = async () => {
  const response = await fetch("/api/tenants/current/documents", {
    headers: { tenant: tenant.slug },
  });
  const data = await response.json();
  // Update file status in UI...
};

path: app/(main)/o/[slug]/data/files-table.tsx
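The polling pattern above can be factored into a small helper. This sketch takes an injected status function so the retry logic stays independent of any particular route (the status values are assumed names, not Ragie’s exact strings):

```typescript
type DocStatus = "processing" | "ready" | "failed";

// Poll a status function until the document leaves "processing",
// giving up after `maxAttempts` checks.
async function pollUntilReady(
  fetchStatus: () => Promise<DocStatus>,
  { intervalMs = 1000, maxAttempts = 30 } = {}
): Promise<DocStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status !== "processing") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return "processing"; // still not ready after maxAttempts
}
```

In Base Chat the status function would hit the documents route shown above and read each file’s state from the JSON response.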

To fetch a specific document and its summary, a dedicated route is used; something similar could be implemented in your AI app:

export async function GET(request: NextRequest, { params }) {
  // `tenant` comes from the request context, as in the earlier route
  const { client, partition } = await getRagieClientAndPartition(tenant.id);

  // Fetch the document record and its generated summary in two calls
  const document = await client.documents.get({
    partition,
    documentId: params.id,
  });

  const summary = await client.documents.getSummary({
    partition,
    documentId: params.id,
  });

  return Response.json({ ...document, summary: summary.summary });
}

2. Retrieval: Handling user queries

When a user sends a question in the chat interface, the app sends a request to Ragie’s retrieval API:

export async function getRetrievalSystemPrompt(
  tenant, query, isBreadth, rerankEnabled, prioritizeRecent
) {
  const { client, partition } = await getRagieClientAndPartition(tenant.id);
  const topK = isBreadth || rerankEnabled ? 100 : 6;

  let response = await client.retrievals.retrieve({
    partition,
    query,
    topK,
    rerank: rerankEnabled,
    recencyBias: prioritizeRecent,
    ...(isBreadth ? { maxChunksPerDocument: 4 } : {}),
  });

  if (response.scoredChunks.length === 0 && rerankEnabled) {
    // fallback: retry without reranking
    response = await client.retrievals.retrieve({
      partition,
      query,
      topK,
      rerank: false,
      recencyBias: prioritizeRecent,
      ...(isBreadth ? { maxChunksPerDocument: 4 } : {}),
    });
  }

  const sources = response.scoredChunks.map((chunk) => {
    return {
      ...chunk.documentMetadata,
      // stream/download links, document name, etc.
    };
  });

  // `company` (tenant display info) is resolved elsewhere in this module
  const chunks = response.scoredChunks;

  return {
    content: renderSystemPrompt({ company, chunks }, tenant.systemPrompt),
    sources,
  };
}

path: app/api/conversations/[conversationId]/messages/utils.ts

This request tells Ragie how to rank, retrieve, and filter relevant chunks:

await client.retrievals.retrieve({
  partition,
  query,
  topK,
  rerank,         // use semantic reranking
  recencyBias,    // prioritize newer content
  maxChunksPerDocument // used for breadth-style search
});

Formatting and response delivery

The scored chunks returned from Ragie are used to:

  • Build the system prompt for the AI model.
  • Show citations and links in the chat UI.
  • Attribute the response to a specific document and location (as shown in earlier screenshots).

Everything happens in real time, and the user sees a grounded, relevant answer with references drawn from their connected knowledge base.
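As an illustration, a minimal, hypothetical version of the prompt-building step might look like the following. Base Chat’s real renderSystemPrompt is more elaborate, and the chunk fields here are assumed names:

```typescript
interface ScoredChunk {
  text: string;
  documentName: string;
}

// Number each retrieved chunk and tag it with its source document, then
// instruct the model to answer only from that context and cite by number.
function buildSystemPrompt(chunks: ScoredChunk[]): string {
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.documentName}) ${c.text}`)
    .join("\n");
  return `Answer using only the context below. Cite sources as [n].\n\n${context}`;
}
```

The numbered tags are what let the UI attribute each answer back to a specific document, as described above.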

Testing the experience

At this point, everything is in place: the Notion knowledge base is connected, the documents are synced and marked as ready in the dashboard, and the chat interface is active.

To test the experience, simply head over to the chat section in Base Chat and start asking questions. Begin with something broad to confirm basic functionality, then ask specific, detail-based questions to verify that the AI is actually referencing your documents.

You should see clear, grounded responses that pull directly from your synced content, often with sources or document names attached. If something feels off, you can revisit the documents tab in the Ragie dashboard to confirm sync status, or check the logs in your app to inspect API calls to Ragie.

This hands-on flow is a good check that everything, from API key setup to retrieval, is wired up correctly.

Final thoughts and what’s next?

Ragie Connect makes it easy to build AI-powered experiences that are deeply tied to your actual business data. With just a few steps, you can give your users the ability to ask questions about their own content, stored in platforms they already use, like Notion, Confluence, Jira, Google Drive, Slack, and more.

In this guide, we focused on Notion, but the same setup works for other connectors with very little change. You could extend this setup to support multiple data sources, enable connector management per workspace, or customize the chat experience with branding, theming, and access control.

Whether you’re building an internal assistant, a support automation tool, or a customer-facing knowledge app, Ragie Connect gives you the flexibility and infrastructure to do it fast, and at scale.
