Once we’ve synced documents across File Storage systems, we chunk and embed them so you can power your RAG applications and enable advanced retrieval search.

Step 1: Import the code snippet

Use the SDK to retrieve embeddings and chunks for a query.
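The original snippet for this step isn't reproduced here, so the following is a minimal sketch of what a retrieval call typically looks like, assuming a REST-style endpoint and a bearer API key. The URL, request fields (`query`, `k`), environment variable name, and response shape are placeholders, not the exact SDK method names — check the API reference for the real ones.

```python
import os

import requests

# Placeholder endpoint -- substitute the real query/retrieval route from the API reference.
API_URL = "https://api.example.com/v1/query"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MY_API_KEY']}"},  # hypothetical env var name
    json={
        "query": "What is our refund policy?",  # natural-language search query
        "k": 5,                                 # number of chunks to return
    },
    timeout=30,
)
response.raise_for_status()

# Each hit is assumed to carry the matched chunk text, its embedding, and a relevance score.
for hit in response.json().get("results", []):
    print(hit)
```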

Congrats! You should now get back the embeddings and chunks for your query.

If you self-host, make sure to complete Step 2, or fill in these environment variables directly in your .env file (see the example below).

By default, we use OpenAI's ADA-002 model for embeddings and a Pinecone-managed vector database for storing the chunks.
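If you self-host with the defaults above (OpenAI for embeddings, Pinecone for storage), your .env would typically carry the corresponding credentials. The variable names below are illustrative placeholders, not necessarily the exact keys this project reads:

```
# Illustrative names only -- match these to the keys documented for your deployment.
OPENAI_API_KEY=sk-...
PINECONE_API_KEY=...
PINECONE_ENVIRONMENT=us-east-1
PINECONE_INDEX_NAME=my-chunks-index
```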

Step 2 (Optional): Choose your own Vector DB + Embedding Model

On the Configuration page, open the RAG settings and provide your own credentials for your vector database and embedding model.