How Doti Uses AI

Doti is an AI-powered agent platform - but unlike many tools in the space, we approach AI use with transparency, control, and enterprise-grade safety.

Here’s exactly how we use AI under the hood:


We Never Train on Your Data

Doti never fine-tunes or trains any LLMs on your content. Your data is used only to retrieve relevant context at query time, and never leaves your control (in hybrid mode, not even Doti stores it).


Multiple LLMs, Zero Retention

We leverage a multi-LLM architecture, dynamically selecting the best model for each part of the workflow:

  • OpenAI (ChatGPT)

  • Google Gemini

  • Meta LLaMA

  • Others

All models are used under a zero-retention policy and operate statelessly - meaning your data is never stored by third-party providers (not even in audit logs).
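A stateless, zero-retention call can be pictured like this: every request is assembled from scratch, with no conversation history carried over and retention explicitly disabled. This is an illustrative sketch - the field names are assumptions, not Doti's actual request schema.

```python
def build_request(model: str, context_chunks: list[str], question: str) -> dict:
    """Build a fully self-contained LLM request: everything the model
    needs travels in the payload, nothing from earlier turns is reused,
    and the provider is asked not to retain the request."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": "\n\n".join(context_chunks + [question]),
        }],
        "store": False,  # opt out of provider-side retention
    }
```

Because nothing is stored between calls, dropping a provider or swapping a model requires no migration - the next request simply carries its own context.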


Our Architecture: Graph-RAG + Agents

Doti doesn’t just forward a user query to an LLM. Instead, we use a hybrid graph-based retrieval-augmented generation (RAG) engine:

  • Real-time vector search

  • Structured + unstructured data fusion

  • Dynamic query planning

  • Permission-aware filtering

  • LLM selection per task (reasoning, summarizing, answering, routing)
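The permission-aware filtering step above can be sketched as a post-retrieval gate: chunks the asking user cannot read are dropped *before* anything reaches an LLM prompt. The `Chunk` shape and group model here are assumptions for illustration, not Doti's internal schema.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    score: float                 # vector-search relevance score
    allowed_groups: frozenset    # groups permitted to read the source doc

def permission_filter(ranked: list[Chunk],
                      user_groups: frozenset,
                      top_k: int = 5) -> list[Chunk]:
    """Keep only chunks the user is allowed to see, then take the
    best-scoring survivors. Unreadable content never enters the prompt."""
    visible = [c for c in ranked if c.allowed_groups & user_groups]
    return sorted(visible, key=lambda c: c.score, reverse=True)[:top_k]
```

Filtering after retrieval but before generation means access control holds even when the vector index itself is shared across the organization.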


Can I Choose Which LLM Is Used?

Partially. Every question you ask Doti typically invokes 10+ separate LLM tasks, each optimized for a specific role (search planning, context triage, summarization, etc.).

You can choose the final-answer LLM (e.g., ChatGPT vs Gemini) - but the internal pipeline is orchestrated by Doti for performance and quality.


Can I Use “Raw” AI with No Internal Data?

Yes! Doti automatically detects when your question doesn't need enterprise context (e.g., “Write a haiku about coffee”) - and will route it directly to an LLM with no internal search.

You can also explicitly control this by switching to the "Clean LLM Mode" in the interface or asking things like:

“Use GPT only, no company data”
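Conceptually, the router is just a classifier in front of the pipeline. This sketch uses a naive keyword heuristic purely for illustration - the real detection is presumably model-based, and the signal words here are assumptions:

```python
# Words hinting that a question needs internal company context (illustrative).
CONTEXT_SIGNALS = {"our", "company", "internal", "policy", "project"}

def needs_enterprise_context(question: str) -> bool:
    """Decide whether to run retrieval or go straight to a raw LLM.
    Explicit opt-outs like 'no company data' always skip retrieval."""
    q = question.lower()
    if "no company data" in q:
        return False
    words = set(q.replace("?", " ").replace(".", " ").replace(",", " ").split())
    return bool(words & CONTEXT_SIGNALS)
```

A general-knowledge request like a haiku prompt falls through to the raw LLM; a question about "our vacation policy" triggers the full retrieval pipeline.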


Bottom Line

  • No model training on your data

  • Multi-model, zero-retention, stateless pipeline

  • Permissions-aware reasoning

  • Hybrid RAG meets structured agent flows
