VERA

How Actian Integration Works

Semantic search over NEPA documents: embeddings, K-NN search, and a local fallback when Actian isn’t available.

What it does

Actian VectorAI DB is our vector store. We turn document chunks into dense vectors (embeddings) and store them in Actian. When you type a query, we embed it the same way and ask Actian for the most similar chunks by meaning—not just keyword match. That powers semantic search and the “Ask the Archive” RAG answers.
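"Similar by meaning" here is cosine similarity between embedding vectors. A minimal sketch with toy 3-dimensional vectors (real OpenAI embeddings typically have 1,536 or more dimensions; the vectors below are made up for illustration):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means the vectors point the same way (ideally: same meaning)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" — real ones come from the OpenAI embeddings API
query   = np.array([0.9, 0.1, 0.0])
chunk_a = np.array([0.8, 0.2, 0.1])  # semantically close to the query
chunk_b = np.array([0.0, 0.1, 0.9])  # unrelated passage

# chunk_a ranks above chunk_b even though no keywords are compared
assert cosine_similarity(query, chunk_a) > cosine_similarity(query, chunk_b)
```

This is why a query about "wetland mitigation" can surface a chunk that says "compensatory restoration of marsh habitat": the vectors are close even when the words differ.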

Indexing (getting data into Actian)

When you click Index Documents on the Semantic Search page:

1 Each document is split into text chunks of manageable size.
2 Every chunk is embedded with an OpenAI embedding model.
3 The resulting vectors are stored in Actian together with metadata (e.g. process_type, state) used later for filtering.

After indexing, the same chunks can be retrieved by semantic similarity in milliseconds.
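The indexing flow described above can be sketched as chunk → embed → store. In this sketch, `embed` stands in for the OpenAI embeddings call and the `store` dict stands in for the Actian client; all names are illustrative, not the real API:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Stand-in for the OpenAI embeddings call: a deterministic pseudo-vector
    # derived from a hash, normalized to unit length.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def chunk_text(text: str, size: int = 200) -> list[str]:
    # Naive fixed-size chunking; a real pipeline may split on sections instead.
    return [text[i:i + size] for i in range(0, len(text), size)]

store: dict[str, tuple[np.ndarray, dict]] = {}  # stands in for Actian upserts

def index_document(doc_id: str, text: str, metadata: dict) -> int:
    """Chunk, embed, and store one document; returns the number of chunks."""
    pieces = chunk_text(text)
    for i, piece in enumerate(pieces):
        store[f"{doc_id}:{i}"] = (embed(piece), {**metadata, "text": piece})
    return len(pieces)
```

In the real pipeline the upsert goes through the Actian client rather than a dict, but the chunk → embed → store shape is the same.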

Search (semantic similarity)

When you run a Semantic Search query:

1 Your query is embedded with the same OpenAI model.
2 We call Actian’s K-NN search: “return the top‑k vectors most similar to this query vector” (cosine similarity).
3 Optional filters (e.g. process_type = EIS, state = NV) restrict results to matching chunks.
4 Actian returns the top chunks with similarity scores; we show them in the UI.

So: your question → one vector → Actian finds the closest stored vectors → we show those passages. No keyword index is used for this path.

Ask the Archive (RAG)

When you use Ask the Archive:

1 Your question is embedded and Actian retrieves the most relevant chunks, exactly as in Semantic Search.
2 The retrieved chunks are assembled into a prompt together with your question.
3 A local Ollama model reads that prompt and writes an answer grounded in the retrieved passages.

Actian’s job is retrieval; Ollama’s job is synthesis. The document text in the prompt never leaves your machine.
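The retrieval/synthesis split amounts to prompt assembly: retrieved chunks become the context the local model is told to answer from. A sketch (the function name and prompt wording are illustrative; the commented-out call shows the shape of a request to Ollama's local HTTP API, with the model name as an assumption):

```python
def build_rag_prompt(question: str, chunks: list[str]) -> str:
    # Ground the model: it should answer only from the retrieved passages.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the passages below.\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The prompt would then be sent to the local Ollama server, e.g.
#   requests.post("http://localhost:11434/api/generate",
#                 json={"model": "llama3", "prompt": prompt, "stream": False})
# so the retrieved document text stays on your machine.
```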

If Actian isn’t available

We check once (with a short timeout) whether Actian’s gRPC port is reachable. If it isn’t—e.g. on Apple Silicon without the x86 Docker image, or in an air-gapped environment—we switch to a local numpy store that keeps the same embeddings in memory and ranks them by cosine similarity.

So the app always supports semantic search; Actian is the preferred backend when it’s running.
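A minimal version of such a fallback, assuming vectors are normalized once at insert time so that search is a single matrix–vector product (class and method names are illustrative, not the app's actual interface):

```python
import numpy as np

class LocalVectorStore:
    """In-memory numpy fallback: same add/search contract, no server needed."""

    def __init__(self) -> None:
        self.ids: list[str] = []
        self.vectors: np.ndarray | None = None  # one row per stored chunk

    def add(self, chunk_id: str, vector: np.ndarray) -> None:
        v = vector / np.linalg.norm(vector)      # normalize once on insert
        row = v[np.newaxis, :]
        self.vectors = row if self.vectors is None else np.vstack([self.vectors, row])
        self.ids.append(chunk_id)

    def search(self, query: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
        q = query / np.linalg.norm(query)
        scores = self.vectors @ q                # all cosine sims in one product
        top = np.argsort(scores)[::-1][:k]       # indices of the k best scores
        return [(self.ids[i], float(scores[i])) for i in top]
```

Because both backends expose the same add/search contract, the rest of the app does not care which one answered the query; Actian simply does it at scale and with persistence.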

Key files to look at

In one sentence

We embed NEPA chunks with OpenAI, store the vectors in Actian VectorAI DB (or a local numpy fallback), and use K-NN search to find the most relevant chunks for your query or for RAG with Ollama.