Verified Environmental Review & Attestation
NEPA compliance risk for lenders — in seconds, with proof on Solana
Every major U.S. infrastructure project must clear NEPA. Lenders need to know: Is this project's environmental review litigation-ready?
Eight named risk flags aligned to real litigation patterns. Each flag shows the verbatim excerpt and char offset. Regex-based, testable, 18 pytest tests. No black box.
Attest on Solana via SPL Memo. Hash is permanent. Anyone — lender, counterparty, regulator — can verify without trusting our backend.
Actian VectorAI DB + OpenAI embeddings. Ask in plain language across 61K+ projects. RAG answers grounded in retrieved excerpts only.
Project + global chat, AI flag explanations, FAST-41 Stuckness Radar with OPEF Copilot, NEPA Observatory. All LLM inference on-device — no document text leaves your machine.
From search to tamper-proof compliance record:
Side by side with the main flow:
Ask any question about a project's documents. Retrieval-augmented — model answers only from retrieved excerpts.
Natural-language queries across the full corpus via Actian. "Which EIS documents have weak EJ analysis?"
We ingest NEPATEC2.0 (PNNL / HuggingFace): CE, EA, and EIS JSONL — 61,881 projects · ~6.97M pages · 60+ agencies.
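The ingestion step above can be sketched as a streaming JSONL reader — a minimal illustration, not the production pipeline (the real loader also handles CE/EA/EIS file variants and metadata normalization; `iter_projects` is a hypothetical name):

```python
import json

def iter_projects(jsonl_path):
    """Stream one project record per line from a NEPATEC-style JSONL file.

    Streaming keeps memory flat even across millions of pages: each line is
    parsed and yielded independently, so the full corpus never sits in RAM.
    """
    with open(jsonl_path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines between records
                yield json.loads(line)
```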
Eight deterministic regex detectors. Every flag: verbatim excerpt, char offset, severity. Auditable, testable, reproducible.
| Flag | Sev. | What it catches |
|---|---|---|
| deferred_mitigation | High | Mitigation pushed to "future phases" or "final design" |
| future_studies_reliance | High | Approval contingent on incomplete studies |
| ej_absent | High | No EJ analysis present (EA/EIS only) |
| no_action_absent | High | No-action alternative missing (EA/EIS only) |
| ej_thin_coverage | Med | EJ mentioned in passing only (EA/EIS only) |
| no_action_thin | Med | No-action dismissed without analysis (EA/EIS only) |
| cumulative_impacts_thin | Med | Cumulative impacts deferred or minimal (EA/EIS only) |
| tribal_interests | Info | Tribal consultation found — review completeness |
CE documents are exempt from EJ/no-action/cumulative flags (process-type gating). 18 pytest tests enforce this.
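The detector design described above — deterministic regexes, verbatim excerpts with char offsets, process-type gating for CE documents — can be sketched as follows. The single pattern here is a simplified stand-in; the real flag library uses eight tuned detectors:

```python
import re
from dataclasses import dataclass

# Hypothetical pattern; the production library is larger and tuned to
# real litigation language for each of the eight flags.
PATTERNS = {
    "deferred_mitigation": re.compile(
        r"mitigation\s+(?:will|to)\s+be\s+(?:developed|determined|finalized)"
        r"\s+(?:in|during)\s+(?:future phases|final design)",
        re.IGNORECASE,
    ),
}

# Flags that apply only to EA/EIS documents, never to CEs (process-type gating).
EA_EIS_ONLY = {"ej_absent", "no_action_absent", "ej_thin_coverage",
               "no_action_thin", "cumulative_impacts_thin"}

@dataclass
class Flag:
    name: str
    severity: str
    excerpt: str   # verbatim text that triggered the flag
    offset: int    # character offset into the source document

def scan(text: str, process_type: str) -> list[Flag]:
    """Run every applicable detector and record excerpt + offset for audit."""
    flags = []
    for name, pattern in PATTERNS.items():
        if process_type == "CE" and name in EA_EIS_ONLY:
            continue  # CE documents are exempt from these flags
        for m in pattern.finditer(text):
            flags.append(Flag(name, "High", m.group(0), m.start()))
    return flags
```

Because every flag carries the exact matched text and its offset, a reviewer can jump straight to the triggering passage — which is what makes the approach auditable rather than a black box.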
Actian VectorAI DB stores OpenAI text-embedding-3-small vectors for document chunks (600 chars, 80 overlap). Query → embed → K-NN → top chunks.
K-NN similarity. Filter by process_type, agency, state. Returns ranked passages with scores.
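The 600-char / 80-overlap chunking mentioned above can be sketched as a fixed-window splitter — a minimal version; the production splitter may also respect sentence or paragraph boundaries:

```python
def chunk_text(text: str, size: int = 600, overlap: int = 80):
    """Split text into fixed-size chunks with overlap.

    Returns (start_offset, chunk) pairs so each chunk can be traced back
    to its position in the source document. The overlap keeps sentences
    that straddle a boundary fully inside at least one chunk.
    """
    step = size - overlap  # 520 chars of fresh text per chunk
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append((start, chunk))
        if start + size >= len(text):
            break
    return chunks
```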
Retrieve top-k chunks → "answer only from these excerpts" prompt → Ollama answers locally. Grounding in retrieved text sharply limits hallucination.
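The grounding prompt can be assembled as below — a sketch of the pattern, not VERA's exact prompt wording:

```python
def build_grounded_prompt(question: str, chunks: list[str]) -> str:
    """Assemble an 'answer only from these excerpts' prompt for the local model.

    Numbering the excerpts lets the model (and the reader) cite which
    passage supports each claim.
    """
    excerpts = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using ONLY the excerpts below. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"Excerpts:\n{excerpts}\n\n"
        f"Question: {question}\nAnswer:"
    )
```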
If Actian unreachable (Apple Silicon, air-gapped), numpy cosine-similarity store activates automatically. Same API — callers can't tell.
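A numpy cosine-similarity fallback of the kind described is small enough to sketch in full — class and method names here are illustrative, not VERA's actual API:

```python
import numpy as np

class CosineStore:
    """Brute-force in-memory vector store: same query contract as the
    Actian-backed store, so callers can't tell which one is active."""

    def __init__(self):
        self.ids: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, doc_id: str, vec) -> None:
        self.ids.append(doc_id)
        self.vecs.append(np.asarray(vec, dtype=np.float32))

    def query(self, vec, k: int = 5):
        """Return the top-k (id, cosine_similarity) pairs for a query vector."""
        q = np.asarray(vec, dtype=np.float32)
        m = np.stack(self.vecs)
        sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-12)
        top = np.argsort(-sims)[:k]
        return [(self.ids[i], float(sims[i])) for i in top]
```

Brute force is fine here: the fallback only needs to serve a single analyst's machine, where exact K-NN over a few hundred thousand chunks is fast enough.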
After a scan: SHA-256(project_id + timestamp + flag detail + doc hashes) → write to SPL Memo program on devnet. One instruction, no custom contract.
Compact JSON memo (≤566 bytes): pid, ts, flag counts, and a sha256 field carrying the hash. Full detail stays in our API. Anyone can recompute the hash from our flags endpoint.
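The hash-and-memo construction above can be sketched as follows. The exact field order and serialization inside the hashed payload are assumptions here — the real scheme is whatever VERA's attest endpoint canonicalizes:

```python
import hashlib
import json

def attestation_hash(project_id: str, timestamp: str,
                     flag_detail: list, doc_hashes: list[str]) -> str:
    """SHA-256 over a canonical concatenation of the scan result.

    sort_keys makes the flag-detail serialization deterministic, so any
    third party who fetches the same flags can reproduce the digest.
    """
    payload = (project_id + timestamp
               + json.dumps(flag_detail, sort_keys=True)
               + "".join(doc_hashes))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_memo(project_id: str, timestamp: str,
               flag_counts: dict, digest: str) -> str:
    """Compact JSON for the SPL Memo instruction; must fit the size budget."""
    memo = json.dumps(
        {"pid": project_id, "ts": timestamp, "flags": flag_counts, "sha256": digest},
        separators=(",", ":"),  # no whitespace — every byte counts on-chain
    )
    assert len(memo.encode("utf-8")) <= 566, "exceeds SPL Memo payload budget"
    return memo
```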
A third party can verify without calling our Verify endpoint: fetch the raw tx from any Solana Explorer, decode the memo, call /api/.../flags, recompute SHA-256, compare. No trust in VERA required.
Doc hashes in the payload prove integrity of the underlying documents — if the document changed after attestation, the hash won't match.
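The trustless check described above reduces to a few lines once the memo is decoded from the raw transaction. The field names and payload layout here mirror the hashing sketch and are assumptions about the real scheme:

```python
import hashlib
import json

def verify_attestation(memo_json: str, flags_response: dict) -> bool:
    """Recompute the digest from the public flags endpoint's response and
    compare it to the sha256 field decoded from the on-chain memo.

    Returns True only if the flags data reproduces the attested hash —
    no trust in the attesting backend required.
    """
    memo = json.loads(memo_json)
    payload = (flags_response["project_id"]
               + flags_response["timestamp"]
               + json.dumps(flags_response["flags"], sort_keys=True)
               + "".join(flags_response["doc_hashes"]))
    recomputed = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return recomputed == memo["sha256"]
```

Any mutation — a changed flag, a swapped document, a backdated timestamp — flips the comparison to False.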
Ask anything about a project's documents. Context: project metadata + document chunks + stored flags. Ollama answers from retrieved excerpts only.
Per-flag LLM explanation: given the flag type and triggering excerpt, Ollama explains in 1–2 sentences why this matters to a lender.
Every LLM call is logged: prompt SHA-256, model, tokens, response, timestamp. Full audit trail for every generated output.
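An audit record of that shape is simple to sketch — field names are assumptions; note the prompt is stored as a digest, not raw text, consistent with keeping document content off the wire:

```python
import hashlib
import time

def log_llm_call(log: list, prompt: str, model: str,
                 response: str, tokens: int) -> dict:
    """Append one audit record per LLM call.

    Hashing the prompt preserves verifiability (you can prove which prompt
    was used) without persisting sensitive document text in the log.
    """
    record = {
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "model": model,
        "tokens": tokens,
        "response": response,
        "timestamp": time.time(),
    }
    log.append(record)
    return record
```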
VERA exposes its data as 10 structured tools via the Model Context Protocol — project search, document retrieval, compliance flags, stats. Any MCP-compatible AI assistant (Claude Desktop, etc.) can query VERA as a structured knowledge source. Run with stdio or SSE transport.
20+ FastAPI endpoints: search, scan, flags, attest, verify, project chat, global chat, semantic search/ask/index, dashboard, radar. Full interactive docs at /docs. All endpoints are JSON; easy to integrate into any lender workflow.
In-app explainer pages for every major feature: /actian.html · /solana.html · /signals.html · /stuckness.html · /data-pipeline.html
Walk through in ~2 minutes:
Run a scan and see flags (e.g. ej_absent, deferred_mitigation) with excerpts + char offsets.
App: index.html · Docs: /docs
Today: Lenders get NEPA compliance risk in seconds. Flags are auditable and testable. Attestations are tamper-proof and verifiable by any third party without trusting us. All LLM inference is on-device — safe for sensitive deal documents.
Next: Mainnet attestations (multisig), expanded signal library (climate, water, species, per-agency tuning), bulk attest for portfolio due diligence, live ePlanning API data feed, embeddings for the full 60K+ project corpus.
Thank you.