Docs / Platform Overview

From demo app to your own deployment.

See what kinds of data the runtime can absorb, how the hosted and local paths relate, and where the repo takes over.

Install and deployment

Disaster Clippy is currently a demo-plus-GitHub product. The hosted app helps you evaluate the interaction model; the public repository is where you actually clone, configure, and run your own deployment. A simpler launcher or wrapper may come later, but it is not the install path today.

The hosted app

The hosted app is the public example of the system in use. It demonstrates a trustworthy pattern: ask a question, search a curated collection, return an answer with citations, and let the user inspect the source. The preparedness dataset is the example deployment that makes that pattern real.

What you can bring

The local tooling can ingest websites, PDFs, static HTML archives, video transcripts, Substack exports, MediaWiki sites, and ZIM archives. The point is not one perfect source type. The point is that a mixed body of knowledge can be normalized into the same searchable runtime.
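As a sketch of what "normalized into the same searchable runtime" might look like, the snippet below maps several raw source types onto one shared record shape. The `SourceDocument` class and `normalize` function are hypothetical illustrations, not the repo's actual schema, which may differ.

```python
from dataclasses import dataclass

# Hypothetical normalized record; the real repo's schema may differ.
@dataclass
class SourceDocument:
    source_type: str   # "pdf", "html", "transcript", "zim", ...
    title: str
    url: str           # original location, kept so answers can cite sources
    text: str          # extracted plain text, ready for chunking and embedding

def normalize(raw: dict) -> SourceDocument:
    """Map a raw ingest payload onto the shared record shape."""
    return SourceDocument(
        source_type=raw.get("type", "html"),
        title=raw.get("title", "Untitled"),
        url=raw.get("url", ""),
        text=raw.get("text", ""),
    )
```

Once every source type lands in the same record shape, the downstream search and citation logic never has to care whether a document started life as a PDF, a transcript, or a ZIM entry.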

That includes Kiwix ZIM archives. For many users, this is the easiest test path: set up a local deployment, choose Kiwix libraries you already trust, and turn them into a searchable, source-cited collection.
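Kiwix content is stored as HTML pages, so the core of that test path is turning HTML into indexable plain text. Below is a minimal stdlib-only sketch of that step, assuming pages have already been extracted to disk; the actual pipeline likely reads ZIM archives directly through a library such as libzim.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

The extracted text can then be normalized and embedded like any other source, while the page's original path is kept for citations.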

How the architecture stays simple

The runtime, the collection, and the source-building workflows are separated on purpose. The runtime handles search and retrieval. The collection is the knowledge layer you can inspect and swap. Local admin handles ingestion, validation, translation, and packaging. That separation lets the same interface run hosted, local, or fully offline without changing the core product shape.
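One way to picture that separation: the runtime depends only on a narrow store interface, so the backing vector store can be swapped without touching search logic. The `VectorStore` protocol and `InMemoryStore` below are illustrative stand-ins, not the repo's real classes.

```python
from typing import Protocol

class VectorStore(Protocol):
    """The runtime codes against this interface only, so the backing
    store (local ChromaDB, cloud Pinecone, a toy in-memory list) is swappable."""
    def query(self, embedding: list[float], k: int) -> list[str]: ...

class InMemoryStore:
    """Toy stand-in: ranks stored vectors by dot product with the query."""
    def __init__(self):
        self.items: list[tuple[list[float], str]] = []

    def add(self, embedding: list[float], doc_id: str) -> None:
        self.items.append((embedding, doc_id))

    def query(self, embedding: list[float], k: int) -> list[str]:
        scored = sorted(
            self.items,
            key=lambda item: -sum(a * b for a, b in zip(item[0], embedding)),
        )
        return [doc_id for _, doc_id in scored[:k]]
```

Because the collection is just data behind that interface, inspecting or swapping it never requires changing the runtime itself.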

What "bring your own data" actually means

You do not need to rebuild the app from scratch to adapt it to a different domain. Point the ingestion pipeline at your sources, build a collection, and run the same search experience against it. Preparedness is the current example. Building codes, technical references, internal documentation, and humanitarian archives are all in scope.
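The point-ingest-and-query loop described above can be sketched end to end. This toy version scores by keyword overlap so it runs anywhere; the real runtime uses vector search, and both function names here are hypothetical.

```python
def build_collection(pages: dict[str, str]) -> list[dict]:
    """Turn {url: text} pages into citation-carrying records."""
    return [{"url": url, "text": text} for url, text in pages.items()]

def search(collection: list[dict], query: str, k: int = 3) -> list[dict]:
    """Naive keyword-overlap ranking; stands in for vector retrieval."""
    terms = set(query.lower().split())
    scored = sorted(
        collection,
        key=lambda rec: -len(terms & set(rec["text"].lower().split())),
    )
    return scored[:k]
```

Swap the preparedness pages for building codes or internal documentation and the same two calls produce a domain-specific, source-cited search experience.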

What you are working with

The runtime is Python. Vector search uses ChromaDB for local installs and Pinecone for cloud-backed deployments. Embeddings run at two dimensions: 768 for local and offline use, 1536 for cloud deployments. Offline language model support is handled through Ollama, so no external API calls are required. The same codebase runs on a laptop, a home server, a Raspberry Pi, or an air-gapped node. The admin and ingestion tools are included in the public repository with no separate installation.
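The store-and-dimension pairings above can be summarized as a small profile table. The figures (ChromaDB with 768-dim embeddings locally, Pinecone with 1536-dim embeddings in the cloud) come from this page; the dictionary shape and `profile_for` helper are illustrative, not the repo's actual configuration format.

```python
# Pairings stated on this page; the key names are illustrative.
DEPLOYMENT_PROFILES = {
    "local":   {"vector_store": "chromadb", "embedding_dim": 768,  "llm": "ollama"},
    "offline": {"vector_store": "chromadb", "embedding_dim": 768,  "llm": "ollama"},
    "cloud":   {"vector_store": "pinecone", "embedding_dim": 1536, "llm": "api"},
}

def profile_for(mode: str) -> dict:
    """Look up the store/dimension/LLM combination for a deployment mode."""
    if mode not in DEPLOYMENT_PROFILES:
        raise ValueError(f"unknown deployment mode: {mode}")
    return DEPLOYMENT_PROFILES[mode]
```

Keeping the pairing in one place matters because embeddings are not interchangeable across dimensions: a collection built at 768 dimensions cannot be queried against a 1536-dimension index.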

GitHub

GitHub is where the actual getting-started path lives: clone, configure, run locally, and explore the ingestion tools in detail.

Last updated: March 2026