Running LoraDB as an HTTP Server
Overview
lora-server wraps the Rust engine in a small Axum HTTP server —
useful for probing the engine with curl, serving a polyglot stack,
or running demos. One process serves exactly one graph. The graph
lives in memory while the process runs, and can optionally be paired
with snapshots and a WAL for recovery across restarts.
Installation / Setup
Install
cargo install --path crates/lora-server
Or, inside the workspace:
cargo run --release -p lora-server
Configure
lora-server # 127.0.0.1:4747
lora-server --host 0.0.0.0 --port 8080
LORA_SERVER_HOST=0.0.0.0 LORA_SERVER_PORT=8080 lora-server
Precedence (first match wins): CLI flags → environment variables →
built-in defaults (127.0.0.1:4747).
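For example, when both a flag and its environment variable are set, the flag wins (a purely illustrative invocation):

```shell
# Binds port 8080: the CLI flag outranks the environment variable.
LORA_SERVER_PORT=9000 lora-server --port 8080
```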
Most runtime flags also have an env-var equivalent:
| Flag | Env var | Default | Description |
|---|---|---|---|
| `--host <ADDR>` | `LORA_SERVER_HOST` | `127.0.0.1` | Bind address. |
| `--port <PORT>` | `LORA_SERVER_PORT` | `4747` | Bind port. |
| `--snapshot-path <PATH>` | `LORA_SERVER_SNAPSHOT_PATH` | unset | Default file for the admin snapshot endpoints. Also gates whether they are mounted — unset = `404`. |
| `--restore-from <PATH>` | — | unset | Load a snapshot at boot, before accepting queries. |
| `--wal-dir <DIR>` | `LORA_SERVER_WAL_DIR` | unset | Attach a write-ahead log at this directory and enable the WAL admin routes. |
| `--wal-sync-mode <MODE>` | `LORA_SERVER_WAL_SYNC_MODE` | `per-commit` | WAL durability cadence: `per-commit`, `group`, or `none`. |
Snapshots, WAL, and restore
LoraDB can persist the live graph in two complementary ways:
- Snapshots for explicit point-in-time save / load.
- WAL for replaying committed writes after a crash or restart.
- `--snapshot-path <PATH>` (or `LORA_SERVER_SNAPSHOT_PATH`) enables the admin endpoints `POST /admin/snapshot/save` and `POST /admin/snapshot/load`, and supplies the default file they operate on. If unset, the admin routes return `404`.
- `--wal-dir <DIR>` (or `LORA_SERVER_WAL_DIR`) attaches a write-ahead log at that directory and enables `POST /admin/checkpoint`, `POST /admin/wal/status`, and `POST /admin/wal/truncate`.
- `--wal-sync-mode <MODE>` chooses when the WAL `fsync`s: `per-commit` (default), `group`, or `none`.
- `--restore-from <PATH>` loads a snapshot at startup. A missing file is fine — the server starts with an empty graph and logs a message. A malformed file is fatal. When `--wal-dir` is also set, committed WAL records newer than the snapshot fence are replayed before the server begins accepting queries.
Typical cron-friendly setup — boot from, and save back to, the same file:
lora-server \
--host 127.0.0.1 --port 4747 \
--snapshot-path /var/lib/lora/db.bin \
--restore-from /var/lib/lora/db.bin
WAL-only setup:
lora-server \
--host 127.0.0.1 --port 4747 \
--wal-dir /var/lib/lora/wal
With only --wal-dir, WAL recovery works and the WAL admin routes are
mounted. Checkpoints need an explicit path in the request body because
there is no configured snapshot default:
curl -sX POST http://127.0.0.1:4747/admin/checkpoint \
-H 'content-type: application/json' \
-d '{"path":"/var/lib/lora/checkpoint.bin"}'
WAL plus checkpoint default path:
lora-server \
--host 127.0.0.1 --port 4747 \
--wal-dir /var/lib/lora/wal \
--snapshot-path /var/lib/lora/db.bin \
--restore-from /var/lib/lora/db.bin
Now a body-less checkpoint writes to --snapshot-path:
curl -sX POST http://127.0.0.1:4747/admin/checkpoint
Inspect WAL state:
curl -sX POST http://127.0.0.1:4747/admin/wal/status
Save on demand:
curl -sX POST http://127.0.0.1:4747/admin/snapshot/save
# => {"formatVersion":1,"nodeCount":1024,"relationshipCount":4096,"walLsn":null,"path":"/var/lib/lora/db.bin"}
Or save to an ad-hoc override path for a single call:
curl -sX POST http://127.0.0.1:4747/admin/snapshot/save \
-H 'content-type: application/json' \
-d '{"path":"/var/backups/lora/2026-04-24.bin"}'
Load (restores on top of the live graph — serialises against every other query):
curl -sX POST http://127.0.0.1:4747/admin/snapshot/load
--restore-from is independent of --snapshot-path. You can restore
from a read-only seed (/var/lib/lora/seed.bin) and snapshot to a
writable path (/var/lib/lora/runtime.bin).
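For example:

```shell
# Boot from a read-only seed; on-demand snapshots go to the writable path.
lora-server \
  --restore-from /var/lib/lora/seed.bin \
  --snapshot-path /var/lib/lora/runtime.bin
```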
The admin endpoints have no authentication and the optional path
body field is passed straight to the OS — anyone who can reach the
admin port can write files anywhere the server UID can write, or swap
the live graph by pointing load at an attacker-staged file. The same
warning applies to /admin/checkpoint. See
Limitations → HTTP server and the
HTTP API reference before
exposing them.
See also the canonical Snapshots guide for the metadata shape, file format, and every binding's save / load API, and WAL and checkpoints for recovery and checkpoint semantics.
Creating a Client / Connection
The client is any HTTP client. Verify the server is alive before sending queries:
curl http://127.0.0.1:4747/health
# { "status": "ok" }
Running Your First Query
curl -s http://127.0.0.1:4747/query \
-H 'content-type: application/json' \
-d '{"query":"CREATE (:Person {name: \"Ada\"})"}'
Then read it back:
curl -s http://127.0.0.1:4747/query \
-H 'content-type: application/json' \
-d '{"query":"MATCH (p:Person) RETURN p.name AS name","format":"rows"}'
Examples
Minimal working example with curl
Shown above. Two POST /query calls.
Parameterised query
POST /query does not currently accept a params body field —
see Limitations → Parameters.
Interpolate constants safely into the query string yourself, or use
the Rust API. HTTP parameters are on the roadmap.
Safe-enough pattern — build the literal server-side when the values are trusted and fully encoded:
NAME='Ada'
curl -s http://127.0.0.1:4747/query \
-H 'content-type: application/json' \
--data-binary "$(jq -n --arg q "MATCH (p:Person {name: '$NAME'}) RETURN p" '{query:$q}')"
For anything user-supplied, run against the Rust binding with real parameters and expose a narrower API on top.
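If you front the server with your own service anyway, a tiny escaping helper keeps the interpolation in one place. The helper below is illustrative (not part of LoraDB) and assumes Cypher-style backslash escapes in string literals; it is a stopgap, not a substitute for real parameters:

```typescript
// Illustrative only: wrap a trusted string as a Cypher string literal,
// escaping backslashes and single quotes. Not real parameter binding.
function cypherString(value: string): string {
  return "'" + value.replace(/\\/g, "\\\\").replace(/'/g, "\\'") + "'";
}

const name = "O'Brien";
const query = `MATCH (p:Person {name: ${cypherString(name)}}) RETURN p`;
console.log(query); // prints: MATCH (p:Person {name: 'O\'Brien'}) RETURN p
```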
Structured result handling with jq
curl -s http://127.0.0.1:4747/query \
-H 'content-type: application/json' \
-d '{"query":"MATCH (p:Person) RETURN p.name AS name","format":"rows"}' \
| jq '.rows[].name'
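The same idea extends to CSV. The payload below is inlined purely for illustration (made-up values in the documented `columns`/`rows` shape); the filter itself works unchanged on a live `format: "rows"` response:

```shell
# Rows-format response -> CSV: header line first, then one line per row.
# Indexing each row by .columns avoids relying on object key order.
resp='{"columns":["name","age"],"rows":[{"name":"Ada","age":36},{"name":"Lin","age":29}]}'
echo "$resp" | jq -r '.columns as $c | $c, (.rows[] | [.[$c[]]] | map(tostring)) | @csv'
# "name","age"
# "Ada","36"
# "Lin","29"
```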
Node (TypeScript) client example
async function runQuery(query: string) {
const res = await fetch('http://127.0.0.1:4747/query', {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({ query, format: 'rows' }),
});
if (!res.ok) {
const body = await res.json().catch(() => ({}));
throw new Error(body.error ?? `http ${res.status}`);
}
return res.json() as Promise<{ columns: string[]; rows: any[] }>;
}
const { rows } = await runQuery('MATCH (p:Person) RETURN count(*) AS n');
console.log(rows[0].n);
Handle errors
HTTP status codes:
| Status | Meaning |
|---|---|
| `200` | Query executed successfully; body is a `QueryResult` |
| `400` | Parse / semantic / runtime error; body is `{ "error": "…" }` |
A `400` body looks like:
{ "error": "parse error: expected ')' at position 17" }
Handle both explicitly; never assume 200 on a mis-typed query.
Embedding in a larger Axum app
lora-server is also a library — embed it in a larger Axum
application, or run several processes on different ports for
isolation:
use std::sync::Arc;
use lora_database::Database;
use lora_server::build_app;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let db = Arc::new(Database::in_memory());
let app = build_app(Arc::clone(&db));
let listener = tokio::net::TcpListener::bind("127.0.0.1:4747").await?;
axum::serve(listener, app).await?;
Ok(())
}
Mount build_app(db) under any sub-path, combine it with your own
routes, add middleware — it's a standard Axum Router.
Endpoints
GET /health
Liveness check.
curl http://127.0.0.1:4747/health
# { "status": "ok" }
POST /query
Request body:
{
"query": "MATCH (n) RETURN n",
"format": "rowArrays"
}
- `query` — Cypher string (required).
- `format` — one of `"rows"`, `"rowArrays"`, `"graph"`, `"combined"` (optional; defaults to `"graph"`). See Result formats for the full shape of each.
POST /admin/snapshot/save (opt-in)
POST /admin/snapshot/load (opt-in)
Both are mounted only when the server is started with
--snapshot-path <PATH> (or LORA_SERVER_SNAPSHOT_PATH). Otherwise
they return 404. See Snapshots, WAL, and restore
above, and the full reference in
HTTP API → Admin endpoints (opt-in).
POST /admin/checkpoint (opt-in)
Mounted when the server is started with --wal-dir <DIR>. Uses
--snapshot-path as the default target when configured; otherwise the
request body must supply { "path": "..." }.
POST /admin/wal/status (opt-in)
Mounted when the server is started with --wal-dir <DIR>. Returns the
current durable LSN, next LSN, active and oldest segment ids, and any
latched background fsync failure.
POST /admin/wal/truncate (opt-in)
Mounted when the server is started with --wal-dir <DIR>. Drops sealed
WAL segments up to a fence LSN. With no body, the server truncates up
to the current durable LSN.
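For example, a body-less call truncates up to the current durable LSN:

```shell
curl -sX POST http://127.0.0.1:4747/admin/wal/truncate
```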
Common Patterns
Seed via stdin
cat seed.cypher | while IFS= read -r q; do
curl -s http://127.0.0.1:4747/query \
-H 'content-type: application/json' \
--data-binary "$(jq -n --arg q "$q" '{query:$q}')" > /dev/null
done
Where seed.cypher has one Cypher statement per line.
Health check script
status=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:4747/health)
[ "$status" = 200 ] && echo 'ok' || echo 'down'
Embedding with custom routes
use axum::routing::get;
use std::sync::Arc;
use lora_database::Database;
use lora_server::build_app;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let db = Arc::new(Database::in_memory());
let app = build_app(Arc::clone(&db))
.route("/version", get(|| async { "loradb custom" }));
let listener = tokio::net::TcpListener::bind("127.0.0.1:4747").await?;
axum::serve(listener, app).await?;
Ok(())
}
Multiple graphs
One process serves exactly one graph. Run multiple processes on different ports and put a reverse proxy in front when you need isolation.
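One way to front several such processes is a reverse proxy. The fragment below is a hypothetical nginx sketch (upstream names, ports, and paths are made up, and TLS details are elided):

```nginx
# Two lora-server processes, one graph each, behind a single nginx.
upstream lora_users  { server 127.0.0.1:4747; }
upstream lora_orders { server 127.0.0.1:4748; }

server {
  listen 80;
  # Add TLS and rate limiting here -- lora-server ships with neither.
  location /users/  { proxy_pass http://lora_users/; }
  location /orders/ { proxy_pass http://lora_orders/; }
}
```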
Error Handling
| Symptom | Likely cause | Fix |
|---|---|---|
| `Address already in use` | Port held by another process | See Troubleshooting → Server |
| `400` on every request | Missing `content-type: application/json` | Add the header |
| Silent empty rows | Query targets a label that doesn't exist yet | Seed before reading |
What's not here
- Authentication, TLS, rate limiting — none. Bind to `127.0.0.1` or put it behind a reverse proxy. The admin snapshot and WAL endpoints also ship without auth — see Snapshots, WAL, and restore.
- Parameter binding over HTTP — the `/query` body does not currently accept a `params` field. Bind via the Rust API today; HTTP params are on the roadmap. See Limitations.
- Multiple databases — one process serves exactly one graph. Run multiple processes on different ports if you need isolation.
Performance / Best Practices
- Put the server behind a reverse proxy (nginx, Caddy, Traefik) for TLS and rate limiting — the built-in server has none.
- Bind to `127.0.0.1` unless you control the network.
- For a polyglot stack, embed `build_app(db)` into a larger Axum process rather than running a separate `lora-server`.
See also
- HTTP API reference — endpoint-by-endpoint reference.
- HTTP API → Admin endpoints (opt-in) — full reference for snapshot and WAL admin routes.
- Snapshots guide — canonical feature page: metadata shape, file format, every binding's API.
- WAL and checkpoints — recovery model, sync modes, and admin routes.
- Result formats — the four response shapes.
- Rust guide — native API (what the server wraps).
- Queries — the query language the server exposes.
- Cookbook — scenario-based recipes, including backup-and-restore.
- Limitations → HTTP server — auth, TLS, parameters.
- Troubleshooting → Snapshots — 404, malformed file, version mismatch.
- Troubleshooting → WAL and checkpoints — missing WAL routes, checkpoint path errors, poisoned WALs.
- Troubleshooting → Server — port conflicts, connection issues.