v0.1.9 Governed Capability Expansion · Open Source · Available on GitHub

AI that reasons with you.
Execution that stays governed.

VoxeraOS is an open-source governed execution layer for AI — still alpha, built intentionally.
Vera handles conversational reasoning and planning. Real actions pass through VoxeraOS: policy-checked, approval-gated, artifact-backed.
A one-person evenings/weekends project. Real framework, real end-to-end demo, still evolving.

Real system, real execution

See VoxeraOS in action.

What you see below is actual runtime behavior — Vera preparing governed actions, VoxeraOS executing through queue, approvals, and artifacts.

Vera — conversational layer
VoxeraOS — execution + approvals

Why Voxera exists

AI can propose. Voxera governs.

Most AI systems connect reasoning directly to execution: LLM → tool call. That's powerful, but hard to trust. VoxeraOS introduces the boundary between the two.

🤖

Vera converses and plans

Vera is the conversational surface. She helps you think, investigate, shape intent, and prepare governed requests — but she cannot directly mutate the system.

⚙️

VoxeraOS governs execution

Every side-effecting action passes through the VoxeraOS queue runtime — evaluated against policy, approval-gated if required, and executed with a full artifact record.
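As a rough illustration of that policy step (not the actual VoxeraOS policy engine; every name below is a hypothetical stand-in), evaluation can be modeled as rules matched against a requested action, with deny as the default:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """One hypothetical policy rule: the action it covers and its verdict."""
    action: str   # e.g. "fs.write", "shell.run" (illustrative action names)
    verdict: str  # "allow", "deny", or "require_approval"

def evaluate(rules: list[Rule], action: str) -> str:
    """Return the verdict of the first matching rule; deny by default."""
    for rule in rules:
        if rule.action == action:
            return rule.verdict
    return "deny"  # default-deny: unknown actions never execute silently

rules = [
    Rule("fs.read", "allow"),
    Rule("fs.write", "require_approval"),
]
```

The default-deny fallback is the key property: an action the policy has never seen about can never slip through to execution.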

📋

Every action is provable

Jobs produce evidence artifacts: what was requested, what policy allowed, what executed, and what the outcome was. You can prove exactly what happened and why.

How it works

From intent to outcome. Five steps.

Every governed action follows the same path through the queue. No shortcuts, no surprises.

💬
Intent
📐
Policy
✅
Approval
⚙️
Execute
📋
Audit
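The five steps above can be sketched as a single function; this is an illustrative outline under assumed names, not the real VoxeraOS code path:

```python
def run_governed_job(intent: str, approved: bool) -> dict:
    """Walk a job through intent -> policy -> approval -> execute -> audit.

    All names here are hypothetical; the sketch only mirrors the flow
    described in the text.
    """
    evidence = {"intent": intent}

    # Policy: decide whether the action needs an approval gate.
    needs_approval = "write" in intent  # stand-in for a real policy engine
    evidence["policy"] = "require_approval" if needs_approval else "allow"

    # Approval: execution stops here until an operator explicitly allows it.
    if needs_approval and not approved:
        evidence["state"] = "pending_approval"
        return evidence

    # Execute: the only point where a side effect would actually happen.
    evidence["result"] = f"executed: {intent}"

    # Audit: the evidence dict itself is the artifact of what happened.
    evidence["state"] = "done"
    return evidence
```

Note that the evidence record is built at every step, not just at the end, so even a job paused at the approval gate leaves a trace.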

The queue is the execution boundary

AI should ask, not assume.

Natural language becomes a durable queue job. Jobs move through an explicit lifecycle. Every governed action produces evidence.

📥

Durable queue jobs

Intent becomes a queue job with an explicit lifecycle: inbox → pending → approval → running → done. Each state transition is tracked and recoverable.
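Taking that lifecycle literally, it can be modeled as a small transition table; the table and function names are illustrative assumptions, not VoxeraOS internals:

```python
# Hypothetical transition table for the lifecycle named in the text:
# inbox -> pending -> approval -> running -> done.
TRANSITIONS = {
    "inbox": {"pending"},
    "pending": {"approval", "running"},  # approval only when policy requires it
    "approval": {"running"},
    "running": {"done"},
    "done": set(),
}

def advance(state: str, nxt: str) -> str:
    """Move a job to the next lifecycle state, rejecting any shortcut."""
    if nxt not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {nxt}")
    return nxt
```

Because every transition goes through one checkpoint, a job can never jump from inbox straight to running, which is exactly the "no shortcuts" property the queue promises.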

✅

Approval gates that pause

When a job requires approval, execution stops. The job waits in pending approvals until an operator explicitly allows it. Nothing advances on its own.
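A minimal sketch of that pause semantics (all names are assumptions, not the real API): the gate holds the job, and nothing executes until an operator calls approve explicitly:

```python
class ApprovalGate:
    """Hypothetical gate: holds a job until an operator explicitly approves it."""

    def __init__(self, job_id: str):
        self.job_id = job_id
        self.approved = False
        self.operator = None

    def approve(self, operator: str) -> None:
        # Approval is an explicit operator act, recorded alongside the job.
        self.approved = True
        self.operator = operator

    def run(self, action):
        """Execute the action only if approved; otherwise keep waiting."""
        if not self.approved:
            return "pending_approval"  # nothing advances on its own
        return action()
```

The point of the design is that the default path is inert: absent an operator action, calling `run` any number of times still executes nothing.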

📋

Artifact-backed outcomes

Every completed job produces evidence: the plan, what policy allowed, what executed, and the outcome. Provable. Inspectable. Permanent.
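The evidence described above can be pictured as one serializable record per job; the field names here are assumptions chosen to mirror the four questions the text lists, not the actual artifact schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Artifact:
    """Hypothetical evidence record for one completed job."""
    job_id: str
    plan: str             # what was requested
    policy_decision: str  # what policy allowed
    executed: str         # what actually ran
    outcome: str          # what resulted

    def to_json(self) -> str:
        # A frozen record with stable key order is easy to inspect and diff.
        return json.dumps(asdict(self), sort_keys=True)
```

Freezing the dataclass and serializing with sorted keys keeps the record immutable in memory and byte-stable on disk, which is what makes "provable, inspectable, permanent" more than a slogan.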

Architecture

Two distinct surfaces.
One trust boundary.

VoxeraOS separates the conversational assistant from the governed execution runtime. They are deliberately different things.

01
Vera — the conversational surface

Vera converses, investigates, plans, and prepares governed handoffs. She shapes intent and explains results. She does not directly execute side-effecting system actions.

02
VoxeraOS — the governed execution runtime

The durable queue runtime that evaluates requests against policy, pauses for approval when required, executes jobs, and produces artifact-backed evidence of every outcome.

03
Your Linux — unchanged

Ubuntu, Fedora, or any distribution. VoxeraOS runs alongside it — never replacing your system foundations.

Built on

Runtime: Python 3.10+
Interface: CLI + Web Panel
AI Provider: OpenRouter (officially tested path · Gemini 3 Flash minimum)
Isolation: Rootless containers

Meet Vera →

Progress · v0.1.9

Where we are, where we're going.

Stability before scale. Operator trust before expanding capabilities. This is open-source alpha — built intentionally, iterated openly.

Current · v0.1.9 — Governed Capability Expansion

Governed writing and code/script draft lanes, investigation handoff, evidence-grounded results, and filesystem expansion

Vera can now draft documents, notes, and scripts as governed previews, and investigation results route directly into governed writing. Job outcomes surface with full evidence backing. Live weather arrives via Brave Search, and a voice foundation sits behind feature flags. Filesystem coverage expands with find, grep, tree, copy, move, and rename classifiers, with session continuity across drafts, files, and job results. The release is hardened, documented, and available on GitHub.

Next directions

Hardening and operator experience

Richer recovery inspection, degradation-aware runtime, and broader operator tooling. Broad direction — details will evolve.

On the horizon

Voice-first and AI-first expansion

Voice loop, signed skills, and platform expansion. These are directional — the specifics will be shaped by what the project learns along the way.

View full roadmap →

The story so far

Built as a one-person project. Now open source.

VoxeraOS started as a proof of concept — one person, evenings and weekends, exploring what's missing from AI systems today. It's now an open-source alpha with a real framework and working end-to-end demo. v0.1.9 is working alpha software, not a finished platform. Some implementations are still rough around the edges.

🧪

Started as a proof of concept

The first versions tested a single idea: can AI execution be governed by default, with boundaries that hold? That question became a working system.

🌙

Evenings and weekends

This is a one-person project built in personal time. That shapes the pace — deliberate, focused, and honest about what's ready and what isn't.

🌱

Open source and evolving

The framework works. The end-to-end demo is real. Many things will change over time. Try it, inspect it, report back — contributions and feedback are welcome.

Open source · provider support

Open source now. Honest about what's tested.

VoxeraOS is open source and available on GitHub. The repo is the product — inspect it, run it, build on it. Here's what's tested and where things stand with provider support.

🔓

Open source on GitHub

The full codebase is public. Transparency, inspectability, and collaboration are how governed AI software should be built. Clone it, read it, try it.

🛣

OpenRouter is the tested path

OpenRouter is the only officially tested and fully built provider path so far. Gemini 3 Flash is the current minimum supported model. Other OpenRouter models may work — try them and report back.

🔧

Some things are still rough

This is alpha software from a one-person project. Some implementations are still rough around the edges. Some choices are transitional and will change. That's the honest state of things.

Why this exists

Grounded in what AI is missing today.

This project is grounded in present-day gaps in AI systems: trust, review, execution boundaries, and proof of outcome. Not speculative hype — concrete problems that need solving now.

🔍

Trust through boundaries

Trust comes from structural limits on what AI can do without permission — not from personality, tone, or reassurance.

📋

Evidence-backed outcomes

Every governed action should produce proof: what was requested, what was allowed, what ran, and what resulted. No black boxes.

👤

Operator control

The person running the system should always be in charge. Approval gates, policy boundaries, and clear audit trails make that real.

Open Source · Available Now

Get VoxeraOS.

VoxeraOS is open source and available on GitHub right now. Clone the repo, follow the README, and run your first governed job in minutes. Feedback, issues, and contributions are welcome.

Open-source alpha · one-person project · still evolving