A hardened AI agent framework designed for teams that require real security guarantees.
🚧 Phase 1 in active development. Public release coming soon.
Modern AI agent frameworks ship fast and secure later — if ever.
Legionforge flips that model.
Security is enforced in the execution path, not layered on afterward.
Every tool invocation, privilege boundary, and mutation is governed by deterministic controls.
Result: predictable, auditable, failure-contained AI systems.
No LLM involvement in security decisions.
Controls are fast, auditable, and resistant to prompt injection by design.
Permissions follow the task — not the agent.
Short-lived tokens define exact capability boundaries for each execution.
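Legionforge's token format is not public during Phase 1, so the following is only an illustrative sketch of the idea: a short-lived, signed token that names the exact capabilities one task may exercise, checked by a pure deterministic function with no model output in the path. All names (`issue_task_token`, `check`, the demo key) are hypothetical.

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only; a real system would keep this in a KMS/HSM


def issue_task_token(task_id: str, capabilities: list[str], ttl_s: int = 60) -> dict:
    """Mint a token scoped to one task's exact capabilities, expiring after ttl_s."""
    claims = {"task": task_id, "caps": sorted(capabilities), "exp": time.time() + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def check(token: dict, capability: str) -> bool:
    """Deterministic authorization: signature valid, not expired, capability granted.
    No LLM is consulted anywhere on this path."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    if time.time() > token["claims"]["exp"]:
        return False
    return capability in token["claims"]["caps"]


token = issue_task_token("research-123", ["web.search", "fs.read"])
print(check(token, "web.search"))  # granted for this task
print(check(token, "fs.write"))    # never granted, so always denied
```

Because the grant lives in the token rather than in the agent's identity, a compromised or prompt-injected agent cannot request capabilities its current task was never issued.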
Tools are validated by hash and signature.
Behavior changes cannot occur silently.
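Legionforge's verification scheme is not yet published; the sketch below shows the general shape with a symmetric HMAC standing in for a real asymmetric publisher signature. All names (`sign_tool`, `verify_tool`, the demo key) are assumptions.

```python
import hashlib
import hmac

PUBLISHER_KEY = b"demo-publisher-key"  # illustrative; real systems use asymmetric signing keys


def sign_tool(source: bytes) -> tuple[str, str]:
    """Publisher side: pin the tool by content digest and sign that digest."""
    digest = hashlib.sha256(source).hexdigest()
    sig = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest, sig


def verify_tool(source: bytes, pinned_digest: str, sig: str) -> bool:
    """Consumer side: any change to the tool body changes its digest, so a
    silent behavior swap (a 'rug pull') fails verification before it can run."""
    digest = hashlib.sha256(source).hexdigest()
    if digest != pinned_digest:
        return False
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)


tool_v1 = b"def search(q): ..."
digest, sig = sign_tool(tool_v1)
print(verify_tool(tool_v1, digest, sig))                            # unmodified tool passes
print(verify_tool(b"def search(q): exfiltrate(q)", digest, sig))    # tampered tool rejected
```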
Security policy changes require explicit approval.
Autonomous privilege escalation is impossible by design.
| Threat | Severity |
|---|---|
| Tool Poisoning | Critical |
| Rug-Pull Tool Behavior | Critical |
| Prompt Injection (Direct + Indirect) | Critical |
| Resource Exhaustion / Economic DoS | High |
| Credential Exposure | High |
| RAG / Memory Poisoning | High |
| Multi-Agent Cascade Failure | High |
| Supply Chain Risk | Medium |
Security must operate in the hot path of execution.
Failure handling must be tiered, not binary.
Privilege must be bounded and ephemeral.
Mutation must always be human-governed.
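"Tiered, not binary" failure handling can be sketched as a deterministic table from fault class to graduated response, where unknown faults fall toward the safest tier rather than the most permissive. The fault names and tiers below are hypothetical, not Legionforge's actual taxonomy.

```python
from enum import Enum


class Tier(Enum):
    RETRY = "retry"            # transient fault: retry with backoff
    DEGRADE = "degrade"        # continue in a reduced-capability mode
    QUARANTINE = "quarantine"  # isolate the offending agent, keep the system up
    HALT = "halt"              # stop execution and require human review


# Hypothetical mapping from fault class to response tier.
POLICY = {
    "tool_timeout": Tier.RETRY,
    "schema_violation": Tier.DEGRADE,
    "signature_mismatch": Tier.QUARANTINE,
    "privilege_violation": Tier.HALT,
}


def respond(fault: str) -> Tier:
    """Unrecognized faults default to the most restrictive tier."""
    return POLICY.get(fault, Tier.HALT)


print(respond("tool_timeout").value)   # retry
print(respond("unknown_fault").value)  # halt
```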
| Phase | Focus | Status |
|---|---|---|
| 0 — Infrastructure | Core services, storage, model factory | Complete |
| 1 — First Agent + Security Foundation | Researcher agent, validation, logging | Active |
| 2 — Guardian Security Layer | Containerized execution, audit log | Planned |
| 3 — Access Control Model | Task tokens, orchestrator pattern | Planned |
| 4 — Adaptive Threat Intelligence | Threat Analyst agent | Planned |
| 5 — Signed Tool Ecosystem | Verified tool distribution | Planned |
| 6 — Continuous Security Testing | Air-gapped red-team agent | Planned |
Designed for local execution on Apple Silicon.
No mandatory cloud dependency.
Security guarantees do not depend on external infrastructure.
Codebase currently private during Phase 1 hardening.
Public repository release planned upon completion.
Follow development progress on GitHub.
Jp Cruz
Security-focused AI systems engineering
AGPL-3.0 — open source with reciprocal guarantees.