Project LOGOS
Learning Optimal G* of Systems
Phase 1 Complete - Now in Phase 2: Perception & UX
Building autonomous agents that reason with graphs, not words. A cognitive architecture using Neo4j and Milvus for causal reasoning and structured knowledge representation.
The Four Corners
What makes LOGOS different from other AI architectures
Non-Linguistic Cognition
Language models think in words. LOGOS thinks in structures. Knowledge is represented as graphs, not token sequences, enabling reasoning that doesn't depend on linguistic plausibility but on structural correctness.
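To make the contrast concrete, here is a minimal plain-Python sketch of the idea (illustrative only; LOGOS stores its graph in Neo4j, and every name below is hypothetical): a claim is "true" only if the corresponding edge exists in the structure, not because it sounds plausible.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """A typed edge in a knowledge graph: (subject) -[relation]-> (obj)."""
    subject: str
    relation: str
    obj: str

# A tiny structured fact base instead of a token sequence.
facts = {
    Edge("Door", "REQUIRES", "Key"),
    Edge("Key", "LOCATED_IN", "Drawer"),
}

def holds(subject: str, relation: str, obj: str) -> bool:
    """Structural lookup: a claim holds iff the edge exists in the graph."""
    return Edge(subject, relation, obj) in facts

print(holds("Door", "REQUIRES", "Key"))     # True  (verified structure)
print(holds("Door", "REQUIRES", "Hammer"))  # False (no such edge, however plausible)
```

The point of the sketch: correctness comes from graph membership, a checkable property, rather than from how likely a sentence sounds.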
Causal World Model
The agent maintains an explicit model of cause and effect. Actions have preconditions and consequences. Plans are validated against causal dependencies before execution, rather than generated and left to chance.
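A hedged sketch of what precondition-aware planning can look like (this is a generic backward-chaining toy, not LOGOS's actual planner; the action format and names are invented for illustration):

```python
# Each action declares preconditions ("pre") and effects ("add").
actions = {
    "unlock_door": {"pre": {"has_key"},   "add": {"door_unlocked"}},
    "take_key":    {"pre": {"at_drawer"}, "add": {"has_key"}},
    "goto_drawer": {"pre": set(),         "add": {"at_drawer"}},
}

def plan(goal: str, state: set, depth: int = 10):
    """Backward-chain from the goal: find an action whose effects include it,
    recursively satisfy its preconditions, and only return a validated chain.
    Returns an ordered list of action names, or None if no plan exists."""
    if goal in state:
        return []
    if depth == 0:
        return None
    for name, act in actions.items():
        if goal in act["add"]:
            steps = []
            for pre in act["pre"]:
                sub = plan(pre, state, depth - 1)
                if sub is None:
                    break
                steps += sub
                # Simplification: assume the sub-plan established `pre`.
                state = state | {pre}
            else:
                return steps + [name]
    return None

print(plan("door_unlocked", set()))  # ['goto_drawer', 'take_key', 'unlock_door']
```

Note the shape of the output: the plan is rejected outright if any precondition cannot be chained back to the current state, which is the "validated before execution" property the blurb describes.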
Evolving Self-Model
LOGOS models its own capabilities, limitations, and state. The agent knows what it can do, tracks what it has done, and updates its self-understanding as it learns. Introspection is built in, not bolted on.
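One way to picture a minimal self-model (an illustrative structure, not LOGOS internals; the capability names and failure threshold are assumptions): the agent records outcomes and revises what it believes it can do.

```python
class SelfModel:
    """Tracks what the agent believes it can do and what it has done."""

    def __init__(self):
        self.capabilities = {"navigate", "grasp"}
        self.history = []  # list of (action, succeeded) pairs

    def can(self, action: str) -> bool:
        return action in self.capabilities

    def record(self, action: str, succeeded: bool):
        self.history.append((action, succeeded))
        if not succeeded:
            # Repeated failure updates the self-model instead of being ignored.
            failures = sum(1 for a, ok in self.history
                           if a == action and not ok)
            if failures >= 3:
                self.capabilities.discard(action)

model = SelfModel()
print(model.can("grasp"))          # True
for _ in range(3):
    model.record("grasp", succeeded=False)
print(model.can("grasp"))          # False: the self-model has been revised
```

The introspection query (`can`) and the update rule live in the same structure, which is the "built in, not bolted on" point.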
Formal Verification
Every node, relationship, and plan is validated against SHACL constraints. The knowledge graph has a schema. Plans must satisfy preconditions. No hallucinations, just verified structure.
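This is not real SHACL, but a minimal plain-Python analogue of shape validation to illustrate the idea of constraint-checked nodes (shape definitions and node fields below are hypothetical):

```python
# A "shape" names the properties a node of a given type must carry.
shapes = {
    "Action": {"required": {"name", "preconditions", "effects"}},
}

def validate(node_type: str, node: dict) -> list:
    """Return the list of missing required properties (empty = conforms)."""
    shape = shapes.get(node_type, {"required": set()})
    return sorted(shape["required"] - node.keys())

good = {"name": "unlock_door", "preconditions": [], "effects": []}
bad  = {"name": "unlock_door"}

print(validate("Action", good))  # []
print(validate("Action", bad))   # ['effects', 'preconditions']
```

Actual SHACL validation would run a library such as pySHACL against an RDF graph and a shapes graph; this stand-in only shows the required-property idea that makes a node either conform or fail, with nothing in between.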
Latest from the Blog
Technical deep-dives on cognitive architectures and AI systems
Building the Foundation for a Non-Linguistic AI Mind: Two Weeks In
Part 4: A progress report on Project LOGOS after two weeks of focused development, covering Phase 2 milestones, infrastructure challenges, and engineering lessons from building a causal cognitive architecture.
Causal Planning Without Language Models
Part 3: Exploring backward chaining, hybrid causal graphs, and formal validation as alternatives to probabilistic LLM-based planning for autonomous agents.
Non-Linguistic Cognition: Why Graphs Matter
Part 2: Introducing Project LOGOS, a graph-based cognitive architecture that uses structured knowledge representation instead of token sequences for causal reasoning.