Veritheia Glossary
Critical Architectural Concepts
Neurosymbolic Architecture, Transcended
The fundamental innovation that differentiates Veritheia from all legacy systems.
Traditional symbolic AI systems encode rules in formal languages—Prolog predicates, LISP expressions, Python functions. These systems require programmers to translate user understanding into code. Traditional neurosymbolic systems combine neural networks with these coded symbolic rules, creating hybrid architectures where the symbolic component remains in the realm of formal programming languages.
Veritheia transcends this limitation. In our neurosymbolic transcended architecture:
- The Symbolic Component: The user’s natural language framework—their research questions, definitions, assessment criteria, theoretical orientation—expressed in their own words, not code
- The Neural Component: Large language models that interpret and apply these natural language frameworks with semantic understanding
- The Transcendence: The symbolic system is no longer coded by programmers but authored by users in natural language, making every user a symbolic system designer
This transcendence is revolutionary because:
- Users author their own symbolic systems without knowing programming
- The symbolic framework is unique to each journey, not hardcoded universally
- Natural language becomes the symbolic representation, interpreted by neural understanding
- Every user becomes a knowledge engineer through their questions and definitions
Example:
- Traditional Symbolic:
relevance(Paper, Topic) :- contains(Paper, keyword(Topic)).
- Neurosymbolic: Neural network + the Prolog rule above
- Neurosymbolic Transcended: User writes “Papers are relevant if they provide empirical evidence about how LLMs detect zero-day exploits in production environments”; this natural language IS the symbolic rule, interpreted by the LLM
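The contrast above can be sketched in a few lines. This is an illustrative Python sketch, not Veritheia's actual code (the system is .NET-based); `llm` is a hypothetical callable standing in for the cognitive system.

```python
# Illustrative sketch: a coded symbolic rule vs. a user-authored
# natural-language rule interpreted by an LLM.

def relevance_coded(paper_text: str, topic_keyword: str) -> bool:
    """Traditional symbolic rule: brittle keyword containment."""
    return topic_keyword.lower() in paper_text.lower()

def relevance_transcended(paper_text: str, user_rule: str, llm) -> bool:
    """The user's sentence IS the symbolic rule; the LLM interprets it.
    `llm` is a hypothetical callable returning 'yes' or 'no'."""
    prompt = (
        f"Rule: {user_rule}\n"
        f"Paper: {paper_text}\n"
        "Does the paper satisfy the rule? Answer yes or no."
    )
    return llm(prompt).strip().lower() == "yes"

# The user-authored rule from the example above:
rule = ("Papers are relevant if they provide empirical evidence about "
        "how LLMs detect zero-day exploits in production environments")
```

Note that the coded rule can only match surface keywords, while the natural-language rule carries the user's full semantic intent to the interpreter.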
Formation Through Authorship
The accumulated intellectual capacity that develops through structured engagement with documents projected through user-authored frameworks.
Formation is not:
- Information consumed from AI summaries
- Knowledge transferred from system to user
- Insights generated by the system
Formation is:
- Understanding developed through engagement with systematically processed documents
- Intellectual capacity built through making decisions (inclusion/exclusion)
- Patterns recognized through viewing documents in your projection space
- Synthesis authored by connecting documents through your framework
Authorship means:
- You provide the questions that cannot be overridden
- You define the terms that govern assessment
- You set the criteria that determine relevance
- You write the connections between documents
- The system CANNOT change your framework—it can only apply it
Mechanical Orchestration
The systematic, deterministic application of user-authored frameworks to EVERY document without exception or selective judgment.
Mechanical means:
- No AI discretion about which documents to process
- No selective attention or prioritization
- No skipping documents deemed “unimportant”
- Every document gets identical treatment through the user’s framework
Orchestration means:
- Coordinating the processing pipeline
- Ensuring complete coverage
- Maintaining processing order
- Guaranteeing systematic application
This is critical because it ensures:
- Fairness: All documents receive equal treatment
- Completeness: No document is overlooked
- Consistency: The same framework applies throughout
- Sovereignty: User’s framework governs without AI override
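The mechanical guarantee reduces to a very simple loop. A minimal sketch (illustrative, not the actual Process Engine): `assess` stands in for the cognitive-system call, and the structure of the loop itself is what rules out discretion, skipping, or reprioritization.

```python
# Mechanical orchestration: every document passes through the same
# user-authored framework, in order, with no selection by the system.

def orchestrate(documents, framework, assess):
    """Apply `assess(doc, framework)` to EVERY document, preserving order."""
    results = []
    for doc in documents:                      # deterministic order
        results.append(assess(doc, framework))
    assert len(results) == len(documents)      # complete coverage: none skipped
    return results
```

The point of the sketch is what is absent: there is no branch that filters, ranks, or drops a document before the framework is applied.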
Journey Projection Space
A journey-specific intellectual environment where documents are transformed according to user-authored frameworks.
Documents don’t exist generically in Veritheia. They exist only as projections within journeys. The same PDF becomes:
- Segmented by methodology sections in a systematic review journey
- Segmented by learning objectives in an educational journey
- Segmented by legal precedents in a policy analysis journey
Each projection includes:
- Segmentation: Documents divided according to user’s needs
- Embedding: Vectors generated with user’s vocabulary as context
- Assessment: Measurements against user’s specific criteria
- Organization: Structure reflecting user’s intellectual framework
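A toy sketch of projection, assuming hypothetical names throughout: the same text yields different segments and different assessments depending on the journey's segmenter and criteria. (Real projection also produces embeddings with the user's vocabulary as context; that is omitted here.)

```python
# The same document projected differently in two journeys.

def project(document_text, segmenter, criteria):
    """A projection = journey-specific segments plus per-criterion assessments."""
    segments = segmenter(document_text)
    return [{"text": s,
             "assessment": {name: fn(s) for name, fn in criteria.items()}}
            for s in segments]

text = "Methods: survey.\n\nResults: 42% detection rate."

# Journey A: a systematic-review journey splits on blank lines (a toy
# stand-in for methodology-aware segmentation) and assesses empirical content.
review = project(text,
                 lambda t: t.split("\n\n"),
                 {"empirical": lambda s: "%" in s})

# Journey B: an educational journey keeps the whole text as one segment.
lesson = project(text,
                 lambda t: [t],
                 {"introductory": lambda s: len(s) < 200})
```

The document itself never changes; only its projection differs between journeys.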
User Partition Sovereignty
The architectural guarantee that intellectual work remains isolated and sovereign through database-level partition boundaries.
Every user’s data lives in their partition with:
- Composite Primary Keys: (UserId, Id) ensuring partition isolation
- No Cross-Partition Queries: Database schema prevents accessing other users’ data
- Explicit Bridges: Sharing requires conscious creation of auditable bridges
- Ownership by Default: Private unless explicitly shared
This is not a privacy policy—it’s architectural enforcement through PostgreSQL constraints.
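The composite-key idea can be modeled in a few lines. This is a toy in-memory model for illustration only; in Veritheia the enforcement lives in PostgreSQL constraints, not application code.

```python
# Toy model of partition sovereignty: rows are keyed by (user_id, id),
# so every read must name the partition it is reading from.

class PartitionedStore:
    def __init__(self):
        self._rows = {}                      # composite key: (user_id, id)

    def put(self, user_id, row_id, value):
        self._rows[(user_id, row_id)] = value

    def get(self, user_id, row_id):
        # A lookup with a different user_id cannot reach the row.
        return self._rows.get((user_id, row_id))

store = PartitionedStore()
store.put("alice", 1, "alice's journey")
store.get("alice", 1)   # found
store.get("bob", 1)     # None: bob's partition has no row 1
```

Because the user identifier is part of the key itself, "query another user's data" is not a forbidden operation so much as an inexpressible one.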
Technical Implementation Terms
Direct DbContext Usage
Using Entity Framework Core’s DbContext directly without repository abstraction layers.
Why this matters: The DbContext already IS a repository pattern implementation. Adding another repository layer would be abstracting an abstraction, violating our principle that PostgreSQL with its constraints IS our domain model.
Semantic Boundary Detection
Document segmentation based on meaning and structure rather than arbitrary size chunks.
Not: Splitting every 512 tokens regardless of content
Instead: Recognizing natural boundaries—paragraph ends, section breaks, topic shifts—ensuring segments maintain semantic coherence
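The difference between the two approaches is easy to show. A minimal heuristic sketch: real boundary detection also considers section breaks and topic shifts, but even paragraph-level splitting preserves coherence that fixed-size chunking destroys.

```python
# Fixed-size chunking vs. a minimal semantic-boundary heuristic.

def chunk_fixed(text, size=512):
    """Naive chunking: split every `size` characters regardless of content."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_semantic(text):
    """Split at paragraph boundaries so each segment stays coherent."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]
```

Fixed-size chunking will happily cut a sentence, or a methodology section, in half; paragraph-boundary splitting cannot.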
Process Engine
The runtime that mechanically orchestrates the application of user-authored frameworks through neural understanding.
Responsibilities:
- Execute processes within journey boundaries
- Ensure all documents receive identical treatment
- Maintain user partition isolation
- Coordinate between Knowledge Database and Cognitive System
Cognitive System Adapter
The interface to large language models that interprets and applies user-authored symbolic frameworks.
Key principle: The adapter ONLY measures and assesses—it never generates insights or makes decisions. It interprets the user’s framework and applies it consistently.
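The measure-only contract can be expressed as an interface whose only method returns a measurement. This is a hypothetical sketch of that contract (the real adapter is a .NET component calling an LLM); the keyword-matching implementation is a stand-in for illustration.

```python
# A measure-only adapter contract: it can assess against the user's
# framework and nothing else, so it structurally cannot generate
# insights or make inclusion decisions.

from abc import ABC, abstractmethod

class CognitiveAdapter(ABC):
    @abstractmethod
    def assess(self, segment: str, framework: str) -> float:
        """Return a relevance measurement in [0, 1]; never a decision."""

class KeywordAdapter(CognitiveAdapter):
    """Stand-in implementation; a real adapter interprets via an LLM."""
    def assess(self, segment, framework):
        words = framework.lower().split()
        hits = sum(w in segment.lower() for w in words)
        return hits / len(words) if words else 0.0
```

Deciding what to do with the measurement (include, exclude, revisit) remains with the user, outside the adapter.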
Domain Concepts
Journey
A specific instance of a user engaging with documents through a process with their authored framework.
Journey = User + Persona + Process + Framework + Documents + Time
Each journey:
- Creates a unique projection space
- Maintains its own context
- Accumulates formation
- Cannot be transferred (insights are meaningful only within their journey)
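The identity "Journey = User + Persona + Process + Framework + Documents + Time" can be written down directly as an immutable record. Field names here are illustrative, not the actual schema.

```python
# The journey identity as a frozen record: once created, its constituent
# parts cannot be swapped out, mirroring non-transferability.

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Journey:
    user_id: str
    persona: str
    process: str
    framework: str          # the user-authored natural-language framework
    document_ids: tuple     # documents projected within this journey
    started_at: datetime

j = Journey("u1", "researcher", "systematic-review",
            "Papers are relevant if they provide empirical evidence...",
            ("doc-1", "doc-2"), datetime(2024, 1, 1))
```

Freezing the record is a small structural echo of the larger point: a journey's meaning is bound to this exact combination of parts.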
Persona
A domain-specific intellectual context representing how a user approaches problems in different roles.
Examples:
- Researcher Persona: Academic vocabulary, investigation methods
- Student Persona: Learning-focused patterns, foundational concepts
- Professional Persona: Industry terminology, practical criteria
Personas are not profiles—they’re evolving representations of intellectual style within domains.
Intellectual Sovereignty
The inviolable principle that users own their intellectual work—their questions, frameworks, formations, and authored understanding.
Enforced through:
- Architectural design (partition boundaries)
- Mechanical orchestration (no AI override)
- Formation through authorship (user creates understanding)
- Anti-surveillance structure (no cross-user analytics)
Key Distinctions
Internal vs External (for testing/mocking)
Internal (NOT mockable):
- PostgreSQL database (it IS the domain model)
- Entity Framework DbContext
- Process Engine
- Domain services
External (mockable):
- Large Language Models
- File storage (S3, Azure Blob, local filesystem)
- Third-party APIs
- Email services
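The internal/external split determines where a test double may be injected. A hedged sketch with illustrative names: only the external boundary (the LLM call) is stubbed, while the internal logic runs for real.

```python
# Mocking boundary: stub the LLM (external), exercise the logic (internal).

def run_assessment(documents, framework, llm_call):
    """Internal logic under test; `llm_call` is the only external dependency."""
    return {doc: llm_call(f"{framework}\n{doc}") for doc in documents}

# In a test, replace only the external boundary with a stub:
fake_llm = lambda prompt: 0.9
results = run_assessment(["paper-1", "paper-2"], "relevance rule", fake_llm)
```

Stubbing the database or the Process Engine, by contrast, would test a fiction, since PostgreSQL's constraints ARE part of the domain model being verified.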
MVP vs Future
MVP (Launch demonstration):
- Single user with multiple personas
- Two journey types: Literature review, Lesson plan creation
- Monolithic deployment
- No cross-user sharing
Future (architectural capability):
- Multi-user collaboration
- Federation across instances
- Distributed deployment
- Journal sharing with attribution
This glossary is a living document. Terms should be added as they emerge and refined as understanding deepens. The critical insight remains: Veritheia’s neurosymbolic transcended architecture enables users to author their own symbolic systems in natural language, making formation through authorship possible at scale.