Trust Model
How Vera turns raw system data into inspectable, explainable trust signals
What Vera Actually Shows
Vera doesn't judge gameplay. Vera doesn't declare anyone guilty.
Vera simply records what software was running — and makes that evidence transparent.
So when legitimacy matters, the facts already exist.
From Data to Trust
Every piece of evidence on a Vera profile follows the same path — from raw observation to explainable finding. Nothing is hidden, nothing is assumed.
1. Collect: The Vera agent observes processes, drivers, and system posture during gameplay.
2. Detect: Game sessions are identified by matching activity against curated detection rules.
3. Analyze: Evidence is evaluated against the active trust model; every data point is preserved.
4. Publish: Results appear on the creator's public profile, fully inspectable and linked to evidence.
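The four-stage path above can be sketched as a simple pipeline. Everything here is illustrative: the names (`Observation`, `collect`, `detect`, and so on), the rule and trust-model shapes, and the stubbed data are our assumptions, not Vera's actual agent internals or rule format.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One raw data point captured during a session (hypothetical shape)."""
    kind: str                 # "process", "driver", or "posture"
    name: str
    detail: dict = field(default_factory=dict)

def collect() -> list[Observation]:
    """Stage 1: gather raw system observations (stubbed sample data)."""
    return [
        Observation("process", "game.exe", {"publisher": "Example Studio"}),
        Observation("driver", "examplefilter.sys", {"signed": True}),
        Observation("posture", "secure_boot", {"enabled": True}),
    ]

def detect(observations, session_rules):
    """Stage 2: match activity against curated session-detection rules."""
    return [rule["game"] for rule in session_rules
            if any(o.kind == "process" and o.name == rule["process"]
                   for o in observations)]

def analyze(observations, trust_model):
    """Stage 3: evaluate evidence against the active trust model.
    Findings are derived, but every raw data point is preserved."""
    findings = [{"observation": o, "severity": trust_model[o.name]}
                for o in observations if o.name in trust_model]
    return {"evidence": observations, "findings": findings}

def publish(session_games, report):
    """Stage 4: shape the result for a public, inspectable profile."""
    return {"sessions": session_games,
            "findings": report["findings"],
            "evidence_count": len(report["evidence"])}

# Run the pipeline end to end on the stubbed data.
obs = collect()
games = detect(obs, [{"process": "game.exe", "game": "Example Game"}])
profile = publish(games, analyze(obs, {"examplefilter.sys": "info"}))
```

The key design point the sketch tries to mirror is that `analyze` never discards input: findings are layered on top of the evidence, so the published profile can always link back to what was observed.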
Evidence Levels
1. Observed
Captured directly from the system during a verified session. This is the foundation of trust: raw data, not interpretations.
Examples:
- Process names, paths, and publishers
- Kernel driver inventory and signatures
- Secure Boot, HVCI, and testsigning posture
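One way to picture an "Observed" data point is a record that stores raw values verbatim alongside provenance metadata. This is a minimal sketch under our own assumptions; the record shape, field names, and the content hash for later verifiability are illustrations, not a documented Vera format.

```python
import hashlib
import json
from datetime import datetime, timezone

def observed_record(kind: str, name: str, attributes: dict) -> dict:
    """Wrap a raw observation with provenance metadata.
    Raw attribute values are stored verbatim; nothing is interpreted here."""
    payload = {
        "kind": kind,                 # e.g. "process", "driver", "posture"
        "name": name,
        "attributes": attributes,     # kept exactly as captured
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash over a canonical serialization lets the record be
    # checked for tampering when it is inspected later (our illustration).
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["sha256"] = hashlib.sha256(canonical).hexdigest()
    return payload

rec = observed_record("driver", "examplefilter.sys",
                      {"signed": True, "publisher": "Example Corp"})
```

Storing the raw attributes rather than a verdict is what keeps this level interpretation-free: severity and meaning are added only at the Correlated level.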
2. Correlated
Derived by applying curated rules to observed evidence. Confidence scales with rule quality and catalog coverage.
Examples:
- Known-risk driver matches
- Game session detection via catalog rules
- Finding severity assigned under the versioned threat model
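As an illustration of the correlation step, here is a minimal sketch of matching a driver inventory against a known-risk catalog. The catalog entries, field names, and `correlate_drivers` function are all hypothetical; the point is only that each finding carries its rule's confidence and keeps the raw observation attached.

```python
# Hypothetical known-risk catalog; real catalogs would be curated and versioned.
KNOWN_RISK_DRIVERS = {
    "vulnerablesample.sys": {"risk": "high", "reason": "known exploitable driver"},
    "testtool.sys": {"risk": "medium", "reason": "debugging/hooking tool"},
}

def correlate_drivers(driver_inventory: list[dict],
                      catalog: dict = KNOWN_RISK_DRIVERS) -> list[dict]:
    """Derive findings by matching observed drivers against curated rules.
    The original observation is preserved inside each finding."""
    findings = []
    for drv in driver_inventory:
        rule = catalog.get(drv["name"].lower())  # case-insensitive match
        if rule:
            findings.append({"driver": drv, **rule})
    return findings

inventory = [{"name": "VulnerableSample.sys", "signed": True},
             {"name": "disk.sys", "signed": True}]
findings = correlate_drivers(inventory)
# findings holds one high-risk match, for VulnerableSample.sys
```

Because the match is only as good as the catalog, coverage gaps produce missed findings rather than false accusations, which is consistent with the "confidence scales with catalog coverage" caveat above.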
What Vera Can't Prove
× Vera cannot prove the absence of cheating; it can only show what was observed
× Vera cannot detect every bypass method; visibility has limits
× Vera cannot deliver verdicts; evidence is presented for human interpretation
× Vera cannot replace human judgment; context always matters
How to Read a Vera Profile
✓ Review evidence in context; a single session tells only part of the story
✓ Look for patterns across multiple game sessions over time
✓ Consider the full picture: processes, drivers, and integrity posture together
✓ Treat evidence as a starting point for conversation, not a conclusion
