Trust Model
How Vera turns raw system data into inspectable, explainable trust signals
From Data to Trust
Every piece of evidence on a Vera profile follows the same path — from raw observation to explainable finding. Nothing is hidden, nothing is assumed.
1. Collect: The Vera agent observes processes, drivers, and system posture during gameplay.
2. Detect: Game sessions are identified by matching activity against curated detection rules.
3. Analyze: Findings are generated by evaluating evidence against the active threat model.
4. Publish: Results appear on the creator's public profile, fully inspectable and linked to evidence.
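The pipeline can be pictured as four plain transformations over evidence records. The sketch below is illustrative only; the type names, fields, and the stand-in detection rule are assumptions for this page, not Vera's actual schema or API.

```ts
// Sketch of the Collect → Detect → Analyze → Publish flow.
// All names here are illustrative, not Vera's real interfaces.

interface Observation {
  kind: "process" | "driver" | "posture";
  capturedAt: string;            // ISO-8601 timestamp from the session
  data: Record<string, string>;  // e.g. { name, path, publisher }
}

interface Session {
  gameId: string;                // matched by a curated detection rule
  observations: Observation[];
}

interface Finding {
  ruleId: string;                // the rule that produced this finding
  severity: "info" | "warning" | "high";
  evidence: Observation[];       // every finding links back to raw data
}

// Collect: the agent captures raw observations during gameplay.
function collect(): Observation[] {
  return [{
    kind: "process",
    capturedAt: new Date().toISOString(),
    data: { name: "game.exe", path: "C:\\Games\\game.exe", publisher: "Example Studio" },
  }];
}

// Detect: group observations into game sessions via catalog rules.
function detect(obs: Observation[]): Session[] {
  const isGame = (o: Observation) => o.data.name === "game.exe"; // stand-in rule
  return [{ gameId: "example-game", observations: obs.filter(isGame) }];
}

// Analyze: evaluate each session against the active threat model.
function analyze(session: Session): Finding[] {
  return session.observations.map((o) => ({
    ruleId: "demo-rule-1",
    severity: "info",
    evidence: [o],
  }));
}

// Publish: results land on the public profile, fully inspectable.
function publish(findings: Finding[]): void {
  console.log(JSON.stringify(findings, null, 2));
}

for (const session of detect(collect())) {
  publish(analyze(session));
}
```

Modeling each stage as a plain function over explicit records is one way to keep every intermediate result inspectable, which is the property the pipeline is meant to guarantee.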
Evidence Levels
1. Observed
Captured directly from the system during a verified session. This is the foundation of trust: raw data, not interpretations.
Examples:
- Process names, paths, and publishers
- Kernel driver inventory and signatures
- Secure Boot, HVCI, and testsigning posture
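To make the "raw data, not interpretations" distinction concrete, here is one plausible shape for observed records in each example category. The field names are assumptions made for illustration, not Vera's published schema.

```ts
// Illustrative record shapes for level-1 "Observed" evidence.
// Field names are assumptions, not Vera's actual schema.

// A process seen during the session, with its publisher if signed.
interface ProcessObservation {
  name: string;        // e.g. "game.exe"
  path: string;        // full on-disk path as observed
  publisher?: string;  // signing publisher, when the binary is signed
}

// A kernel driver from the loaded-driver inventory.
interface DriverObservation {
  name: string;
  signed: boolean;     // does the driver carry a valid signature?
  signer?: string;     // certificate subject, when signed
}

// Boot-time integrity posture of the machine.
interface PostureObservation {
  secureBoot: boolean;  // UEFI Secure Boot enabled
  hvci: boolean;        // Hypervisor-enforced Code Integrity enabled
  testSigning: boolean; // testsigning boot flag set
}

// Note what is absent: no severity, no verdict, no interpretation.
// Those appear only at the correlated level, where rules are applied.
const posture: PostureObservation = { secureBoot: true, hvci: true, testSigning: false };
console.log(posture);
```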
2. Correlated
Derived by applying curated rules to observed evidence. Confidence scales with rule quality and catalog coverage.
Examples:
- Known-risk driver matches
- Game session detection via catalog rules
- Finding severity based on threat model versions
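A correlated finding is essentially an observed record joined against a catalog entry. The sketch below, with an invented catalog and rule names, shows how a known-risk driver match might carry both its raw evidence and the threat model version along with it.

```ts
// Sketch of level-2 "Correlated" evidence: a curated rule applied to
// observed data. Catalog contents and names are invented for illustration.

interface DriverObservation {
  name: string;
  signed: boolean;
  signer?: string;
}

interface CatalogEntry {
  driverName: string;
  riskNote: string;             // why this driver is in the catalog
}

interface Correlation {
  entry: CatalogEntry;
  evidence: DriverObservation;  // the raw observation the match rests on
  threatModelVersion: string;   // severity depends on the model version
}

const catalog: CatalogEntry[] = [
  { driverName: "vulnerable.sys", riskNote: "known exploitable interface" },
];

// Confidence scales with rule quality: a match is only as good as the catalog.
function correlate(drivers: DriverObservation[], version: string): Correlation[] {
  return drivers.flatMap((d) => {
    const entry = catalog.find((e) => e.driverName === d.name);
    return entry ? [{ entry, evidence: d, threatModelVersion: version }] : [];
  });
}

console.log(correlate([{ name: "vulnerable.sys", signed: true }], "tm-2024.1"));
```

Because each match embeds the observation it rests on, a reader can always trace a correlated finding back to level-1 evidence.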
What Vera Can't Prove
- × Vera cannot prove the absence of cheating — only show what was observed
- × Vera cannot detect every bypass method — visibility has limits
- × Vera cannot deliver verdicts — findings are starting points, not conclusions
- × Vera cannot replace human judgment — context always matters
How to Read a Vera Profile
- ✓ Review evidence in context; a single session tells only part of the story
- ✓ Look for patterns across multiple game sessions over time
- ✓ Consider the full picture: processes, drivers, integrity posture, and findings together
- ✓ Treat findings as conversation starters backed by evidence, not accusations
