01. Why verification matters
Organizations using MindBank to inform programs, policy, training, or services need to trust that the insights they receive are authentic. At the same time, survivors should never be required to prove their experience in invasive, retraumatizing, or gatekeeping ways.
This framework meets both needs at once: a verification structure that is transparent to organizations, dignity-preserving for contributors, and overseen by an independent council.
Trust the contributor. Verify the system. Authenticity comes from the integrity of the platform, not from extracted proof of harm.
02. The three tiers
Each level earns a credibility badge that organizations can see when accessing aggregated insights or anonymized stories. Higher tiers do not mean "more believable" — they reflect the level of context the contributor has chosen to share.
Tier 1: Self-verification
The baseline for every contributor. Self-verification confirms only that the person submitting is a real human, has consented to platform terms, and is engaging once per identity.
Method: Secure digital identity verification, or phone/email verification. Identity data is held separately from story content and is never shared with organizations.
Tier 2: Supportive context
For contributors who choose to share supportive context. This signals to organizations that the contributor's broad experience category aligns with plausible indicators — without ever requiring detailed proof of any specific event.
Method: Optional submission of context such as community group involvement, participation in survivor-support networks, or referral from a partner organization.
Tier 3: Institutional confirmation
The highest tier, used primarily for research contracts, government policy work, and academic studies where institutional credibility is required.
Method: A trusted institutional partner (community organization, support service, healthcare provider, or accredited researcher) confirms the contributor is a known participant in their work — with the contributor's explicit, separately recorded consent.
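The three-tier badge model above can be sketched as a small data structure. This is a hypothetical illustration only: the tier names, enum values, and badge labels are assumptions for the sketch, not MindBank's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class VerificationTier(Enum):
    # Illustrative names; higher tiers reflect shared context, not believability.
    SELF_VERIFIED = 1        # baseline: real human, consented, one identity
    CONTEXT_SUPPORTED = 2    # optional supportive context shared
    PARTNER_CONFIRMED = 3    # institutional partner confirmation, with consent

@dataclass(frozen=True)
class CredibilityBadge:
    tier: VerificationTier

    @property
    def label(self) -> str:
        # The badge shown to organizations alongside aggregated insights.
        return {
            VerificationTier.SELF_VERIFIED: "Self-verified contributor",
            VerificationTier.CONTEXT_SUPPORTED: "Context-supported contributor",
            VerificationTier.PARTNER_CONFIRMED: "Partner-confirmed contributor",
        }[self.tier]

badge = CredibilityBadge(VerificationTier.PARTNER_CONFIRMED)
```

Modelling the badge as immutable (`frozen=True`) mirrors the policy that a tier reflects what the contributor chose to share at submission time, not a score the platform adjusts later.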
03. What contributors are never asked to provide
- Documentation of any specific traumatic event
- Medical, legal, or police records
- Names of perpetrators or third parties
- Photographs or images of injury or evidence
- Detailed timelines or specific locations
- Validation from family, friends, or community members
A contributor's word, supported by platform-level safeguards, is the standard.
04. AI plus human oversight
MindBank uses a layered review model. AI handles pattern detection at scale; humans handle judgment.
AI's role
- Flagging duplicate or near-duplicate submissions across accounts
- Identifying language patterns associated with platform abuse or coordinated inauthentic behaviour
- Detecting personal identifiers that need to be removed before storage
- Surfacing distress markers so support resources can be offered
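One way the duplicate-flagging step above could work is word-shingle Jaccard similarity: texts that share a large fraction of short word sequences are flagged for human review. The function names, the shingle size, and the 0.8 threshold are illustrative assumptions, not the platform's actual method.

```python
import re

def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word sequences after light normalization."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Similarity of two shingle sets: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_near_duplicate(s1: str, s2: str, threshold: float = 0.8) -> bool:
    # Only raises a flag for a human moderator; per the policy above,
    # AI is never the final decision-maker on a story or account.
    return jaccard(shingles(s1), shingles(s2)) >= threshold
```

A flag from this check would route the pair of submissions to a trained moderator rather than trigger any automatic action.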
Human moderators' role
- Reviewing every AI flag with trauma-informed training
- Making final decisions on account standing and story acceptance
- Outreach to contributors flagged for distress, with a survivor-led approach
- Hearing first-tier appeals before escalation to the Ethics Council
AI is never the final decision-maker on a contributor's account or story.
05. Anonymous proof-of-experience tokens
The token system, an emerging feature currently in pilot, allows trusted partners (e.g., a community support service) to confirm to MindBank that a contributor is a known participant in a relevant program — without revealing the contributor's identity to MindBank or to organizations accessing aggregated insights.
This creates a credentialing layer that adds research-grade credibility while preserving complete contributor anonymity.
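A minimal sketch of how such a token flow could work, assuming a per-partner key registered with MindBank out of band. All function names are hypothetical; a production design would likely use blind signatures so that even the issuing partner cannot later link a token back to a specific person.

```python
import hashlib
import hmac
import secrets

def issue_token(partner_key: bytes) -> tuple:
    """Partner side: mint a token for a known program participant.

    The nonce is random, so the token itself carries no identity data."""
    nonce = secrets.token_hex(16)
    tag = hmac.new(partner_key, nonce.encode(), hashlib.sha256).hexdigest()
    return nonce, tag

def verify_token(partner_key: bytes, nonce: str, tag: str) -> bool:
    """MindBank side: confirm the token was minted by the registered partner."""
    expected = hmac.new(partner_key, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

partner_key = secrets.token_bytes(32)  # registered out of band (illustrative)
nonce, tag = issue_token(partner_key)
```

In this sketch MindBank learns only that a registered partner vouched for *some* participant; the contributor's identity never crosses the boundary, matching the anonymity guarantee described above.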
06. Two-way trust
Verification is not a one-way burden on contributors. Organizations partnered with MindBank are also evaluated.
Organization conduct ratings
- Compliance with the ethical use agreement
- Honouring the consent terms attached to specific stories or themes
- Quality and timeliness of impact reporting back to MindBank
- Engagement in ongoing partnership rather than extractive use
- Reports from contributors who interacted with the organization in advisory or consultation work
Conduct ratings are visible to contributors before they consent to specific uses, allowing survivors to make informed choices about which organizations they want to engage with.
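The five conduct criteria listed above could be combined into a single published rating along these lines. The criterion keys, weights, and 0–5 scale are illustrative assumptions, not MindBank's published methodology.

```python
# Hypothetical weights over the five conduct criteria; the two
# consent-related criteria are weighted most heavily in this sketch.
WEIGHTS = {
    "ethical_use_compliance": 0.30,
    "consent_terms_honoured": 0.30,
    "impact_reporting": 0.15,
    "partnership_engagement": 0.15,
    "contributor_reports": 0.10,
}

def conduct_rating(scores: dict) -> float:
    """Weighted average of per-criterion scores on a 0-5 scale."""
    assert set(scores) == set(WEIGHTS), "every criterion must be scored"
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

rating = conduct_rating({
    "ethical_use_compliance": 5.0,
    "consent_terms_honoured": 4.0,
    "impact_reporting": 3.0,
    "partnership_engagement": 4.0,
    "contributor_reports": 5.0,
})
```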
07. Appeals
Any contributor whose verification status, story, or account is affected by a moderation decision can appeal:
- First tier: Review by a senior human moderator within 7 business days
- Second tier: Review by the Ethics Council within 30 days
- Appeals are heard using trauma-informed practice. Contributors are not required to re-explain or justify their experience
- Outcomes and reasoning are communicated in plain language
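The first-tier deadline of 7 business days can be computed with stdlib `datetime`. This sketch skips weekends only; whether statutory holidays also pause the clock is an assumption left out here.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance by `days` business days, skipping Saturdays and Sundays
    (statutory holidays are ignored in this sketch)."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return current

# First-tier review deadline: 7 business days after the appeal is filed.
deadline = add_business_days(date(2024, 3, 1), 7)  # appeal filed Fri 1 Mar 2024
```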
08. Standards review
The Ethics Council reviews verification standards quarterly and may require changes to the framework. The framework is also updated:
- When new evidence emerges in trauma-informed verification research
- When platform data shows the framework is producing unintended barriers
- In response to contributor feedback collected through annual surveys
- When regulatory requirements (PIPEDA, provincial privacy legislation) change
All changes are published in the annual transparency report with reasoning.