Trust Scoring

OMATrust defines attestations, proof formats, and verification rules. It does not define how to score, rank, or weight services. That's intentional — different consumers have different trust requirements, and a one-size-fits-all score would be reductive.

A DeFi protocol cares most about security assessments. A consumer app cares about user reviews. An AI agent might weight endorsements from specific organizations. OMATrust provides the verifiable data; you decide what it means.

What You Have to Work With

After verifying attestations for a subject DID (see Verification Flow), you have a set of structured, verified data points:

  • Which attestation types are present (User Reviews, Security Assessments, Endorsements, Certifications, Key Bindings, Controller Witnesses)
  • Whether each attestation's proofs verified successfully
  • The identity and credibility of each attester
  • Attestation age, expiration status, and revocation status
  • For User Reviews: rating values, whether the review is proven (has valid proofs) or unproven
  • For Security Assessments: assessment kind, outcome, vulnerability metrics
  • For Certifications: program ID, certification level, assessor identity
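These data points can be modeled as a simple record. A minimal sketch follows; the class and field names are illustrative, not defined by the OMATrust specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VerifiedAttestation:
    """Hypothetical shape for one verified attestation data point."""
    attestation_type: str          # e.g. "UserReview", "SecurityAssessment"
    attester: str                  # DID of the attester
    proofs_valid: bool             # did all attached proofs verify?
    expired: bool = False
    revoked: bool = False
    rating: Optional[int] = None   # User Reviews only
    outcome: Optional[str] = None  # Security Assessments only

# A verified user review with a 4-star rating:
review = VerifiedAttestation(
    "UserReview", "did:web:example.com", proofs_valid=True, rating=4
)
```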

Example: Trust Levels for User Reviews

The specification does not define trust levels or tiers for User Reviews — that's left to consumers. But here's a natural way to think about proof strength that follows from the proof types available for reviews:

| Trust Level | Criteria | Confidence |
| --- | --- | --- |
| Verified (high) | Review includes a valid x402-receipt proof: the reviewer paid for and received the service | Highest |
| Verified (medium) | Review includes a valid tx-interaction proof: the reviewer submitted a transaction to the service's smart contract | Medium-high |
| Verified (low) | Review includes a valid evidence-pointer proof: the reviewer has an account on the service | Medium |
| Unverified | Review has no proofs, or proofs failed verification | Lowest |

In a world of automated bot farms and airdrop farmers, unverified reviews should be taken very lightly. Without a proof, there is no evidence the reviewer ever interacted with the service — the review could be one of thousands generated by a script. Proofs are what separate signal from noise.
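The tiers above can be expressed as a small classifier. The proof-type strings match the review proof formats discussed here; the function and level names are illustrative, not part of the specification:

```python
def review_trust_level(proof_types, proofs_valid):
    """Map a review's proof types to an illustrative trust level.

    proof_types: list of proof-type strings attached to the review.
    proofs_valid: True only if all attached proofs verified.
    """
    # No proofs, or failed verification, means the review is unverified.
    if not proofs_valid or not proof_types:
        return "unverified"
    # Strongest evidence first: payment receipt, then on-chain
    # interaction, then an account-existence pointer.
    if "x402-receipt" in proof_types:
        return "verified-high"
    if "tx-interaction" in proof_types:
        return "verified-medium"
    if "evidence-pointer" in proof_types:
        return "verified-low"
    return "unverified"
```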

Building Your Own Scoring

Here are some approaches to consider. None of these are prescribed by OMATrust — they're patterns you can adapt.

Weighted Attestation Presence

Assign weights to attestation types based on your use case:

score = 0
if has_key_binding: score += 15
if has_controller_witness: score += 10
if has_security_assessment: score += 25
if has_certification: score += 20
if has_endorsement: score += 10
if has_verified_reviews: score += 20  # scale by review count and average rating

Proof-Weighted Review Aggregation

When aggregating user reviews, weight proven reviews higher:

weighted_sum = 0.0
weight_total = 0.0

for review in reviews:
    if review.has_x402_receipt:
        weight = 3.0
    elif review.has_tx_interaction:
        weight = 2.0
    elif review.has_evidence_pointer:
        weight = 1.5
    else:
        weight = 1.0

    weighted_sum += review.rating * weight
    weight_total += weight

# Guard against an empty review set before dividing.
weighted_average = weighted_sum / weight_total if weight_total else None

Threshold-Based Trust

Define minimum requirements for different trust tiers:

TIER_HIGH:
- At least one non-expired Security Assessment with outcome = "pass"
- Key Binding + Controller Witness present
- Average verified review rating >= 3.5

TIER_MEDIUM:
- Key Binding present
- At least 3 user reviews (any proof level)

TIER_LOW:
- Service is registered but has minimal attestations

TIER_UNKNOWN:
- No attestations found
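The tier rules above translate directly into a checking function. A sketch, assuming the verified data has been collected into a dict per subject (the key names are hypothetical):

```python
def trust_tier(atts):
    """Return an illustrative trust tier for one subject's verified data."""
    # No attestations at all: unknown, not untrustworthy.
    if atts["total_attestations"] == 0:
        return "UNKNOWN"

    # HIGH requires a current passing assessment, full key binding,
    # and a solid verified-review average.
    passing_assessment = any(
        a["outcome"] == "pass" and not a["expired"]
        for a in atts["security_assessments"]
    )
    if (passing_assessment
            and atts["has_key_binding"]
            and atts["has_controller_witness"]
            and atts["avg_verified_rating"] >= 3.5):
        return "HIGH"

    # MEDIUM requires a key binding plus at least 3 reviews of any proof level.
    if atts["has_key_binding"] and atts["review_count"] >= 3:
        return "MEDIUM"

    # Registered with minimal attestations.
    return "LOW"
```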

Attester-Weighted Scoring

Not all attesters are equal. A security assessment from a well-known auditor carries more weight than one from an unknown entity:

attester_weights = {
    "did:web:certik.com": 1.0,
    "did:web:openzeppelin.com": 1.0,
    "did:web:unknown-auditor.xyz": 0.3,
}

for assessment in security_assessments:
    # Unlisted attesters fall back to a low default weight.
    weight = attester_weights.get(assessment.attester, 0.1)
    score += base_assessment_score * weight

Opportunity: Trust Scoring as a Service

There is a real opportunity for developers to build an integrated trust scoring service on top of OMATrust. Such a service would:

  • Aggregate attestations across subjects
  • Apply consistent, transparent scoring logic
  • Expose scores via an API that OMATrust clients (indexers, frontends, AI agents) can consume directly
  • Provide configurable scoring profiles for different use cases (DeFi security, consumer trust, API reliability)

An indexer that already aggregates attestations is a natural place to layer scoring on top. By offering scoring as a queryable service, you make it easy for downstream consumers to get trust signals without implementing their own verification and scoring pipeline.
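Configurable scoring profiles could be as simple as named weight maps over normalized signals. A sketch; the profile names, signal keys, and weights are all illustrative:

```python
# Each profile weights attestation-derived signals (normalized to 0..1)
# differently for its use case. Nothing here is prescribed by OMATrust.
SCORING_PROFILES = {
    "defi-security": {
        "security_assessment": 0.6,
        "key_binding": 0.2,
        "verified_reviews": 0.2,
    },
    "consumer-trust": {
        "verified_reviews": 0.6,
        "certification": 0.2,
        "endorsement": 0.2,
    },
}

def profile_score(signals, profile):
    """Weighted sum of signals under one profile; missing signals count as 0."""
    weights = SCORING_PROFILES[profile]
    return sum(w * signals.get(key, 0.0) for key, w in weights.items())
```

A service exposing this via an API would let a DeFi frontend and a consumer app query the same attestation data under different profiles.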

This is an open design space — OMATrust provides the attestation data and verification primitives, and a scoring service translates that into actionable trust signals for the ecosystem.

OMA3 will be providing additional infrastructure in the future that allows developers to offer trust scoring in a trust-minimized manner. Centralized trust indexers can also fill the need — the ecosystem benefits from any service that makes attestation data easier to consume, and the underlying attestations remain independently verifiable regardless of how the scoring layer is implemented.

Considerations

When building scoring logic, keep in mind:

  • Expired attestations are stale — An expired security assessment shouldn't contribute the same weight as a current one. You might discount or exclude expired attestations entirely.
  • Revoked attestations are inactive — Never include revoked attestations in positive scoring.
  • Attester credibility matters — A security assessment is only as trustworthy as the auditor. Consider maintaining attester allowlists or reputation tiers.
  • Recency matters — A review from last week is more relevant than one from two years ago. Consider time-decay functions.
  • Absence of attestations is not negative — A new service with no attestations is unknown, not untrustworthy. Distinguish between "no data" and "negative data."
  • Supersession — For User Reviews, only the most recent review from a given attester for a given subject should count.
  • Gaming resistance — Unverified reviews are easier to spam than verified ones. Weight accordingly.
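For the recency point, an exponential time-decay function is a common choice. A minimal sketch, assuming a configurable half-life (the 180-day default is arbitrary):

```python
# A review's weight halves every `half_life_days` days. This is one of
# many possible decay functions, not something OMATrust prescribes.
def decay_weight(age_days, half_life_days=180):
    return 0.5 ** (age_days / half_life_days)
```

Multiplying each review's proof weight by `decay_weight(age_days)` combines proof strength and recency in a single aggregation pass.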

Further Reading