
· 10 min read
Juan Caballero

A year ago today, I started working full-time on Verite, coordinating research on many fronts and moving from a prototype and "cookbook for an ecosystem" to production trials, compliance research, and use-case exploration. It's been a great year and we have a lot to be proud of -- but also a backlog of updates to the documentation, as the exploration has yielded iteration and variation, new frontiers and new constellations of design. This post serves as a milestone report surveying those discoveries, new research directions, and ongoing conversations.

Compliant Membership

For all of this year, we remained laser-focused on one use-case: "membership proofs" that attest that a specific blockchain address belongs to a coarsely-defined "compliant pool" checked to a minimal level, while revealing minimal identifiable data on-chain or on the open web. Delivering this value is what Verite was created to do; all the other use-cases we have been working on follow from it and build on it -- or, to put it another way, are unlikely to go to production until our primary use-case has users and traction, since they are "follow-on" products and "knock-on" network effects.

Our research uncovered much complexity and diversity among the on-chain consumers of these membership proofs, which leads us to support different variations and form-factors for this use case and the supporting technical work needed to go beyond mere membership. We found:

  1. Some smart-contract-based consumers only wanted to consume membership proofs on-chain, making a strict "nothing on-chain" approach across all Verite architectures unworkable
  2. Other smart-contract-based consumers want to outsource the enforcement of a compliance policy (usually roughly defined, and expected to be refined over time) to a single service, with the expectation of real-time monitoring and complete separation of concerns.
  3. These services will need to retool as regulatory requirements and reliance frameworks evolve over time.
  4. Everyone (even the services!) worries about centralization and silos forming -- the services want to be able to work with one another's end-users, and thus provide a larger (and more decentralized!) customer-base to their on-chain customers.

Our earlier research assumed on-chain activity could be gated by Verifiable Credentials (or "off-chain soulbound tokens," as they are sometimes called in web3), but the further we delved into the "customer requirements" of on-chain platforms, the more apparent it became that accepting and validating verifiable credentials did not fit their technical or business models, with or without input into the schematization of those policies into composable bits. They wanted a distinct service provider (#2) to handle that, and off-chain tokens would allow these service providers to federate their customer bases and interoperate (#4), as well as to evolve over time (#3). Their secret sauce and competitive edge were best left to their relationships with on-chain customers, as well as their specific tooling and integration patterns (#1).

Governance and Compliance

Another aspect that shifted substantially over the last year was the role of governance in the system. When the work began, without splitting on-chain relying parties from verification services, it was easier to imagine a simple feedback loop between relying parties and the schema design for the contents of the Verite credentials which would be issued and/or validated by verification services. But if verification services would be independent actors rather than implemented by and owned by the relying parties, the complexity of governance needed for this network was growing.

In particular, we worried that on-chain communication would be the hardest to align on given the business model pressures of this 4-actor model. While "schema design", as we had been calling it, sounded like a relatively simple technical question of data modeling, it turned out to be a far more complex beast across different chains of reliance and slight architectural variants.

For this reason, less schema design happened in Centre-hosted working groups than we had expected. As end-to-end products continue to evolve, this kind of alignment may be premature, and many Verite partners have opted to defer such harmonization until there is significant adoption and traction. The schemas published so far are adequate for geographically-limited launches of the initial use-case, and as the sphere of supported geographies and use-cases expands, or as more players enter the space and demand grows for federation and interoperability, we look forward to truly viable self-governance. In the meantime, we will continue technical research and design work.

Specifically, that work falls into four categories:

  1. On-chain Design
  2. Architectural Design & Adoption
  3. Technical Standardization
  4. Additional Use-Cases

Ongoing On-Chain Research

One of the most active groups Verite hosted over the last year was the On-Chain Best Practices group, which zoomed in on the "last mile" between the on-chain consumers relying on Verite to deliver it safe addresses, and the off-chain verifiers handling all the messy details and off-chain data with privacy and nuance. This group focused on the subtle difference (lost on many readers of the Verite documentation!) between Verification Results (verbose, identifying, necessarily off-chain and archival for reporting and legal reasons) and Verification Records (concise, anonymous, essentially just a log entry or a pointer to archival results documentation). Only the latter can go on-chain, and thus the latter needs to be enough for on-chain relying parties to act; the former, however, enables all the flexibility and nuance needed for a policy engine, a data translation layer, and a federation of competitors.
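
The Result/Record split can be sketched in TypeScript. Note that only the subject, schema, and expiration fields appear in Verite's reference implementation; the other field names (and the redaction helper) are hypothetical, for illustration only:

```typescript
// Verbose, identifying, stays off-chain and archival.
// Only `subject`, `schema`, and `expiration` come from Verite's reference
// implementation; `issuerDid` and `processDetails` are hypothetical.
type VerificationResult = {
  subject: string        // wallet address
  schema: string         // kind of credential verified
  expiration: number     // unix timestamp
  issuerDid: string      // identifying detail: must stay off-chain
  processDetails: string // verbose audit info: must stay off-chain
}

// Concise, anonymous: essentially a log entry plus a pointer
// to the archived Result.
type VerificationRecord = {
  subject: string
  schema: string
  expiration: number
  resultRef: string // opaque pointer to the archival Result
}

// Derive the concise, anonymous Record from the verbose Result,
// dropping everything identifying.
function toRecord(result: VerificationResult, resultRef: string): VerificationRecord {
  const { subject, schema, expiration } = result
  return { subject, schema, expiration, resultRef }
}
```

The point of the split: everything identifying stays in the Result, while the Record carries only what an on-chain relying party needs to act, plus a pointer back to the archive.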

Surveying the landscape, a Verite research team and a few academic interlocutors cooperated on a synthetic comparison of the architectures we have seen addressing the on-chain compliance problem space. The result was a thorough lightpaper laying out where Verite fits in the broader landscape of on-chain identity with and without verifiable credentials. Zooming in even further on the relationship between on-chain consumers and identity-enabled on-chain tokens, Keith Kowal from the Hedera-focused research team at Swirlds Labs wrote up another useful primer. It focuses on where on-chain tokens do and don't fit the "soul-bound" model for a "decentralized society," as well as how on-chain artefacts can and cannot meet the requirements of compliance in today's jurisdiction-based and centralized society of laws.

As products launch and evolve in the marketplace, this group remains confident that alignment on incrementally more technologically prescriptive guidance will be valuable, leading to shared interfaces and shared security models as federations form. For this reason, Verite's most active working group plans to keep meeting and keep designing on-chain registries and smart contracts across multiple languages and blockchain environments. Look forward to additions to Verite's documentation along these lines in Q2 and Q3 of this year.

Ongoing Architectural Design

The RWoT lightpaper linked above implicitly compared the Verite architecture to simpler end-to-end products already active in the DeFi space today, but this comparison only works at a certain level of abstraction. Zooming in a bit, Verite has actually forked into two separate-but-equal architectures over the last year, and works hard to keep feature parity and use-case parity between the two.

The original Verite design assumed an end-user either using TWO wallets (an identity wallet and a crypto wallet) or a "fat wallet" controlling both a private, portable DID and one or more blockchain private keys. This assumption shaped our initial schemas, and the original Verite implementation guide imagines multi-chain crypto wallets adding support for a DID in much the same way they add support for a new blockchain or private-key type, in order to handle private, DID-based off-chain VCs. This dual-wallet approach, both "SSI" (self-sovereign identity) and "crypto", is a cornerstone of the "Web5" approach, allowing SSI for next-gen (and post-cookie!) web2 use-cases, strong privacy and pseudonymity for web3 use-cases, and even a few carefully guard-railed bridges between the two. While we fully endorse this approach, we are waiting for more implementation interest and DeFi demand before further prototyping technical artefacts for this architecture.

In the meantime, a simpler architectural pattern (outlined in detail in a prior blog post) emerged in prototyping work with Circle Engineering: an address-based credential, issued not to a DID controlled by a specific wallet client, but to an address (which might even be controlled from multiple wallet clients). This enables a different path to wallet support, piggybacking on the WalletConnect version 2 upgrade cycle to get wallets implementing lightweight, "minimum-viable" support for verifiable credentials without the need to handle DID keys or complex VC-specific protocols.
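
As a sketch of what this looks like in practice (the did:pkh subject identifier follows the pattern described here; the attestation field names and address are illustrative only), an address-based credential subject might be:

```typescript
// A hypothetical Ethereum mainnet address controlled by the wallet.
const ETH_ADDRESS = "0xb9c5714089478a327f09197987f16f9e5d936e8a"

// Minimal shape of an address-based credential subject: the credential is
// issued to the address itself (as a did:pkh), not to a wallet-held DID.
// The attestation shape below is illustrative, not a Verite schema.
const credentialSubject = {
  id: `did:pkh:eip155:1:${ETH_ADDRESS}`,
  KYCAMLAttestation: {
    process: "...",            // elided process identifier
    approvalDate: "2023-01-01" // illustrative value
  },
}
```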

In the coming months, expect to see the WalletConnect instructions in the Verite documentation expanded and entries added to other pages as more wallets roll out support for Verite credentials via WalletConnect rails. This is a major ongoing research topic with Circle and WalletConnect, testing the hypothesis that thin wallets want to stay thin just as much as dApps want to stay ignorant of the identities of their users (even in compliant products).

Ongoing Technical Standards Contribution

It bears repeating often that Verite is not a product or a solution, but rather a prototype and a "cookbook" for overlapping DeFi ecosystems. While Centre strives to be a neutral convener of co-development between startups and major players, Centre is not a "standards organization"; it encourages use-case alignment, with subsequent development happening in the appropriate standards organizations. Rather than design or specify technologies, Centre strives to integrate and socialize them. As such, a key design goal for Verite from the beginning was to invent as little new technology as possible and use standards-track building blocks wherever possible, maximizing interoperability between competitors and the sustainability of technical decisions -- two crucial ingredients in any sustainable ecosystem.

For this reason, while Verite is not a product or a solution, it could be called a "protocol," aiming at adoption by wallets and dapps. Or to be more precise, Verite could be seen as a "profile" and sample implementation of the more general-purpose Presentation Exchange protocol for Verifiable Credentials. For this reason, Centre is committed to maintaining and iterating the Presentation Exchange specification at the Decentralized Identity Foundation.

Of course, Verite relies on many other standards and open-source initiatives, to which it is also committed. The bedrock W3C data model specification on which all VC work depends is currently being iterated and Verite plans to take advantage of new features and improvements with time. The nuts and bolts of crypto-wallet adoption, off-chain VC presentation, and signing involves representing VC use-cases at the Chain-Agnostic Standards Alliance (CASA).

Ongoing Research Towards Other Use Cases

While the focus has always been on privacy-preserving exchange of off-chain credentials to enable on-chain reliance, lots of doors are opened by the adoption of Verite building blocks for other credentials. We've discussed:

  • "Travel Rule" use-cases and the role of VCs in a more open-world version of today's VASP-to-VASP discovery and exchange protocols (led by Notabene)
  • "Heavier" credentials expressing not just membership but actual personal information for cross-organization de-duplication or identification, or in its more complex formulation, "reusable on-boarding" and "portable identity verification"
  • Identity assurance FOR verifiable credentials, leaning on existing web2 trust frameworks or "holder-binding" mechanisms for ensuring that the same actual person identified and onboarded is the person presenting the credentials later, whether they be "membership" credentials or more high-stakes heavy ones
  • Zero-knowledge mechanisms for generating IN THE WALLET a verification record that can be sent to an on-chain relying party, bringing us back to the fat-wallet, 3-party model
  • Schemas for future use-cases, that might happen outside rather than inside depending on the co-designers and their IP constraints

Each of these directions presents new possibilities and its own venues for future work, technological and otherwise. No timelines or commitments are in place, as how each of these research directions pans out depends on how the work done thus far finds its way to market and wallet adoption.

· 11 min read

Part 2 of this 2-part series explains the did:pkh/CACAO variation for Verite data models and flows, which provides an entry path for wallets that may not support sufficient functionality for emerging decentralized identity patterns.

Since some wallets may not themselves be willing to embed protocol-specific logic (interaction with verifiers) or more general verifiable-credential logic, we have to find a kind of "minimum viable" level of support for today's non-DID-enabled crypto wallets. While handling DIDs and signing Verifiable Presentations brings a kind of secure indirection that enables portability and multi-chain wallets, these properties are not strictly essential to the core functionality of Verite. For this reason, we consider adequate a crypto wallet that can receive a Verifiable Credential issued against its blockchain address and pass it to dApps, with a few adjustments and supplements.

Phase 0: Issuance Directly to Crypto Wallet

In a crypto-wallet-centric end-to-end flow, the trust model is different and the interplay between credential wallet and crypto wallet can be greatly simplified. The credentials themselves must also be slightly different: instead of obtaining the credential subject DID directly from the wallet to which they are being issued, the issuer will use a credential subject identifier based on a specific blockchain address controlled by that wallet. In DID terminology, rather than attesting to the controller of a wallet, the credential attests only to a specific address controlled by that wallet.

This greatly simplifies the ownership question by relying on native mechanisms for proving ownership of the address -- at the time of issuance, as well as at the time of verification of the credentials.

Two Options of Expressing a Blockchain address as a DID (and as a VC subject)

Instead of defining the subject of the VC as a chain-agnostic DID provided by the wallet, the issuer will deterministically generate a DID from the blockchain address controlled by the connected wallet. Multiple DID methods allow this possibility; we’ll describe two of them, assuming a wallet with an Ethereum address (referred to as ETH_ADDRESS).

  • did:key method - issue against a crypto wallet’s public key: If the issuer has the wallet address ETH_ADDRESS and any signature over a known message, the corresponding public key can be recovered using the ecrecover mechanism ported over from Bitcoin in the early days of Ethereum. In this way, the issuer can deterministically obtain a did:key DID to use as the value of the credential subject identifier. This is the method Circle will begin with, for ease of implementation by participants.
    • In this case, the mapping is: did:key:<ecrecover(ETH_ADDRESS, signature)>
    • For blockchains that do not use a Bitcoin-style pay2hash address system, like Solana and Polkadot, no recovery from a signature is necessary because the base form of the address is already a public key supported by multibase and thus by did:key.
  • did:pkh method - issue against a crypto wallet’s public address: Other DID methods, like did:pkh, allow DIDs to be defined directly based on blockchain addresses in their commonly-used, public-facing forms. Long term, this is the preferred method. Among other advantages, the implementation across chains is more apparent.
    • In this case, the mapping is: did:pkh:eip155:1:<ETH_ADDRESS>. Here, eip155 refers to the EVM namespace (defined in EIP-155), while 1 refers to the Ethereum mainnet according to EIP-155.
    • Just as the did:key URI scheme relies on the multibase registry, so does the did:pkh URI scheme rely on the Chain-Agnostic Standards Alliance’s namespace registry to add non-EVM namespaces.
    • In cases where human-readability is important, end-users can introspect the VC and see their familiar address, as opposed to a public key that, in pay2hash systems like BTC or ETH, they might never have seen or known they control.
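
The two mappings above can be sketched as follows. The did:pkh construction is purely syntactic; the did:key path assumes the public key has already been recovered from a signature (e.g. via ecrecover) and multibase-encoded, which is elided here. Function names are illustrative:

```typescript
// did:pkh: built directly from the public-facing address.
// `chainId` is the EIP-155 chain id (1 = Ethereum mainnet).
function toDidPkh(ethAddress: string, chainId = 1): string {
  return `did:pkh:eip155:${chainId}:${ethAddress}`
}

// did:key: built from the wallet's public key. The caller is assumed to
// have recovered the key from a signature over a known message and
// multibase-encoded it per the did:key spec (both steps elided here).
function toDidKey(multibaseEncodedPublicKey: string): string {
  return `did:key:${multibaseEncodedPublicKey}`
}
```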

With Ethereum and dApp-native Identity

Wallets that have not incorporated decentralized identity capabilities rarely support JWT signing features or other token mechanics common to the traditional web2 identity world. For this reason, many web3 sites and dApps have started using the wallet connection pattern to create a more feature-rich and auditable session mechanism via off-chain message signing (i.e. personal_sign on EVM systems). Equivalents for other blockchain ecosystems, such as Solana, are forthcoming.

In the case of issuance, this signature is enough to extract the crypto wallet’s public key, as mentioned above. Importantly, though, it also enables delegated keys to sign offchain events without another onerous or fraught wallet-interaction, as we will see below.

Phase 1: Off-chain Verification

Variant: Crypto wallet with only VC storage capabilities

At verification time, when a wallet "connects" to a dApp by providing an off-chain signature over a structured authentication message, the dApp will have the wallet's address (and live proof-of-control, if the authentication message included a secure nonce), so it can simply compare this address with the corresponding did:pkh of the VC. This way, the verifier will not need to do an ownership check on the VC, and the dApp can trust the verifier to have received credentials from the right wallet because it, too, will require a wallet connection and proof of ownership of the same wallet.

Without necessarily even having to parse, validity-check, or introspect the verifiable credentials, any wallet that can store them (whether as JWTs or even as raw text files) can submit them directly to verifiers, as shown below.

Credential exchange without a DID wallet

Note: while it is recommended that crypto wallets parse verifiable credentials and check their validity or safety, crypto wallets could in theory allow blind passthrough if the user can assume responsibility for their contents. In the Verite case, there are few security concerns or possible abuses.

By itself, however, this bare VC is inferior to a VP from a full-featured decentralized-identity wallet, since it does not contain a non-repudiable off-chain wallet signature for auditing purposes. Or, to put it another way, it is only as trustworthy as the authentication of the wallet that sent it to you, and there is little standardization of the receipts of crypto-wallet authentication that you could keep to replay it for a future auditor or security review.

While the corner cases of impersonation or exfiltrated VCs might be vanishingly rare, the "audit trail" of a bare VC is weaker than a VC wrapped in a timestamped signature. For this reason, we encourage Verite dApps to create a functional equivalent to a verifiable presentation in the form of a signed presentation receipt (signed with a session-specific ephemeral key) for logging purposes. To accomplish this, we return to the Sign-In With Ethereum pattern to elaborate on its key delegation feature.

With Ethereum Flow

As mentioned above, we support the emerging standard approach of the "Sign-In With Ethereum" mechanism, which defines a sign-in message that includes a domain-binding anchor, an ephemeral session key, and other caveats. While the ephemeral session key property was not essential to the issuance wallet connection, it can be useful in the verification situation for more trustless (and auditable) logging.

By generating an ephemeral key and including it in the initial wallet-connection message for the crypto wallet to sign over upon authenticating itself to the dApp, the wallet effectively "delegates" that ephemeral key for the duration of the session. In UX terms, this saves the user from a distinct off-chain wallet signature at each confirmation or consent event. By carefully defining the other properties of the SIWE message, dApps can secure and constrain that delegation, link to an applicable terms-of-service agreement, enable DNS-based domain-checks analogous to the "lock symbol" in modern browsers, etc.
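
A sketch of such a message, following the EIP-4361 (SIWE) layout with the ephemeral session key carried as a resource; the helper, field values, and statement text are illustrative assumptions, not Verite-specified:

```typescript
type SiweFields = {
  domain: string        // domain-binding anchor
  address: string       // the wallet's address
  uri: string           // the signing request's URI
  nonce: string         // secure nonce for proof-of-control
  issuedAt: string      // ISO 8601 timestamp
  sessionKeyUri: string // ephemeral session key, e.g. as a did:key
}

// Assemble an EIP-4361-shaped sign-in message; the Resources section
// carries the ephemeral key being delegated for the session.
function buildSiweMessage(f: SiweFields): string {
  return [
    `${f.domain} wants you to sign in with your Ethereum account:`,
    f.address,
    "",
    "I accept the Terms of Service of this dApp.", // illustrative statement
    "",
    `URI: ${f.uri}`,
    "Version: 1",
    "Chain ID: 1",
    `Nonce: ${f.nonce}`,
    `Issued At: ${f.issuedAt}`,
    "Resources:",
    `- ${f.sessionKeyUri}`,
  ].join("\n")
}
```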

Once the user has "connected" their wallet by signing this SIWE message, a CACAO is generated as a receipt of that session (and of the delegation of signing rights to the key). This allows the dApp to use smoother UX than requiring a full off-chain wallet signature to confirm each consent event or internal transaction (such as the presentation of VCs in a signed VP). But it also provides a compact and tamperproof way of encapsulating each event or internal transaction as a time-stamped and signed object for logging purposes -- this makes each event as verifiable as an off-chain (or on-chain) signature, via the indirection of the delegated key.

Ownership Verification

You could say that the crypto wallet delegates to the dApp the encapsulation and signing of a VP -- a kind of standardized logging object for a presentation event -- by letting it create a short-lived key with which to sign. This allows the verifier to confirm that the dApp is interacting on behalf of the wallet. Since the verifier has confirmed control of the wallet address with a SIWE message, and the VC is issued to the address directly, no ownership verification is needed as with a decentralized wallet; thanks to the CACAO, future auditors can replay these interactions and re-check signatures and tamper-proof objects to confirm all of these transactions trustlessly.

Detailed Flow

Ownership verification with non-DID wallet

  1. Wallet initiates DeFi transaction with dApp.
  2. dApp generates a session-specific ephemeral signing key and encodes it in the SIWE message for the wallet to sign. The wallet delegates future signing to this session key once it vouches for it (by signing the CACAO).
  3. Once the wallet has signed it and returned it to the dApp, the signature and message are encoded into a compact CACAO session receipt for logging and forensic purposes (if needed).
  4. Next, the dApp lets the verifier know about the session by POSTing the receipt to an endpoint (e.g. signIn). The signed receipt also includes caveats, a domain-binding, an expiration, and other constraints to secure the delegation and the transfer of the session parameters.
  5. The verifier saves the CACAO, using it only in the scope of this verification session (to verify the VP signed by the ephemeral key). Once the CACAO verification step is completed, the session object will be updated.
  6. Instead of sending the wallet to verify directly with the verifier (as in the previous post), the wallet will submit the VC directly to the dApp (or an agent/service it trusts). The dApp presents the prompt to verify.
  7. Wallet submits the bare VC.
  8. Subsequent requests from the dApp will include a reference to the session, which the verifier can use if it needs to check signatures made by that key. The VC(s) submitted by the dApp in this case will not be signed in a VP with the wallet’s key; instead, they will be put into a VP and signed by the dApp using the ephemeral key (the signing key mentioned in step 2 above) delegated to it by the SIWE message. Introspection into the CACAO is required to extract the public key that signed the VP, as well as a signature from the wallet key over that public key.
  9. When all the information is submitted to the verifier, the verifier needs to examine the ownership of the credential:
    1. Extract the public key of the session signing key from the resources section of the CACAO
    2. Use the public key of the session signing key to validate the VP’s signature. This ensures that the VP was properly signed by the dApp (which held the key and got user consent to delegate signing rights to it) and that it has not been tampered with in transport.
    3. Compare the iss in the CACAO with the wallet’s DID in the VC (in this case a did:pkh representing the wallet address as a long URI). They should match if the dApp’s SIWE message conforms to the SIWE specification. This checks that the wallet which vouched for the session key is the subject (and holder) of the VC, and that it is also connected to the dApp with a signature over a nonce, included in the CACAO to keep future auditors from having to trust the verifier.
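
The comparison in that final step can be sketched as a simple check; field names are illustrative, and all signature verification from the earlier steps is elided:

```typescript
// Minimal shapes for the two artifacts being compared.
type Cacao = { iss: string }                       // did:pkh of the vouching wallet
type Vc = { credentialSubject: { id: string } }    // did:pkh the VC was issued to

// The wallet that vouched for the session key (CACAO iss) must be the
// subject of the VC. Case is normalized because EVM addresses may be
// checksummed in one artifact and lowercased in the other.
function sameSubject(cacao: Cacao, vc: Vc): boolean {
  return cacao.iss.toLowerCase() === vc.credentialSubject.id.toLowerCase()
}
```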


Circle’s implementation of the Verite protocol allows us to serve our customers and the dApps they interact with equally, putting the rigor of our KYC processes at the service of a flow that is auditable and verifiable end-to-end, without duplicating KYC processes or the PII from those processes across the chain of asset custody. We are proud to be driving the Verite process, and welcome more implementations, whether end-to-end issuer+verifier solutions like ours or more focused implementations that bring more wallets and more users into the ecosystem.

As the Centre team updates its documentation and sample implementation to reflect the new patterns and flows, we will continue to work with them to share the insights we are gaining from our exploratory work with dApps and clients.

· 12 min read

Since Verite’s original release, we’ve gotten feedback from development partners and the broader community about which patterns are useful and where guidance could be improved. Some themes include:

  • Persistence of verification results to on-chain registries is over-emphasized.
  • Verite’s current verification flow documentation assumes that Verifiable Credentials (VCs) are stored in wallets that support decentralized identity protocols, which are currently somewhat scarce in the market.

Upcoming Verite releases (including code and documentation) will address these concerns as follows:

  1. A forthcoming editorial revision of Verite documentation will explicitly describe a variant of the subject-submission patterns without any on-chain persistence.
  2. A coming release will include code and documentation demonstrating how more of today’s wallets can participate in VC flows in a standardized way, until DIDs become more common.

In this two-part blog series, we’ll preview these updates. This post (Part 1) includes an overview of options for the use of off-chain VCs to support on-chain decision-making, including subject-submission without on-chain persistence. This begins with a summary of core concepts and principles of Verite. Readers already familiar can skip ahead to the "Two Phases of Verification" section for an overview of the new flow being added to the Verite docs and pioneered by Circle’s implementation. After this new end-to-end pattern for Verite is outlined, a more analytical "Discussion" section follows.

The second post in the series will zoom in on how wallets that do not currently support sufficient functionality for emerging decentralized identity patterns can be "retrofitted" and supplemented to support Verite’s off-chain verifications like those described in this post.

Verification Concepts

If you’re new to Verite, or want a reminder, this section describes how Verite credential verification works off-chain to support smart contracts with maximum privacy around sensitive identifying information.


The following entities are involved in the Verite credential verification process:

  • Crypto wallet: aka "payment wallet"; controls a blockchain address; may be browser or mobile, hosted or self-hosted.
  • Credential wallet: aka "identity wallet"; stores and shares VCs, and controls a "DID" (meta-wallet identifier). Some or all of these functional roles may be subsumed into a crypto wallet or a trusted dApp, or this entity may be completely distinct software, depending on the flow and trust model.
  • Verifier: aka the "verifier service" or "verifier endpoint" that consumes a privacy-preserving, off-chain token (a verifiable credential) on behalf of a dApp. It can be operated by the dApp or a trusted partner, but in either case it needs to be previously known to and trusted by the dApp at a static address.
  • dApp: decides which verifiers to trust; the dApp frontend is responsible for triggering the verification process; the dApp backend is responsible for on-chain validation.
  • On-chain verification registry (optional): depending on implementation choices, dApps may rely on on-chain storage of verification results. This may be part of the dApp or a separate registry accessible by other parties. For simplicity, we’ll assume the former case in this blog.

Verification Result

A Verification Result enables a "verified" crypto wallet – that is, a wallet whose controller has proven possession of a valid VC that has passed a dApp-trusted verifier’s verification process – to interact on-chain.

You can think of a Verification Result as the result of slicing and dicing a VC, potentially obfuscated, with an attached proof from the Verifier. The Verification Result structure in Verite’s reference implementation includes the following fields, returned along with the Verifier’s cryptographic signature:

  • subject: identifies the wallet address
  • schema: indicates the kind of credential (flexible according to the implementation)
  • expiration: a number that is less than or equal to the expiration value in the VC.

The Verification Result may be extended to include additional fields, which could be carried over identically or fuzzed/obfuscated in some way (which is especially relevant if using on-chain storage).

export type VerificationResult = {
  schema: string
  subject: string // address
  expiration: number
}
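
A consumer of this structure might apply checks along the following lines. This is a sketch only: the function name is illustrative, and verifying the Verifier's cryptographic signature over the result is elided:

```typescript
type VerificationResult = {
  schema: string
  subject: string // address
  expiration: number
}

// Basic fitness checks before acting on a Verification Result:
// the result must be about the calling address and not yet expired.
// (Verifying the Verifier's signature over the result is elided.)
function isUsable(r: VerificationResult, caller: string, nowSeconds: number): boolean {
  return (
    r.subject.toLowerCase() === caller.toLowerCase() && // result is about the caller
    r.expiration > nowSeconds                           // not yet expired
  )
}
```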

Verifier vs Subject Submission

Verite’s implementation demonstrates two possible methods of submitting Verification Results to smart contracts:

  • Verifier-submitted: the verifier submits the Verification Result and signature directly to the smart contract immediately after verification.
  • Subject-submitted: the verifier returns the Verification Result and signature to the subject (or an entity or code acting on the subject’s behalf) who then submits the signed Verification Result to the smart contract.

Verification Principles

While verifications can be carried out using a variety of methods, all are expected to conform to the following principles:

  1. Trusted issuer principle: the credential was issued by one of a list of trusted issuers.
  2. Credential ownership principle: The crypto wallet that owns (i.e. can prove themselves to be the subject of) the credential is the one making the DeFi transaction request, regardless of whether the credential itself is held and presented by a separate piece of hardware (i.e. whether the credential wallet and the crypto wallet are distinct)
  3. Trusted verifier principle: The verifier that provides verification is on the allowlist of verifiers already trusted by the dApp.

The fact that all Verifiable Credentials are signed by their self-identifying issuers makes the first criterion simple: issuers’ public keys can be obtained from the identifiers listed in a registry of trusted issuers, and their signatures can be checked against them at verification time. In this way, the verifier enforces the trusted issuer principle.
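
In sketch form, the trusted issuer check is an allowlist lookup; the issuer DIDs below are hypothetical, and the signature check against the issuer's public key is elided:

```typescript
// A registry of trusted issuer DIDs (hypothetical entries).
const TRUSTED_ISSUERS = new Set([
  "did:web:issuer.example.com",
  "did:key:z6MkexampleTrustedIssuer",
])

// Trusted issuer principle: the credential's issuer must appear in the
// registry. (Checking the credential's signature against that issuer's
// public key is a separate, elided step.)
function isTrustedIssuer(issuerDid: string): boolean {
  return TRUSTED_ISSUERS.has(issuerDid)
}
```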

DApps, in turn, uphold the trusted verifier principle, deciding which verifiers they trust to verify VCs – including standard VCs checks (non-revocation, tamper-free) and any other fitness-for-purpose checks (such as subsetting valid Verite credentials according to additional constraints on issuers or KYC processes).

Enforcement of the credential ownership principle will vary depending on how the crypto wallet relates to the credential wallet. In this post, we’ll describe how this works with a distinct decentralized identity wallet controlled by the same actor as a crypto wallet; in the next, we’ll describe how crypto-wallets-only actors may be accommodated.

Two Phases of Verification

Verite’s Smart Contract Patterns documentation describes the two-phase process of how smart contracts can require successful verification of VCs. To summarize:

  • Phase 1: "Off-chain" verification by a verifier, based on traditional web or API stacks, resulting in a lightweight "Verification Result" object.
  • Phase 2: On-chain validation of the Verification Result, which is optionally persisted on-chain in private, obfuscated, or opaque form, referred to as a "Verification Record".

Verification phases

The two-phase process enables the use of VCs with chain-based applications for which on-chain verification operations are either technically impossible or economically impractical. More importantly, it reduces, by construction, the amount of potentially sensitive data sent on-chain. All examples that follow refer to main-net Ethereum, but these patterns should be replicable on all major chains, availing themselves of further privacy or obfuscation techniques where supported.

The subsequent sections discuss these phases and variations. This first post in the two-part series assumes a decentralized identity wallet, and the next will cover other options.

Phase 1: Off-chain Verification

Assuming a Decentralized Identity+Crypto Wallet

DeFi verification with decentralized id wallet

  1. The wallet initiates a DeFi transaction with a dApp.

  2. The dApp (frontend) chooses a trusted verifier to start the verification process.

  3. The verifier responds to the wallet, telling it how to start the process¹ through the dApp.

  4. The wallet chooses one of the credentials stored locally and submits it to the verifier.

  5. The verifier confirms the credential’s signature and the validity of its contents. The dApp retrieves confirmation (via callback or polling).

  6. The dApp unlocks the subsequent DeFi transaction flow.

Ownership verification

In step 4, when the wallet submits the credential, it wraps the VC in a Verifiable Presentation (VP) and signs it. When the verifier receives the submission, it uses the Decentralized Identifier (DID) – for example, did:key:<public_key>² – listed as holder in the VP to verify the JWT’s signature (made with the corresponding private key). It also compares the holder DID in the VP with the DID listed as subject in the VC, i.e. the public key that the credential was issued against. Together, those two checks guarantee that the wallet submitting the credential is the one the issuer intended, and thus rightly owns it.
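Over already-decoded objects, the two comparisons can be sketched as follows; the field shapes are assumptions for illustration, since in practice both artifacts are signed JWTs checked by the Verite library:

```javascript
// Sketch of the ownership checks from step 4. `verifySignature` stands in for
// real JWT signature verification against the holder DID's public key.
function ownsCredential(vp, vc, verifySignature) {
  const holderDid = vp.holder
  const subjectDid = vc.credentialSubject.id
  // 1) the VP is signed by the holder; 2) holder matches the credential subject
  return verifySignature(vp, holderDid) && holderDid === subjectDid
}
```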

Phase 2: Validation and use in smart contracts

The second phase of verification covers how a Verification Result gets used as a prerequisite to on-chain interaction.

Validating a Verification Result

Along with a Verification Result, a smart contract will receive a signature over the result, allowing it to confirm that the result was provided by an authorized party, as described in Verite’s Smart Contract Patterns documentation.

On-chain storage of Verification Results (or not)

We’ll look at three cases below. With the verifier-submitted pattern, the verifier always stores the Verification Result on-chain. With subject submission, the Verification Result may or may not be stored on-chain, depending on the implementation.

Case 1: Verifier-submitted verification result, stored on-chain

With the verifier-submitted approach, the verifier stores the Verification Result on-chain after verification.

We’ll cover step 1.5 later; this is relevant for subsequent interactions with a dApp when on-chain storage is used.

Verifier-submitted, with storage

Case 2: Subject-submitted verification result, stored on-chain

The subject-submitted approach also supports persistence, as determined by the implementor. Like the verifier-submitted case, in subsequent interactions, the dApp will be able to determine that the wallet is already registered.

We’ll cover step 1.5 later; this is relevant for subsequent interactions with a dApp when on-chain storage is used.

Subject-submitted, with storage

Case 3: Subject-submitted verification result, off-chain

If the Verification Result is not persisted on-chain, then every interaction of the wallet with the contract will be associated with a fresh, unique Verification Result.

Subject-submitted, no storage

Subsequent interactions

Assuming the implementation uses on-chain storage, once phases 1 and 2 (plus on-chain storage) are complete, then on subsequent interactions a dApp can check the registry to confirm the wallet/address is already approved, avoiding the need to re-verify the VC. The dApp backend (or registry, if separate) can simply check that msg.sender is in the set of registered addresses.
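The registry check can be modeled as a toy sketch; this mirrors the on-chain mapping but is not Verite's implementation:

```javascript
// Toy in-memory stand-in for the on-chain registry: address => approved.
const registry = new Set()

// Called after a Verification Result is validated (phase 2 with storage).
function registerVerification(address) {
  registry.add(address.toLowerCase())
}

// What gets checked against msg.sender on subsequent interactions.
function isRegistered(address) {
  return registry.has(address.toLowerCase())
}
```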

Subsequent dApp interactions, on-chain storage

Without on-chain storage, the VC would in general be re-verified³ each time (as shown in the diagram in Case 3), and the Verification Result submitted along with the transaction. Since verification is off-chain (and generally fast and inexpensive, depending on the provider), and since this avoids on-chain storage of potentially correlatable data, this is often the preferred solution.


On-chain Storage Considerations

The on-chain storage patterns have implications that require careful consideration:

  1. On-chain persistence of any potentially identifiable (or re-identifiable) data should be avoided. If on-chain persistence is used, it is up to the design of a particular implementation to manage that smart contract’s access rights and obfuscation mechanisms to minimize privacy risks.
  2. The verifier-submitted pattern, in its current form, assumes the verifier’s willingness to pay transaction fees (e.g. "gas fees", in EVM systems).
  3. The verifier-submitted pattern includes an obligation that the verifier update the on-chain state corresponding to revoked credentials.
  4. In general, this may not be the most cost-effective option.

There are optimizations: in the Verite open source, the Verification Registry may be shared among multiple contracts, and proposed improvements enable further reuse through proxies. These are all options for implementers; Verite’s open source repositories are intended as example implementations, not as normative guidance.

However, we think it’s important to draw attention to off-chain options, which have a privacy profile we find favorable in many use-cases. This is one of the reasons we’re increasingly demonstrating this option, as seen in the off-chain NFT allowlist.

Circle’s Implementation Choices

Circle’s implementation does not write Verification Results to on-chain storage. The advantage of writing to chain is avoiding repeated verification of the same wallets. However, that advantage does not always outweigh the drawbacks:

  1. Short TTL is required for verification because a user's KYC status could change.
  2. Chain writers must pay a gas fee.

As a result, Circle chose, and suggests other implementers also choose, this approach: do not write Verification Results to chain unless there is a substantial efficiency gain, i.e. unless many transactions will be authorized by each on-chain write operation.
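The underlying arithmetic is simple: the one-time cost of the on-chain write must be amortized over enough subsequent transactions to beat repeated off-chain verification. The numbers below are illustrative, not Circle's figures:

```javascript
// Writing a result on-chain pays off only when its amortized cost per
// transaction drops below the per-transaction cost of re-verifying off-chain.
function onChainWritePaysOff(writeGasCost, offChainVerifyCost, expectedTxCount) {
  return writeGasCost / expectedTxCount < offChainVerifyCost
}
```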

Since Circle’s architecture is optimized for delivering verification reliably and simply to dApps, we chose to explain our design choices from the perspective of the verification process, in which valid Verifiable Credentials get communicated to the smart contracts powering dApps.

Up Next

In the following post, we’ll describe options for wallets that don’t understand Decentralized Identity protocols to participate in VC exchanges, while conforming to the verification principles above.

The Verite core team is grateful to Circle’s Verite engineers Tao Tao, Daniel Lim, and Fei Niu for taking the lead on this exciting work!

  1. Crucially, this includes not just the usual redirections and tokens but also an artifact called a "Presentation Definition" object, which describes the credentials that a given verifier will accept. This enables wallets to pre-select appropriate credentials for the end-user, in the case of a full wallet.
  2. An "on-chain DID" (decentralized identifier) scheme such as did:ion can be used here if the wallet controls such an identifier – this is essentially a user-controlled indirection, which allows user-initiated rotations and even migration to a new wallet. In an end-to-end system, however, this indirection is optional, because the verifier has access to the issuance records and can simply use a public key in both places, since they are opaque to any other audience.
  3. The Verite reference implementation rejects resubmission of the same Verification Result and signature to prevent replays.

· 18 min read
Justin Hunter

While Verite can help solve significantly more complex challenges than managing an early access list (i.e., an "allowlist") for NFT projects, it seemed like a fun experiment to see how well Verite could handle moving the costly process off-chain while still maintaining data integrity. Before we dive in, let's talk about what an NFT allowlist is, how it is normally managed, and what the problems are.

What is an Allowlist?

For many NFT projects, rewarding early supporters is crucial to the business model and community health. An allowlist helps give early supporters a distinct experience with unique terms and timing. By putting supporters' wallet addresses on a list that grants them early access to mint NFTs from the new collection, these supporters can avoid what's commonly referred to as "gas wars". Gas wars happen when a popular NFT project drops and people (and bots) spend exorbitant amounts of gas on the Ethereum network to ensure their minting transactions go through before anyone else's, to avoid missing the narrow window of availability in an oversubscribed market. Needless to say, this negatively impacts all participants because it can price people out of the collection and force them to buy the NFTs on the secondary market at a higher premium. On the Ethereum mainnet, gas fees have even spiked higher than the mint prices of the NFTs! That's a hefty surcharge.

The allowlist concept lets people on the list mint for a period of time (normally 24 hours) before the public mint. This helps keep bots out of the mint, guarantees adequate supply to everyone on that list (or at least, guarantees each member on the list access to supply while it is adequate!), and keeps gas prices relatively low. NFT projects generally frame allowlist spots as a "reward" for community participation or various kinds of non-monetary contributions.

How Are Allowlists Normally Managed?

Historically, Ethereum-based NFT allowlists have been managed on-chain. This means a mapping of wallet addresses must be added to on-chain storage, usually via a function on the NFT project's smart contract that populates a persistent mapping accessible to other functions. The transaction to add these wallet addresses can be incredibly expensive, since it is literally buying and occupying additional blockspace with each additional address. The cost has historically ranged from a few hundred dollars to a few thousand dollars, depending on the price of ETH at the time, even if the allowlist is bundled into a single transaction. It climbs much higher if the list is broken out into multiple gas-incurring updates.

Because of the cost, projects are incentivized to set the allowlist once and never update it. Every update costs money. This can lead to errors in the list, inequity, justifiably grouchy late-comers to the community, and other problems. Additionally, allowlists can often become "static requirements": rigid patterns that get over-applied in a one-size-fits-all approach. Services like Premint, which introduce an economy of scale to save on gas among other features, have begun to change this. But further improvements are possible! Projects should have the flexibility to implement dynamic requirements for who gets added to an allowlist and how.

That's where Verifiable Credentials come in.

How To Use Verite and Verifiable Credentials

We're going to be working through an Ethereum ERC-721 NFT contract alongside a mechanism that allows us to issue verifiable credentials to participants that we want to allow on the allowlist. We'll use Hardhat to help us generate the skeleton for our smart contract code and to make it easier to test and deploy.

We'll also use Sign In With Ethereum (SIWE) to handle our user sessions. We're using SIWE because it provides more protections and assurances than the normal "Connect Wallet" flow does.

On the front-end side of the house, we'll build a simple page that allows potential allowlist recipients to request their verifiable credential, and we'll build the actual minting functionality.

Let's get started. You'll need Node.js version 12 or above for this. You'll also need a good text editor and some knowledge of the command line.

From your command line, change into the directory where you keep all your fancy NFT projects. Then, let's clone the example app I built ahead of this tutorial to make our lives easier.

```bash
git clone
```

This is a repository that uses SIWE's base example app and extends it. So what you'll have is a folder for your frontend application, a folder for your backend express server, and a folder for your smart contract-related goodies.

Let's start by looking at the backend server. Open the backend/src/index.js file. There's a lot going on in here, but half of it is related to SIWE, which is very well documented. So, we're going to gloss over those routes and just trust that they work (they do).

Request Allowlist Endpoint

Scroll down in the file until you see the route for requestAllowlist. Now, before we go any further, let me walk through a quick explanation of how this entire flow will work.

  1. Project runs a web app and a server
  2. Web app handles both requesting/issuing verifiable credentials associated with the allowlist and minting
  3. During the minting process, a verifiable credential must be sent back to the project's server.
  4. Backend handles checking to see if a wallet should receive a credential, then generating the credential.
  5. Backend handles verifying that a credential sent as part of the minting process is valid.
  6. If credential is valid, backend signs an EIP712 message with a private key owned by the project.
  7. Signature is returned to the frontend and includes it as part of the mint function on the smart contract.

We'll dive into details on the smart contract in a moment, but that's the basic flow for the front and backends. For those who love a good diagram, we've got you covered:

Full Diagram

Now, if we look at the route called requestAllowlist, we'll see:

```js
if (!req.session.siwe) {
  res.status(401).json({ message: "You have to first sign_in" })
  return
}
const address = req.session.siwe.address
if (!validateAllowlistAccess(address)) {
  res.status(401).json({ message: "You are not eligible for the allowlist" })
  return
}

const { subject } = await getOrCreateDidKey(address)

const issuerDidKey = await getIssuerKey()
const application = await createApplication(issuerDidKey, subject)
const presentation = await getPresentation(issuerDidKey, application)

res.setHeader("Content-Type", "application/json")
res.send(presentation) // the signed presentation (a JWT) goes back to the user
```

We are using the SIWE library to make sure the user is signed in and has a valid session. This also gives us the user's wallet address. Remember, we're trusting that all the SIWE code above this route works (it does).

Next, we are checking to see if the project has determined that wallet to be eligible for the allowlist. This is a very low-tech process. Most projects ask participants to do something in order to get on the list and then they manage a spreadsheet of addresses. In this code example, we have a function called validateAllowlistAccess() that checks a hardcoded array of addresses from a config file:

```js
const config = JSON.parse(fs.readFileSync("../config.json"))

const validateAllowlistAccess = (address) => {
  return config.addressesForAllowlist.includes(address)
}
```

Next, we need to create a DID (decentralized identifier) key for the associated wallet (or we need to look up an existing DID key). In a perfect world, we'd be using a built-in credential wallet integration with the user's Ethereum wallet, but since we don't have that, we're going to manage a delegated key system. The system works like this:

  1. Project checks to see if there is a DID key for the wallet in question in the database (note: the database here is just disk storage, but can be anything you'd like).
  2. If there is a DID key, project uses that key for Verite functions.
  3. If there is no DID key, project generates one and adds the mapping to the database.

That's happening here:

```js
const { subject } = await getOrCreateDidKey(address)
```

And the getOrCreateDidKey() function looks like this:

```js
const getOrCreateDidKey = async (address) => {
  const db = JSON.parse(fs.readFileSync("db.json"))
  let keyInfo = db.find((entry) => entry.address === address)
  if (!keyInfo) {
    const subject = randomDidKey(randomBytes)
    subject.privateKey = toHexString(subject.privateKey)
    subject.publicKey = toHexString(subject.publicKey)
    keyInfo = { address, subject }
    db.push(keyInfo)
    fs.writeFileSync("db.json", JSON.stringify(db))
  }

  return keyInfo
}
```

As you can see, our database is making use of the always fashionable file system. We look up the key or we generate a new one using Verite's randomDidKey function. We then convert the public and private key portion of the payload to hex strings for easier storage.
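The toHexString and fromHexString helpers referenced above are not shown in the post; a minimal Node.js sketch of what they might look like is:

```javascript
// Assumed implementations of the hex helpers referenced in the tutorial
// (the original post does not show them).
function toHexString(byteArray) {
  return Buffer.from(byteArray).toString("hex")
}

function fromHexString(hexString) {
  return new Uint8Array(Buffer.from(hexString, "hex"))
}
```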

Ok, moving on. Next, we grab the issuer key. This is a DID key that is associated with the project.

```js
const issuerDidKey = await getIssuerKey()
```

Much like the function to get the user's DID key, the getIssuerKey function just does a look up in the DB and returns the key. Remember to always protect your keys, kids. Even though these keys are exclusively for signing and issuing credentials, you should protect them as if they could spend your ETH.

```js
const getIssuerKey = async () => {
  let issuer = JSON.parse(fs.readFileSync("issuer.json"))
  if (!issuer.controller) {
    issuer = randomDidKey(randomBytes)
    issuer.privateKey = toHexString(issuer.privateKey)
    issuer.publicKey = toHexString(issuer.publicKey)
  }
  if (!issuer.signingKey) {
    const randomWallet = ethers.Wallet.createRandom()
    const privateKey = randomWallet._signingKey().privateKey
    issuer.signingKey = privateKey
  }
  fs.writeFileSync("issuer.json", JSON.stringify(issuer))

  return issuer
}
```

As you can see, in addition to creating a DID key or fetching a DID key with this function, we are creating a signing key using an ETH wallet. This will be the same key we use to deploy the smart contract and sign a message later. Stand by for disclaimers!

Next, we call a function called createApplication.

```js
const createApplication = async (issuerDidKey, subject) => {
  subject.privateKey = fromHexString(subject.privateKey)
  subject.publicKey = fromHexString(subject.publicKey)
  const manifest = buildKycAmlManifest({ id: issuerDidKey.controller })
  const application = await buildCredentialApplication(subject, manifest)
  return application
}
```

This function includes some helpers to convert the DID key private and public keys back from hex strings to buffers. The function then uses the buildKycAmlManifest function from the Verite library to build a manifest that will be used in the credential application. It should be noted that I'm using the KycAmlManifest but you could create your own manifest that more closely mirrors adding someone to an allowlist. The KycAmlManifest fit closely enough for me, though.

Finally, the manifest is used and passed into the Verite library function buildCredentialApplication and the application is returned.

When the application is built, we now call a function called getPresentation:

```js
const getPresentation = async (issuerDidKey, application) => {
  issuerDidKey.privateKey = fromHexString(issuerDidKey.privateKey)
  issuerDidKey.publicKey = fromHexString(issuerDidKey.publicKey)

  const decodedApplication = await decodeCredentialApplication(application)

  const attestation = {
    type: "KYCAMLAttestation",
    process: "",
    approvalDate: new Date().toISOString()
  }

  const credentialType = "KYCAMLCredential"

  const issuer = buildIssuer(issuerDidKey.subject, issuerDidKey.privateKey)
  // Argument order assumed from the Verite library; the original post truncates this call.
  const presentation = await buildAndSignFulfillment(
    issuer,
    decodedApplication,
    attestation,
    credentialType
  )

  return presentation
}
```

We're using the project's issuer DID key here. We decode the application using Verite's decodeCredentialApplication function. Then, we have to attest to the credential presentation.

Using the issuer private key and public key, we call the Verite library buildIssuer function. With the result, we can then create the verifiable presentation that will ultimately be passed back to the user by calling Verite's buildAndSignFulfillment function.

It is that presentation that is sent back to the user. We'll take a look at the frontend shortly, but just know that the presentation comes in the form of a JWT.

Verify Mint Access Endpoint

Next, we'll take a look at the verifyMintAccess route. This route includes significantly more functionality. Let's dive in!

```js
try {
  const { jwt } = req.body

  if (!req.session || !req.session.siwe) {
    return res.status(403).send("Unauthorized, please sign in")
  }
  const address = req.session.siwe.address

  const decoded = await decodeVerifiablePresentation(jwt)

  const vc = decoded.verifiableCredential[0]

  const decodedVc = await decodeVerifiableCredential(vc.proof.jwt)

  const issuerDidKey = await getIssuerKey()

  const { subject } = await getOrCreateDidKey(address)

  // Arguments reconstructed from the Verite library; the original post truncates these calls.
  const offer = buildKycVerificationOffer(
    crypto.randomUUID(),
    issuerDidKey.subject,
    "https://test.host/verify"
  )
  const submission = await buildPresentationSubmission(
    subject,
    offer.body.presentation_definition,
    decodedVc
  )

  // The verifier will take the submission and verify its authenticity. There is no response
  // from this function, but if it throws, then the credential is invalid.
  try {
    await validateVerificationSubmission(
      submission,
      offer.body.presentation_definition
    )
  } catch (error) {
    return res.status(401).json({ message: "Could not verify credential" })
  }

  let privateKey = ""

  if (!issuerDidKey.signingKey) {
    throw new Error("No signing key found")
  } else {
    privateKey = issuerDidKey.signingKey
  }

  let wallet = new ethers.Wallet(privateKey)

  const domain = {
    name: "AllowList",
    version: "1.0",
    chainId: config.chainId,
    verifyingContract: config.contractAddress
  }
  const types = {
    AllowList: [{ name: "allow", type: "address" }]
  }
  const allowList = {
    allow: address
  }

  const signature = await wallet._signTypedData(domain, types, allowList)

  return res.send(signature)
} catch (error) {
  return res.status(500).send(error.message)
}
```

Once again, the first thing we check is that the user has a valid SIWE session. This route takes a body that includes the verifiable presentation we had sent to the user previously. So, the next step is to call the Verite function decodeVerifiablePresentation to then be able to extract the verifiable credential and call the decodeVerifiableCredential function.

As with our requestAllowlist route, we now need to get the issuer DID key and look up the user's delegated DID key. From there, we can use the issuer key to call the Verite library function buildKycVerificationOffer. We use the results of that call and the user's DID key to call the Verite library function buildPresentationSubmission.

Now, we get on to the good stuff. We're going to make sure a valid credential was sent to us. We call the Verite library function validateVerificationSubmission. This function will throw if the credential is invalid. Otherwise, it does nothing. We're rooting for nothing!

Next, the code might get a little confusing, so I want to spend some time walking through this implementation and highlighting how you'd probably do this differently in production. Once the credential is verified, we need to sign a message with a private key owned by the project. For simplicity, I chose to use the same private key that would deploy the smart contract. This is not secure. Don't do this. Hopefully, this is enough to illustrate how to execute the next few steps, though.

We have the issuer DID key written to our database already (file system). We also included a signing key. We need that signing key to sign the message that will be sent back to the user. We use that key to build an Ethereum wallet that can be used for signing.

```js
let privateKey = ""

if (!issuerDidKey.signingKey) {
  throw new Error("No signing key found")
} else {
  privateKey = issuerDidKey.signingKey
}

let wallet = new ethers.Wallet(privateKey)
```

Finally, we build out the EIP-712 message and sign it. The resulting signature hash is what we send back to the browser so the user can use it in the smart contract's minting function.

That was a lot, but guess what? The frontend and the smart contract should be a lot quicker and easier to follow.


If we back out of the backend folder in our project, we can then switch into the frontend folder. Take a look at frontend/src/index.js. The requestAllowlist function is the one the user will call to hit the project's server's endpoint to see if the user is even allowed to get an allowlist credential. If so, the credential is returned and stored in localstorage:

```js
async function requestAllowlistAccess() {
  try {
    const res = await fetch(`${BACKEND_ADDR}/requestAllowlist`, {
      credentials: "include"
    })
    const message = await res.json()

    if (res.status === 401) {
      // Error-path handling reconstructed; the original post truncates here.
      alert(message.message)
      return
    }

    localStorage.setItem("nft-vc", message)
    alert("Credential received and stored in browser")
  } catch (error) {
    console.error(error)
  }
}
```

Again, this would look a lot nicer if there was a built-in credential wallet integration with Ethereum wallets, but for simplicity, the credential is being stored in localstorage. Safe, safe localstorage.

(narrator: localstorage is not safe).

When it's time to mint during the presale, the user clicks on the mint button and the mintPresale function is called:

```js
async function mintPresale() {
  const jwt = localStorage.getItem("nft-vc")
  if (!jwt) {
    alert("No early access credential found")
    return
  }

  const res = await fetch(`${BACKEND_ADDR}/verifyMintAccess`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ jwt }),
    credentials: "include"
  })

  if (res.status === 401 || res.status === 403) {
    alert("You're not authorized to mint or not signed in with the right wallet")
    return
  }

  const sig = await res.text()

  const contract = new ethers.Contract(address, json(), signer)
  const allowList = {
    allow: address
  }
  let overrides = {
    value: ethers.utils.parseEther((0.06).toString())
  }
  // Argument order assumed; the original post truncates this call.
  const mint = await contract.mintAllowList(1, allowList, sig, overrides)
  alert("Minted successfully")
}
```

This function grabs the credential from localstorage and sends it along to the project's backend server. Assuming the signature from the project is returned, the user can now mint. That signature is sent to the smart contract along with how many tokens should be minted and the amount of ETH necessary to mint. Note the allowList object we send as well; it helps the smart contract verify the signature. Simple!

But how does that work with the smart contract exactly?

Smart Contract

If you open up the contract folder, you'll see a sub-folder called contracts. In there, you'll see the smart contract we're using in this example, called Base_ERC721.sol.

This is a pretty standard NFT minting contract. It's not a full implementation. There would be project-specific functions and requirements to make it complete, but it highlights the allowlist minting functionality.

The first thing to note is we're using the EIP-712 standard via a contract imported from OpenZeppelin. You can see that with this line:

```solidity
import "@openzeppelin/contracts/utils/cryptography/draft-EIP712.sol";
```

Next, we are extending the ERC-721 contract and specifying use of EIP-712 here:

```solidity
contract BASEERC721 is ERC721Enumerable, Ownable, EIP712("AllowList", "1.0") {
```

A little further down in the contract, we create a struct that defines the allowlist data model. It's simple because we are only looking at the wallet address that should be on the allowlist:

```solidity
struct AllowList {
    address allow;
}
```

We're going to focus in now on the mintAllowList function and the _verifySignature function. Our mintAllowList function starts off similar to a normal NFT minting function except it includes the required signature argument and dataToVerify argument. We do a couple of normal checks before we get to a check that verifies the signature itself. This is where the magic happens.

The _verifySignature function is called. It takes in the data model and the signature.

```solidity
function _verifySignature(
    AllowList memory dataToVerify,
    bytes memory signature
) internal view returns (bool) {
    // Digest construction reconstructed; the original post truncates the encoding step.
    bytes32 digest = _hashTypedDataV4(
        keccak256(
            abi.encode(
                keccak256("AllowList(address allow)"),
                dataToVerify.allow
            )
        )
    );

    require(
        keccak256(bytes(signature)) != keccak256(bytes(PREVIOUS_SIGNATURE)),
        "Invalid nonce"
    );
    require(msg.sender == dataToVerify.allow, "Not on allow list");

    address signerAddress = ECDSA.recover(digest, signature);

    require(CONTRACT_OWNER == signerAddress, "Invalid signature");

    return true;
}
```

Using the EIP-712 contract imported through the OpenZeppelin library, we're able to create a digest representing the data that was originally signed. We can then recover the signing address and compare it to the expected address. In our simplified example, we expect the signer to be the same address as the contract deployer, but you can, of course, extend this much further.

To help avoid replay attacks, we also compare the current signature to a variable called PREVIOUS_SIGNATURE. If the signature is the same, we reject the entire call because a signature can only be used once.

Back to our mintAllowList function, if the signature is verified, we allow the minting to happen. When that's complete, we update the PREVIOUS_SIGNATURE variable. This is, as with many things in this demo, a simplified replay attack prevention model. This can and probably should be extended to support your own use cases.
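The single-use guard described above can be modeled off-chain as follows; this is a simplified sketch of the pattern, not production replay protection:

```javascript
// Models the contract's PREVIOUS_SIGNATURE guard: each signature may be used
// exactly once, and only the most recent one is remembered.
let previousSignature = ""

function consumeSignature(signature) {
  if (signature === previousSignature) {
    throw new Error("Invalid nonce") // same revert reason as the contract
  }
  previousSignature = signature
  return true
}
```

Note the weakness this sketch makes obvious: only the most recent signature is remembered, so an older signature could in principle be replayed after a newer one is consumed, which is why the post recommends extending this model.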

Caveats and Conclusion

In a perfect world, we would not be issuing credentials to a delegated subject DID. In our example, we could just as easily have issued to the user's wallet address, but we wanted to highlight the DID functionality as best we could.

It is possible today for the user to manage their own DID and keys, but the tricky part comes, as mentioned earlier in this post, when interacting with crypto wallets. Signing a transaction is not the same as signing a JWT. The keys used are different, the signatures are different, and the flow is different. Until these things become unified and more seamless, this demo helps illustrate how Verite can be used today to enforce allowlist restrictions for an NFT minting project.

Hopefully, this sparks some creativity. Hopefully, it inspires some people to go and build even more creative solutions that leverage verifiable credentials and Verite.

· 18 min read
Justin Hunter


Verifiable Credentials (VCs) allow people and organizations to issue statements about others. These statements are then verifiable even if the original issuer is no longer around. We can see VCs in action in many KYC (Know Your Customer) and AML (Anti-Money Laundering) flows. For the entire flow to work, though, there needs to be a verifier. This can be a centralized service, or it can be managed through a blockchain with verifications happening on-chain. Today, we'll walk through how to build a VC Registry Smart Contract in Solidity.

· 10 min read
Juan Caballero

Having worked for years in various corners of the decentralized identity sphere, I have developed a pet peeve for "technosolutionism": the notion that a sufficiently innovative technology can solve a social problem, or a business problem, without complex and nuanced changes to social and business practice. No less unnerving, perhaps, are the brute-force capitalists that chauvinistically proclaim that with enough momentum, traction, and capital, any Betamax can be relegated to the dustbin of history by a vastly inferior VHS. Both views are based on facts, and yet both are also dangerously blindered. Real progress cannot be attributed to TED-Talkers, to technologies that magically conquer markets, or to market-makers that appear publicly as lone snake-charmers working miracles. Real progress is made by rich, cross-disciplinary teams and heterogeneous coalitions coming together to attack hard problems from every angle at once.

· 4 min read
Justin Hunter

We carry credentials with us everyday. Our drivers license or state ID, our library card, our insurance card, and more are all just a few examples of some of the credentials we carry. We present these credentials when requested, and the credentials are verified by the person or entity we are presenting them to. But as the world shifts to an increasingly digital native format, and as people take more ownership over their identity, how can the issuance of, presentation of, and verification of credentials be managed?