Court of Humanity Whitepaper
Note: This is a draft work in progress. The content is not final.
Abstract
The age of the internet and AI has made it difficult to tell which online identities are real humans. Furthermore, many systems are vulnerable to the so-called sybil attack, in which one person creates multiple accounts to gain an unfair advantage. We propose a system that uses a combination of biometrics and economically incentivized voting to certify that a user is a real human with only one account. We establish a cost of forgery, allowing application developers to trade off security against convenience for the user. We explore the applications of this system from social media to online voting.
Introduction
The Sybil Problem
As more of our world moves online, the ability to verify unique human identities becomes increasingly critical. Many systems - from social networks to voting platforms - rely on the assumption that each account represents one real person. However, without reliable ways to verify this, systems remain vulnerable to sybil attacks where malicious actors create multiple fake accounts.
Previous Work
Proof of Unique Biometrics (PoUB)
Biometric systems like facial recognition (e.g. FaceNet [1]) or iris scans (e.g. Worldcoin [2]) can extract unique feature vectors from human biometric data. While these provide a starting point for uniqueness verification, biometric data alone is insufficient - AI-generated faces or fake hardware could trick purely algorithmic systems.
Decentralized Arbitration
Previous work, notably Kleros [3], has shown that with appropriate economic incentives, decentralized blockchain-based Schelling games can reliably report facts about the world on-chain, enabling arbitration of certain types of disputes. These systems use token staking and reward/penalty mechanisms to incentivize honest behavior. While we build on Kleros’s core mechanism design patterns, we tailor our system specifically for human identity verification by integrating biometric uniqueness checks with incentivized human verification.
System Overview
Facial Biometrics
With the widespread availability of smartphone cameras, facial biometrics are the most accessible biometrics: anyone with a smartphone could register their biometrics with a new account or become a verifier. However, facial recognition is not precise enough to avoid occasional false positive matches between people with similar facial features, so applications must tolerate some; they can compensate by requiring higher identity bonds. We believe facial biometrics are accurate enough for most applications. One privacy downside to facial biometrics is that someone can easily scan you without your consent (via pictures, CCTV, etc.). In a biometric uniqueness service, this could allow a third party to tell (probabilistically) whether you signed up for an account.
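To make the false positive tradeoff concrete, here is a minimal sketch of how two face embeddings might be compared against a distance threshold. The threshold value and the assumption of unit-normalized, FaceNet-style embeddings are illustrative, not parameters of any specific system.

```python
import numpy as np

# Hypothetical threshold: raising it tolerates more look-alike false
# positives; lowering it risks rejecting genuine re-scans of the same face.
MATCH_THRESHOLD = 0.6

def is_match(a: np.ndarray, b: np.ndarray) -> bool:
    """Compare two unit-normalized face embeddings by Euclidean distance."""
    return float(np.linalg.norm(a - b)) < MATCH_THRESHOLD
```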
Other Biometrics
Iris scans, fingerprint scans, voice recognition, and even neural scans are all possible future biometrics that could be used. If applications have demand for more sophisticated biometrics, courts could be created with verifiers equipped with the necessary hardware to register users and verify the biometrics in disputes.
Verified Biometrics
Whether a real person submitted a particular biometric scan is a fact about the world that can be put through an incentivized arbitration system similar to Kleros, creating an economic cost for submitting fake data. We propose a Kleros-like system that employs “jurors” (hereafter called verifiers) to verify the submitted biometrics of a user.
Users can upload their biometric data to a decentralized PoUB system and prove to applications that they are unique (or have fewer than a certain number of matches) without revealing the raw biometric data to the apps. Users can generate unique IDs for each application, so the user’s biometric data cannot be used to identify them across applications. A Kleros-like arbitration system can be used to verify these biometrics came from the user, should a dispute arise.
Identity Bonds
Applications require users to post small permanent or temporary “identity bonds” to use the app. For example, a social media app could require a $5 bond to be posted for an account to be active. Users would post a bond in a court supported by the application.
```mermaid
flowchart TD
    User[User] -->|Posts bond| Bond[Identity Bond]
    Bond -->|Required by| App[Application]
    Bond -->|Managed by| Court[Court]
    Court -->|Verifies| User
    App -->|Uses| Court
    User -->|Uses| App
```
The bond is essentially a commitment to participate in the court’s dispute resolution process, which may involve attending a video call or in-person meeting to provide biometrics. Apps should set the minimum bond size high enough to discourage fake accounts. Each time the user attempts to perform an action in the app, the app checks that the value of the bond is at least the app’s minimum B, and that the user’s biometrics are still unique (via the PoUB service). The user can lose the bond if someone suspects the user’s biometrics are fake, raises a dispute, and the user fails to prove their biometrics are real, per the court rules.
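A minimal sketch of that per-action check; `app.bond_of` and `app.poub.match_count` are hypothetical interfaces for the bond registry and PoUB service, not APIs specified in this paper:

```python
def can_perform_action(user, app, min_bond_b: float, sybil_limit: int) -> bool:
    """Gate each app action on the user's identity bond and uniqueness.

    `app.bond_of` and `app.poub` are hypothetical interfaces standing in
    for the bond registry and the PoUB service."""
    bond = app.bond_of(user)
    if bond.value < min_bond_b:    # bond withdrawn, slashed, or too small
        return False
    if bond.has_open_dispute:      # wait for the court to resolve it
        return False
    # PoUB reports how many registered biometric vectors match this user's,
    # scoped to users of this app.
    return app.poub.match_count(user.biometric_id, scope=app.id) <= sybil_limit
```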
For some high-stakes actions, such as voting on proposals, apps could require higher minimum bond sizes. Users could increase their bond for the duration of the voting period, then withdraw the extra after.
Dispute-Dependent Actions
Applications can specify certain actions (such as voting on a proposal) as dependent on disputes against the users who took them. These actions should have a challenge period after the action is taken, during which observers can dispute suspicious accounts. Actions remain pending until all raised disputes are resolved (or until too few unresolved disputes remain to change the outcome). Applications will need to consider how to handle delays caused by disputes, such as whether to allow proposals in parallel or halt new proposals until the pending one is settled.
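One way an application might track this is sketched below, assuming a simple-majority rule where successful disputes nullify yes votes (disputes against no votes are omitted for brevity):

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A proposal whose outcome stays pending through the challenge period."""
    yes_votes: int
    no_votes: int
    disputed_yes: int        # yes votes with an unresolved identity dispute
    challenge_deadline: int  # timestamp ending the challenge period

    def outcome(self, now: int):
        """Return 'pass' or 'fail' once final, or None while pending."""
        if now < self.challenge_deadline:
            return None
        if self.yes_votes <= self.no_votes:
            return "fail"
        # Passing, but open disputes could still nullify enough yes votes
        # to flip the result; stay pending until they resolve.
        if self.yes_votes - self.disputed_yes <= self.no_votes:
            return None
        return "pass"
```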
Democracy Example
Say the social media app has a DAO treasury with current members and democratic voting. A proposal is put forth to spend 1% of the DAO treasury on a project, so the value at risk of a sybil attack is 1% of the treasury. Assume the pass threshold is 50% (simple majority) and the chance the attack is detected and disputed is 90%. To discourage sybil attacks, the minimum bond should be set such that the attacker’s expected cost exceeds their expected gain:

B_min = (1 − p) × V / (N × t)

where V is the value at risk, N the number of voters, t the pass threshold, and p the probability the attack is detected and disputed (forfeiting the sybil bonds).
To spend $1 million in a DAO with 1000 members, the minimum vote bond would be $200. However, if turnout is low, an attacker needs fewer sybil votes (and thus less total bond) to reach the threshold, so voter apathy could let an attacker put down a small amount of bonds to steal a large amount of money. Thus, a DAO could set N per proposal based on actual participation: the minimum bond would start high but decrease as more voters participated. Voters would set a min and max bond they are willing to put down, and their vote would only be included if enough voters participated to lower the minimum bond below their max, as sketched below.
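A sketch of that participation-dependent bond floor, using the formula above (function and parameter names are illustrative):

```python
def min_vote_bond(value_at_risk: float, pass_threshold: float,
                  detect_prob: float, num_voters: int) -> float:
    """Per-vote bond floor: the attacker's expected gain must not exceed
    the total bond they put at risk to reach the pass threshold."""
    sybils_needed = num_voters * pass_threshold
    expected_gain = (1 - detect_prob) * value_at_risk
    return expected_gain / sybils_needed

# The floor drops as turnout grows; a vote counts only once participation
# pushes the floor below that voter's chosen max bond.
for turnout in (100, 500, 1000):
    print(turnout, min_vote_bond(1_000_000, 0.5, 0.9, turnout))
# 100 -> 2000.0, 500 -> 400.0, 1000 -> 200.0
```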
After the proposal voting period ends, there should be a challenge period to allow disputes against voters who may be fake. If those disputes are successful, those votes are nullified.
Case Flow
```mermaid
flowchart TD
    Dispute[Disputer raises case] -->|Posts dispute bond| Court[Court]
    Court -->|Notifies| User[Disputed User]
    User -->|Attends verification| Verify[Verification Session]
    Verify -->|Submit votes| Vote[Verifier Votes]
    Vote -->|Majority for user| UserWin[User wins dispute bond]
    Vote -->|Majority against user| UserLose[User loses bond]
    UserLose -->|May affect| Actions[Pending Actions]
    Vote -->|Can be appealed| Appeal[Appeal Round]
    Appeal -->|Higher dispute bond required| Court
```
If someone suspects an account is fake, for example if the account votes for a malicious proposal, they can fund a dispute against the user, which must raise an amount equal to the user’s bonds in this app plus court fees. The disputed user proves ownership of the account in question with a cryptographically signed message, and verifiers are asked to vote on whether the user’s submitted biometric data actually came from the real person who owns the account. This will typically require the verifiers to meet in person with the user and take a biometric scan with their own hardware or camera, which is then compared to the user’s submitted data. This can be done ad hoc or in public sessions organized by the court DAO. Disputes can be appealed several times, with each appeal requiring a higher bond; the final round should be a public session anyone can attend, to determine fork choice.
If the data and ownership match, the verifier votes in favor of verifying the biometrics. If the case resolves against the disputed user, the bond is forfeited to the disputers, and pending actions in the application may be undone. If it resolves in the user’s favor, the dispute bond is awarded to the disputed user.
Courts
Courts can be limited to certain geographies, or to timezones and languages for remote-only courts. Rather than one court for the world, we envision a marketplace of courts with different locations, rules, and parameters, competing for bonds and stake. Half of the total value of the staked tokens in a court is approximately the cost of corrupting the court (via bribing verifiers or buying tokens), i.e. a 51% attack. Courts should not let the total bonded amount get too high, as this could allow an attacker to profit more than the cost of corruption.
During a court corruption attack, the attacker would simultaneously create fake accounts to sybil attack applications and dispute every user that wasn’t their own. Assuming applications set bond amounts equivalent to attacker profit, courts should not process disputes totaling more than 1/4 of the court stake value at a time: the attacker gains up to 1/4 of the stake value from winning the disputed bonds and roughly another 1/4 from attacking applications, keeping total gain at or below the cost of corruption.
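The arithmetic behind that cap, as a sketch:

```python
def max_open_dispute_value(staked_value: float) -> float:
    """Cap on simultaneously disputed bond value (the 1/4 rule).

    Corrupting the court costs roughly half the staked value. A corrupt
    court can win the open disputed bonds and, if apps set bonds equal
    to attacker profit, gain about the same again from the sybil-attacked
    apps, so capping open disputes at 1/4 of stake keeps total attacker
    gain at or below the cost of corruption."""
    cost_of_corruption = staked_value / 2
    return cost_of_corruption / 2  # = staked_value / 4
```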
If a dispute reaches the final round, applications should prepare to choose a fork, should the dispute resolve in favor of the attacker. Application smart contracts could queue disputes and send them to the court in batches, both to limit the damage of a court corruption attack and to redirect pending disputes to the new court in the event of a fork choice vote. A democratic DAO could, for example, have the final round trigger a fork choice vote that excludes recent accounts.
Applications should decide how to handle forks in a way that credibly limits the damage of a court corruption attack. Ultimately the best defense against a court corruption attack is for applications to ensure their value at risk does not exceed the cost of corruption. This includes choosing courts with a large marketcap and accurately setting bond amounts with respect to the value at risk.
Remote Verification
While in-person meetups with verifiers are the most secure way to establish whether a real human submitted some biometrics, it is also currently possible to determine with reasonable accuracy whether a user is a real person by holding a live video call. Verifiers could ask users to perform certain actions in the video call, such as smiling, waving their hands, or making a specific gesture, until the verifiers are confident the video is not AI generated.
Verifiers could either use the video data to extract the facial biometrics, or ask the user for a high quality photo and ensure the photo looks like the same person on the video call. This would allow an easier user experience, but will likely be insecure in the long run, as AI generated live video will become highly sophisticated and difficult to distinguish from real humans. Therefore, the system can specify the type of verification (virtual or in person) that was certified, and let the applications decide whether to accept.
While a remote-only court system could work initially, the economic value of the court token risks becoming instantly worthless as soon as AI-generated video becomes sophisticated enough: applications would no longer trust certificates from the court, and users would no longer seek out verifications, eliminating court revenue. This inevitable obsolescence would discourage investment in the system, making it difficult to garner significant stake. Therefore, remote verification could serve either as an initial phase or as part of a larger in-person court system, ensuring the court token has lasting value rather than being a “hot potato.”
In-Person Verification
Courts could instruct verifiers to only accept in-person biometrics taken with their own hardware. Courts would specify regions or meeting places where verifiers can meet with the user to verify their biometrics during a dispute. There could be geographical hierarchies where disputes start in small city subcourts but can escalate to larger regional courts meeting in a nearby large city with more verifiers. There is a tradeoff between pooling capital in large hierarchical courts and practicality: larger courts may require significant travel for users and verifiers, imposing a cost that may effectively prevent some users from participating. Most users would not want to commit to potentially traveling a long distance to verify their biometrics if disputed. However, larger courts can scale to secure more value for applications. We expect this tradeoff space to be explored in the market.
PoUB System
The proof-of-unique-biometrics systems compatible with the decentralized nature of Court of Humanity must themselves be private and decentralized to avoid the risk of privileged central parties tampering with, censoring, or reusing biometric data. An ideal private biometrics system would use a private computation technology such as multi-party computation (MPC) to ensure no one has access to the data, and that the data is not tampered with.
A decentralized private biometrics system, like a vector database implemented on a decentralized MPC, would allow users to upload biometric data and give apps permission to call a method to check whether the user’s data is unique. The PoUB system would integrate with blockchains and allow smart contracts to send/receive data.
The MPC would only report the number of matches for given biometric data, but not which users are in the matching set. This would prevent unwanted authentications, such as video surveillance attempting to find the account IDs of passersby. If a user wishes to prove ownership of some biometric data, they can generate a one-time-use auth code which is provided to a third party, who will scan the user’s biometrics with their own hardware and submit the scan to the MPC. With a valid auth code, the MPC will report not only the number of matches, but also whether the user is one of the matches. This would prove to a third party that the biometric data submitted by the user is legitimately from the user, rather than faked.
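The interface such an MPC might expose is sketched below; the class, method names, and the shape of `MatchReport` are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatchReport:
    match_count: int               # how many registered vectors match
    caller_in_set: Optional[bool]  # revealed only with a valid auth code

class PoUBClient:
    """Hypothetical client for the decentralized PoUB MPC."""

    def check_uniqueness(self, vector, app_id: str) -> MatchReport:
        """App-scoped query: the MPC reports only the match count, never
        which users matched, blocking surveillance-style lookups."""
        raise NotImplementedError

    def verify_ownership(self, vector, auth_code: str) -> MatchReport:
        """A verifier submits their own scan of the user plus the user's
        one-time auth code; the MPC additionally reports whether the
        code's owner is in the matching set."""
        raise NotImplementedError
```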
MPC technology is still in its infancy, but we believe it will be possible in the near future to scale to private biometrics. We are currently evaluating the Nillion protocol as a possible platform.
Court Token
A token is created, with some initial distribution that provides credible proof of decentralization (such as trusted public figures). Some tokens could be locked in a Uniswap-like liquidity pool, to give credence to the token’s price. Generally the initial distribution should have convincing evidence that no one party has a majority, and especially no one has > 2/3.
Court DAO
The token can be associated with a DAO, which can be used to vote on the parameters of the court system and organize sessions. The DAO should have limited powers to alter case outcomes and should be monitored by end user applications for events like time-delayed code-upgrades.
Court Stake
Token holders can stake the court token into the court system, which locks their tokens until they unstake, making the user a “verifier.” Verifiers can only unstake tokens that are not assigned to an open dispute. The more stake a user has, the more likely they are to be chosen as a verifier for a dispute.
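A sketch of stake-weighted drawing; a seeded generator stands in for whatever on-chain randomness source a real court would use:

```python
import random

def draw_verifiers(stakes: dict, k: int, seed: int) -> list:
    """Draw k verifier slots, each weighted by stake. Drawing with
    replacement means a large staker can fill several slots (and earn
    or lose proportionally more)."""
    rng = random.Random(seed)  # stand-in for an on-chain random beacon
    verifiers = list(stakes)
    weights = [stakes[v] for v in verifiers]
    return rng.choices(verifiers, weights=weights, k=k)

# Example: Alice has 3x Bob's stake, so ~3/4 of slots go to her on average.
print(draw_verifiers({"alice": 300, "bob": 100}, k=4, seed=42))
```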
Court Examples
Humanity Court of The Northeast
This court performs remote verification of facial biometrics in the US Eastern time zone and in-person verifications in major NE US cities. Verifiers can stake into the remote-only and/or in-person court. The remote-only court has one level that handles all cases. The in-person court has sub-courts for major NE cities, while the top-level court sessions happen in NYC.
```mermaid
graph TD
    Verifier((Verifier)) -->|Stake| Remote[Remote Court]
    Verifier -->|Stake| DC[DC Court]
    DC --> NYC[NYC Court]
    Boston[Boston Court] --> NYC
    Philly[Philadelphia Court] --> NYC
```
Remote Case Flow
- User posts bond for a biometric ID in remote-only court.
- One or more parties fund a dispute against the bond.
- A small amount of stake (= one to a few verifiers) is assigned to the dispute.
- Verifiers and user schedule a video call before the 7-day reporting deadline.
- User provides high resolution photos or video for verifiers to extract facial biometrics.
- User provides auth codes to enable the MPC to report whether they are a match.
- Verifiers send user’s biometric ID, extracted biometric vector, and auth code to MPC, which reports the number of matches and whether the biometric ID is one of the matches.
- If the biometric ID is in the matching set, the user has proven that the biometric data at least comes from realistic-looking photos/videos, but the verifiers must still determine the following:
- whether the person on the video call is the same person as in the provided photos/videos
- whether the photos/videos and the person on the call are not edited or faked
- Courts should establish some basic procedures for testing this, acknowledging the limitations of detecting sophisticated fakes. Verifiers carry out these tests on the video call until they are satisfied.
- Verifiers consider the evidence and vote accordingly.
- The case can be appealed several times until the final round with a max of 500 verifiers.
- One or more public video call sessions take place, open to anyone who wants to decide whether to support a minority fork.
In-Person Case Flow
- User posts bond for a biometric ID in Philadelphia sub-court.
- One or more parties fund a dispute against the bond.
- A small amount of stake from the Philadelphia sub-court is assigned to the dispute.
- User and verifiers are assigned a meeting time at the downtown court office.
- Verifiers use their own phones to take facial biometrics of the user, and send to MPC.
- Verifiers perform some basic tests to ensure the user is not wearing a mask or similar removable face-altering device.
- If the MPC reported a match and the user passes the physical tests, then the verifiers vote to verify the biometric ID.
- Someone appeals the case to the NYC top-level court.
- A larger session is scheduled at the NYC court office.
- The user and any drawn verifiers who are not local travel to NYC.
- 100 verifiers meticulously check the user’s biometrics and vote accordingly.
- The session is open to the public and should allow anyone to check the user’s biometrics themselves.
Sybil Limit
Apps can get the number of matches, aka similar feature vectors, for a user’s biometrics from the PoUB service. A user must approve an app’s permission to call this method, and the app will only see matches for other users that are on that app. Thus, if a user creates two accounts, each with the same facial scan, then an app will see two matches when the user signs up the second account.
However, someone may simply happen to have a similar-looking face, in which case apps would also see matches. Apps should therefore set a “sybil limit” that tolerates some false positive matches between similar-looking people. For high security systems this could be 1, though apps would want more accurate biometrics like iris scans for such a low limit; for lower security systems like social media, apps could set higher sybil limits. When a user has “reserve” in their sybil limit, apps could allow “vouching” for other users, who would get the benefit of the user’s bond without having to post their own. Since nothing stops people from creating multiple accounts with the same biometrics up to the sybil limit anyway, apps might as well let users “use up” the limit with vouches for faster onboarding.
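A sketch of how an app might combine the sybil limit and vouching rules (all names illustrative):

```python
def can_activate(match_count: int, vouches_given: int, sybil_limit: int) -> bool:
    """A bonded account stays active while its biometric matches plus
    the vouches it has handed out fit within the app's sybil limit.
    E.g. with sybil_limit=3 and two look-alike matches, one vouch is left."""
    return match_count + vouches_given <= sybil_limit
```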
Verifier bonds
Small local courts could require verifiers to post a bond in a larger court in order to stake. This would increase market confidence that the court token is not majority-controlled by a colluding group, provided the minimum number of colluding verifiers needed for a stake majority is high enough. Dispute resolutions (i.e., verifier votes) could themselves be dispute-dependent actions, pending any disputes of verifier identity in the larger court.
For example, a DAO could face a sybil attacker attempting to pass a malicious proposal. Legitimate users notice and dispute the yes-voting accounts in a small local court. Because the court’s marketcap is small, the attacker has bought enough tokens to hold a majority of the court stake (a court corruption attack), spreading it across several sybil accounts to hide the attack. However, the small court requires verifiers to post a bond in a large court (e.g., the Court of NYC). Users notice the first few rounds of disputes in the small court resolving in favor of the attacker, suggesting the court is corrupted. Users then dispute the suspected sybil verifiers in the Court of NYC, which pauses the small court’s disputes until the large NYC court rules on the suspected sybil verifiers. The attacker fails to show up for the disputed identities in the NYC court sessions, so their verifier accounts in the small court are disabled. Without the attacker’s verifiers, legitimate verifiers resolve the small court disputes against the attacker, nullifying their votes on the malicious DAO proposal.
Hierarchical Biometrics
While iris scans are more specific than facial scans, we favor more widely available biometric techniques that do not require specialized hardware: we want to lower the barrier to entry for verifiers while avoiding more invasive scans with a poor public image. However, facial scans are likely to produce more false positive matches, so we can use a hierarchy of biometric techniques to determine whether matching facial scans are from the same person. For example, if two users’ facial scans match, they can submit unique iris scans to distinguish them. If a user whose facial scan matches another’s submits a unique iris scan, their possible sybil count is reduced by one, while it remains two for the other person with only a facial scan uploaded. This allows applications to get the benefit of a low false positive rate while still letting most people use facial scans, as sketched below.
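A sketch of how the possible sybil count might combine the two layers; the counting convention (a facial match set that includes the user themselves) follows the example above:

```python
def possible_sybil_count(face_matches: int, has_unique_iris: bool) -> int:
    """Upper bound on accounts this person could control.

    `face_matches` counts the user's whole facial match set (including
    themselves). A distinct iris scan separates the user from one
    face-alike, so their count drops by one, while a face-only user in
    the same cluster keeps the full count."""
    return max(face_matches - 1, 1) if has_unique_iris else face_matches

# Two users share a face match (count 2 each); one submits a unique iris.
print(possible_sybil_count(2, True))   # 1: provably one account
print(possible_sybil_count(2, False))  # 2: could still be a sybil pair
```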
Context and Monitoring
Since attacks won’t be thwarted unless people suspect accounts are sybils, applications will want to monitor user activity and even publish dashboards that allow crowdsourcing of sybil detection. The more context applications can provide around system activity, the more likely attacks will be spotted and disputes created. Applications will want to provide enough public monitoring tools and set bonds high enough to attract a community of dispute-creators watching out for sybils.
Fees
Courts have flexibility in how they set fees. They could charge an interest rate on bonds, or dispute fees, or both. We expect a mix of bond fees and dispute fees to be common. Arguably, the court stake is securing value, even if no cases are happening. Thus, continuous fees on bonds may make sense.
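As a concrete illustration of the interest-style option, here is a sketch with a purely hypothetical rate:

```python
def accrued_bond_fee(bond_value: float, annual_rate: float, days: float) -> float:
    """Interest-style fee on an identity bond: the court's stake secures
    bonded value even when no disputes occur, so bonds pay an ongoing
    rate on top of any per-dispute fees."""
    return bond_value * annual_rate * days / 365

# A $5 social-media bond at a hypothetical 2%/year rate: ~$0.10 per year.
print(accrued_bond_fee(5.0, 0.02, 365))  # 0.1
```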
Security Analysis
Griefing Attacks
Users can grief verifiers by disputing themselves, failing to show up (forcing verifiers to vote “no”), appealing to a higher court, and finally showing up for that round, claiming the first-round verifiers voted incorrectly. The bond and stake math needs careful design to discourage this, while still penalizing verifiers who vote yes on a sybil and are caught in a later round.
Minority Stake Attacks
If an attacker controls a substantial minority of stake, they may be selected as the initial-round verifier(s). If no one audits the dispute, the attacker may sneak through some sybil accounts from time to time, improving the odds that an attack on an application succeeds. Again, context is key: if the dispute was created because the accounts were seen doing suspicious activity in an app, the disputers will appeal the case to a higher court.
Majority Stake Attacks
If an attacker controls a majority of stake, they can simply vote in their own favor all the way to the final dispute, winning the dispute bonds and profiting from the application they sybil attacked. However, this would cause a court fork, rendering the attacker’s stake worthless. This is the “cost of corruption,” roughly half of the marketcap of the staked tokens. It is important for courts to cover as large an area as reasonably possible, so the token marketcap is as high as possible and can secure high value applications.
Conclusion
By combining biometric uniqueness with Schelling games, we believe a system of courts and sybil-resistant applications can arise, bringing with it the next generation of decentralized identity.
References

1. Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://arxiv.org/abs/1503.03832
2. Worldcoin. (2023). The Worldcoin Protocol: Technical Documentation. https://worldcoin.org/technical-documentation
3. Lesaege, C., & Ast, F. (2019). Kleros: Short Paper v1.0.7. https://kleros.io/whitepaper.pdf
This article represents my personal opinions and research. Nothing in this article should be taken as professional, financial, legal, or investment advice.