Dear lawmakers and regulators, this draft industry paper doesn't do the job. I explain why in the hope that you will act.

First published to the generative identity website.


In my first public critique of self-sovereign identity (Generative identity — beyond self-sovereignty, 2019) I began to explore ways in which its founding principles are fatally flawed and made a first attempt at describing the technology's very poor social consequences. I concluded with an invitation to its evangelists to point out errors in my analysis and to explain "how it might not contribute to such tragic outcomes let alone help avoid them."

I got reactions but none that engaged meaningfully with the critique.

I followed up a year later (The dystopia of self-sovereign identity, 2020), then with the only dissenting chapter in the most comprehensive book on the matter (see SSI: our dystopian nightmare, 2021), and most recently and most comprehensively with last year's essay (Human identity: the number one challenge in computer science, 2022).

While a few useful but top-level conversations have ensued, as reported on the generative identity website, no detailed response emerged. Now here we are in 2023 and we can finally review a draft of an industry paper exploring how SSI mitigates and exacerbates human harms.

The draft paper — Overcoming Human Harm Challenges in Digital Identity Ecosystems — has been produced and published by the Trust Over IP Foundation, "an independent project hosted at the Linux Foundation, working with pan-industry support from leading organizations around the world." Its mission is "to provide a robust, common standard and complete architecture for Internet-scale digital trust."

And the draft paper's conclusion? As summarised in the accompanying blog post:

Can SSI harm people in the real world? Our paper says “yes” and why that’s so.

Hoorah! you might cry. Finally! you might say. But no. This paper isn't going to help anyone in the ways it should. I'll explain in overarching terms what I think has happened here and then illuminate a selection of the draft paper's significant shortcomings. But first allow me to explain why SSI, the poster child of decentralized digital identity, deserves special attention amongst the plethora of approaches to digital identity.

To self-quote from my 2022 essay:

[SSI] is fundamentally a mutation carrying computer science’s false premise [of identity] further into community.

That's what decentralization does.

I wholly embrace biomimicry and I am therefore a qualified advocate of decentralization, i.e. an advocate when both the ends one is working towards and the qualities of decentralization as a means are well articulated. I detest abuses of the concentrations of power arising in centralized systems as much as the next person, but I also know that not just any old decentralizing effort will automagically represent an improvement. Things could well go in the other direction.

Nature is complex and has had just a little longer to 'think' about things if you know what I mean. We needn't take quite so long given that many disciplines find themselves voracious students of Nature, but SSI is not a product of this interdisciplinary knowledge.

The whole < Sum of the parts

So what do we have here?

Let's be fair: many of us have tried in our time to contribute to the dreaded group report. Such reports are never easy, if only because the lowest common denominator appears to be the only thing ever to secure consensus. Look no further than this paper's title for evidence of design-by-committee; that's one weird-sounding headline!

But imagine now the greater challenge of participating in an industry group forced to come together to tell the world about the ways in which the thing you've been working on for years, the thing you've told investors will secure a 10x return, the thing on which both your company's future and your own personal reputation and livelihood depend, is bad for the world.

If this weren't so urgently needed in the context of the digital innovation that smashes into the human condition like no other, you'd almost feel sorry for them.

The process resulting in this draft paper has been led by Nicky Hickman. Nicky is deeply knowledgeable in this space and quite frankly she's awesome. (You will see she was one of the dozen most diligent reviewers of my essay last year.) Of the paper's contributors I know personally or have at least interacted with on occasion, I can say without a moment's hesitation that they're all lovely human beings. There's no reason to doubt the others are too, but it doesn't matter if you're awesome or lovely, an impossible task is an impossible task. Objective and useful reports cannot be authored through the lens of largely mono-disciplinary commercial bias, and herding cats can only be as successful as herding cats usually is.

I should note that Nicky invited me to join ToIP and specifically this undertaking. I declined for two reasons. First, doing so required I subscribe to ToIP's goal of realising SSI in the world. Second, I was told that SSI's founding principles would not be up for discussion.

That doesn't mean we don't enjoy collaborating personally. Nicky helped me fix some errors in a draft version of this post. And on that note, thanks go to David Wither too. 🙏🏼

Now then, let's dive in.

Harms? Yeah But ...

Perhaps the defining feature of this report is what is not said.

SSI's founding principles (by which I mean Cameron 2005, Allen 2016, and the Principles of SSI 2020) are wrong, and as I've discussed previously this is by far the greatest and gravest cause of harm. From here on, I will restrict myself to what is said.

So let's start at the top. That title again: Overcoming Human Harm Challenges in Digital Identity Ecosystems. It reads awkwardly because it's desperate. If I were asked to proffer an alternative title having read the paper's contents, I would propose: Harms? Yeah But ... You see, the participants couldn't bring themselves to explore and discuss harms without trying to balance talk of harms with talk of benefits. Every "exacerbation" of harm must be accompanied by a "mitigation", as if the working group were telling the world that the latter should be considered as some sort of harm offset.

The muddiness of this thinking, the shoddiness of the implied ethics, is laid bare by one of the 20th Century's foremost moral, legal and political philosophers, John Rawls:

[Justice] does not allow that the sacrifices imposed on a few are outweighed by the larger sum of advantages enjoyed by many. Therefore in a just society the liberties of equal citizenship are taken as settled; the rights secured by justice are not subject to political bargaining or to the calculus of social interests. (A Theory of Justice, 1971)

Harm is not offset by benefits. Harm is harm. This paper should make harm its sole focus, simply on the basis that every other piece of content produced to date by the SSI community strives to big-up its acclaimed advantages. There's no shortage of it. How exactly is the community to respond as it needs to, as we all need it to, without a crystal-clear explication of the harms?

Trust me

Trust me when I say that trust is a social concept, i.e. occurring between people, and yet the paper's main section kicks off with the phrase "trust technologies". Later we're treated to "trust layer", the layer in question being something slotted into an information technology stack. And let's not forget that the identity the group producing the paper selected for themselves, ToIP, conveys that they believe trust per se is transmissible over Internet Protocol.

Now we might say in everyday parlance “I trust my [tech thing]” but we actually mean we trust those who have designed and delivered it, and perhaps those who have tested and certified it, etc. We are willing to take the risk of making ourselves vulnerable to the product of this combined intelligence and labour. But everyday parlance simply doesn't provide the required analytical rigour. If the paper’s authors wish to engage social scientists or alert technologists to social science insights in some part, this is not a good start. It's a terrible start.

Trust us, this is STS architecture

Many of SSI's flaws emanate from ignorance of social science amongst the community's technologists. I can empathise. I am a chartered engineer dedicated to the application of digital technologies in complex systems, and only a mere student of social science. But I am a keen student of nearly ten years and it's plain to me that technologists striving to design social technology at scale should invite social science expertise to the table. I appreciate this will involve money but lack of funding for interdisciplinarity cannot excuse its omission in this context.

This imperative is perhaps beginning to dawn, for we find an intriguing thread running through this paper, exemplified by this assertion:

"The key feature of SSI is that it is not a technical system architecture but a socio-technical system (STS) architecture.”

This is a misrepresentation, putting it politely. OK, less politely, it's a lie. For this claim to be true, one would see social science contributions in this domain as much as one sees computer science. To paraphrase the last generative identity blog post: of the SSI-related job vacancies you've seen, what's the ratio of techie roles vs. social science roles? I haven't stumbled across one example of the latter. But more rigorously, let's take the most comprehensive book (2021) on the topic as the counterfactual here. The social sciences aren't discussed specifically in any chapter other than the sole dissenting chapter I contributed. STS is referenced on ten different pages in this paper and yet didn't warrant one teeny tiny mention anywhere in the entire book.

We do find the following admission in the paper:

"The work of social scientists, creatives, legal experts, natural scientists, policymakers and laymen, is not typically part of these standards or product development processes.”

And we are told that:

"This leaves many gaps in the STS."

... but this is wholly disingenuous. If the sentence about the exclusivity of standards and product development stands, then clearly the development of SSI hasn't followed a socio-technical systems approach at all. Not so much "gaps", then, as being significantly, and so dangerously, devoid of any meaningful social science input. If social science were in the mix, we wouldn't find sentences such as:

"SSI recognises human identity as a social process because it is relational rather than nodal, every exchange of digital data begins as a P2P interaction with protocols such as DIDComm.”

As I wrote in my essay last summer, SSI is functionally relational and not co-emergently relational. The former is a technical quality; the latter describes a social process.
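To make that distinction concrete for technologist readers, here's a minimal sketch, entirely hypothetical and not drawn from any real DIDComm or SSI library, of what "relational" amounts to inside an SSI stack: a stored pairwise record plus the routing of messages between its two endpoints.

```typescript
// Hypothetical, minimal model of an SSI pairwise relationship.
// All names are illustrative, not taken from any real library.
interface PairwiseConnection {
  myDid: string;    // e.g. "did:example:alice"
  theirDid: string; // the counterparty's DID
  createdAt: Date;  // the relation exists from the moment of key exchange
}

// "Functionally relational": the relation is fully captured by stored
// state and the ability to route messages between two endpoints.
function sendMessage(conn: PairwiseConnection, payload: object): void {
  // A real stack would encrypt and forward; logging stands in for that here.
  console.log(`from ${conn.myDid} to ${conn.theirDid}:`, JSON.stringify(payload));
}
```

Nothing in such a structure can represent a co-emergent social process; mutual interpretation, negotiated meaning, and identities changing through interaction sit entirely outside the data model.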

In moving this draft paper forward to a final version, the sentence above acclaiming an STS architecture should be changed to:

Many of SSI's harms emanate from or are amplified by the fact that the architecture should have been developed as a socio-technical system but wasn't.

And to the list of items under the heading Common Features of Harm I would add:

  • Identity harms may be caused by incompetence and/or unethical behaviour in system design, especially refusal to establish, and have the patience to pursue, interdisciplinary deliberation and knowledge building.

The paper positions the ToIP work as the social bit of the STS caboodle but ToIP came together way down the line. From observation, the ToIP Technology Stack Working Group appears to consider governance-related tech development as all the 'social sciency stuff' needed, and even that narrow focus appears to have unavoidable contradictions at its heart. Such governance requires institutional oversight but has no means to mandate it, and SSI exemplifies a group of decentralizing technical protocols set on eroding the institutions it relies upon for governance.

DIE!

A diagram portraying STS components has the words "Digital Identity Ecosystem" emblazoned at the centre, or DIE for short. (I kid you not. A group so very clearly concerned with how this paper might make them look chose DIE as a defining acronym. But back to STS ...)

In conversation with Phil Windley in 2020 I noted:

The system architecture to which I refer encompasses human behaviour, human community and society, and the natural living world. ... The system architecture to which you refer is technical. Period. There is no way, contrary to your assertion, that the distressing consequences with which I’m concerned ought to follow from this system architecture. There may be some, but my focus is SSI in the real world.

Any ecologist and/or systems thinker reading this will know that my subject here is boundary drawing. When everything in the known universe is connected to everything else, where exactly might we draw boundaries? (See that last link for more on this.)

Here’s (Di Maio 2014) on the topic:

Modeling large scale, complex systems is non trivial, in particular addressing boundary issues between the technical system and the environment: “Considerations of large-scale engineering systems often present a dilemma of where to draw the line between a system and its environment. How are social, political, economic, and institutional issues addressed? How can the techniques of engineering science be connected with a modern understanding of human decision making, organizational behavior, and institutional inertia?” (Laracy 2007). To build joint optimization into a STS from inception, the system boundary must necessarily include people and environment, resulting in what can be defined as an extended boundary. It is not possible to achieve joint optimization if the socio and the technical aspects are modeled separately as different things, because all the modeling, development and engineering activities that follow are influenced and guided by the boundary. Setting an inclusive boundary results in the STS being considered as a whole, rather than emerging from the interaction of different sub-systems.

I couldn't say it better myself, which is why I didn't. So how appropriate is it to adopt the “Digital Identity Ecosystem” as the scope of the STS design?

The paper's glossary defines DIE as:

A set of at least two (autonomous) parties (the members of the ecosystem) whose individual expressions of digital identity are recognised by other members, and whose individual work is of benefit to the set as a whole.

And digital identity as:

a form of digital data that enables a specific entity to be distinguished from all others in a specific context. Identity may apply to any type of entity, including individuals, organisations, and things.

So this draft paper tells us that the SSI community draws a boundary excluding (1) the non-digital, (2) any operation of identity based on alikeness, and (3) any description of identity that doesn't invoke an entity, a thing. This third point means the boundary encompasses a noun-like conceptualization of identity but excludes verb-like conceptualizations. (See my essay last year for an explanation of noun-like and verb-like conceptualizations.) Given that every discipline with an interest in identity — other than law and computer science — conceptualizes identity as verb-like, this is a big red flag.
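For the avoidance of doubt about what "noun-like" means here, this is roughly what the glossary's definition reduces to; a minimal, purely illustrative sketch in which the type and field names are mine, not any SSI specification's:

```typescript
// Hypothetical rendering of the glossary's "digital identity" definition.
// Identity here is a noun: a bundle of data distinguishing one entity
// from all others in a context.
interface DigitalIdentity {
  id: string;                      // distinguishes the entity, e.g. a DID
  context: string;                 // "in a specific context"
  entityType: "individual" | "organisation" | "thing";
  claims: Record<string, string>;  // static attributes asserted of the entity
}
```

A verb-like conceptualization would instead model identity as an ongoing process of identification between parties in a situation, something no schema of static attributes can carry.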

In summary then, the authors of a paper addressing harms to people in the real world present the boundaries of their work as excluding the real world. This simply wouldn't have happened in an STS design process.

Live!

Overall, Part 3 is by far the stand-out section of the paper. I like the list of "harms countermeasures", for example, although I would propose adding two more:

  • Ensure interdisciplinarity in design
  • Approach the technological augmentation of human identity as a living process and its domain an ecology (per the conclusion of Web3's Future Fail is Avoidable).

Clearly these two are related but the imperative of regarding this system as a living system is worth picking out. Heck, it should be made into a neon sign and hung above whichever door ToIP community members most often walk through. (And all the more reason to be amused by the acronym DIE.) Clearly, (Nguyen-Phan 2022) is a start. More please!

Let's discuss life by way of the paper's 101 on differences between western and non-western cultures. I write "101" in the American slang sense of the most basic knowledge of a subject, and so not in a flattering sense. This is a paper supposedly addressing the social harms of rolling out a much-championed technology around the world. This is NOT then, I would argue, the time to write for high schoolers. I sense groupthink in action again.

We're told that "in many non-western cultures the individual is subservient to the group" in contrast to "a western philosophical tradition that prioritises individual agency, choice and freedoms." This is reductionist at best. A harsher critic might wonder if it hints at an exceptionalism intimating that American software companies shouldn't concern themselves too greatly with such differences.

It might typify the extremes — the American myths of the lone cowboy and self-made man, say, versus Confucianism — but one wonders what the continental Europeans in the room were thinking, for example.

This line of writing serves to set up this humdinger: we are told that interrelatedness is "at the heart of many non-western conceptions of sociocentric identity." I was quite sure American culture still emphasised family and at least some remnants of the bowling alley camaraderie, but that's not the most pertinent observation here.

Interrelatedness and interconnectedness are in fact at the heart of the global scientific and (I understand, although it's not an area of expertise I can claim) spiritual consensus on the nature of living processes. Of life. If SSI's harms are all the more toxic in the cultural contexts of this 101 on non-western culture, as the paper notes, then by corollary SSI is bad for human life.

Though the point is already emphasised in a quote I've used above, the paper's authors most obviously write their own coup de grâce with this:

"SSI is one of the only digital identity architectures to be specifically designed from the outset to give equivalence to identity for people, organisations and things.”

I'm still reeling from reading this, written, it seems, without a moment's reflection. Not one nanosecond. Allow me to offer a rewrite for the final version of the paper:

SSI is designed specifically for the thingification of people. It does not distinguish the qualities of the animate from the inanimate, the cognitive from the non-cognitive. If there's anyone out there thinking that we're special or different in any kind of way from the computing resources and things the Internet has enmeshed to date, think again m̶y̶ f̶r̶i̶e̶n̶d̶  you cryptographically triangulated and regimented object.

Look who's alive and well

Talking of living, it appears that the notorious homo economicus is alive and well in the Mitigations sections, animated by the fallacies SSI technologists tell themselves about rational control and consent. Woody Hartzog (2019) explains this well.

The following assertion, for example, is offered with not one iota of supporting evidence. For the avoidance of doubt, I mean of course supporting evidence relating to the real world, i.e. the one with beautiful and messy human beings who are nothing like homo economicus — you know, the ones studied by social scientists.

"... the requirement in SSI for holders’ consent and participation in transactions means that many of the technological harms that come with separation of the technical from the human process, are mitigated by SSI."

The audacity of including this here while celebrating SSI's programmatic / automation possibilities elsewhere! But back to the actors ...

Such (imaginary) rational actors possess, it would seem, the capacity to reflect on the emergent consequences of their actions, on the poor personal and systemic consequences to which they might otherwise have contributed had they not been so wonderfully prescient, for how else would they know when to exert control or withhold consent? This would of course represent nothing short of a superhuman capacity, given that the SSI community itself exhibits no talent in anticipating the emergent consequences of its own schema.

Sustainable Development Goal 16.9 sets out to ensure that everyone on Earth has legal identity. Having had the opportunity to speak with just one person who lacked legal identity for many years, I have only the smallest appreciation of how awful it is to be stuck in a world institutionally organised around legal identity when you don't have it. It's important to note that while vitally important when it is needed, proof of legal identity is rarely required from one month to the next today, and the right to legal identity does not imply a right for everyone to demand its presentation. And yet SSI sets things up for its programmatic integration. Legal identity becomes structurated. It ends up permeating everyday interactions through a series of cryptographic triangulations.

If we consider for a moment that even economists have killed off homo economicus, on what basis would an ethical SSI designer sign off a system we know will have people integrating legal identity left, right, and centre? I refer to a protocol stack that we're already seeing wielded to effect legal-identity-as-a-trivial-service (e.g. "current KYC is ‘single-use’ while KYC’d SSI makes KYC ‘recyclable’", kycDAO).
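To illustrate the structuration worry, here's a hypothetical sketch; all names are invented, and a real stack would demand a cryptographic proof rather than a logged presentation. The point is that once a legal-identity credential is "recyclable", nothing in the protocol discourages every verifier from demanding it.

```typescript
// Hypothetical "recyclable" KYC credential; all names invented.
interface KycCredential {
  holderDid: string;
  issuerDid: string; // e.g. a licensed KYC provider
  legalName: string;
  issuedAt: Date;
}

// In a real stack this would involve a cryptographic proof exchange;
// a log line stands in for the presentation here.
function presentCredential(cred: KycCredential, verifier: string): boolean {
  console.log(`presenting legal identity of ${cred.legalName} to ${verifier}`);
  return true;
}

const kyc: KycCredential = {
  holderDid: "did:example:alice",
  issuerDid: "did:example:kyc-provider",
  legalName: "Alice Example",
  issuedAt: new Date(),
};

// Structuration in miniature: legal identity permeating interactions
// that never previously required it.
["coffee-shop-loyalty", "forum-signup", "apartment-viewing"].forEach((v) =>
  presentCredential(kyc, v)
);
```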

The section on mitigations of legal harms is most obviously authored with disciplinary blinkers. There's nothing on structuration, nothing on the poor consequences of a dedication to corporeality (i.e. one person, one physical body, one identity), and nothing on compatibility with the essential operations of non-bureaucratic conceptualizations of identity. This, however, is not the time to relitigate my critiques of recent years, but simply to point out that they go largely unrecognised by this paper.

There is one sweeping statement in this section that needs challenging:

"Translating SSI into existing centrally controlled identity systems as a technical substitute rather than a full interpretation of what it means to implement a new socio-technical system with re-aligned incentives, open and transparent governance, and respect for human rights can only exacerbate the existing harms of those systems.”

I can’t agree. Switching relatively abruptly from one well-understood governance paradigm to one that is completely untested at scale and that admits to having been developed with mono-disciplinary blinkers is very far from a comforting prospect.

With all that said, this is the best section in this part of the paper, which is less an accolade and more an indictment of the other sections. I'm guessing that's because the legal domain shares some structural and cognitive patterns with information technology, given the long association of the IT industry with the provision of services to government and other bureaucratic organisations.

Dear lawmakers and regulators

Please know that SSI is evidently not the product of a socio-technical systems approach. SSI has not been the subject of interdisciplinary design or development. You should not trust (make yourselves and the rest of us vulnerable to) the claim made in this draft paper that it is.

However, the paper is accurate by my reading in noting:

The more we explore unwanted side effects from digital identity ecosystems, the more questions we have. It may take decades of international, interdisciplinary collaboration to find those answers, but this is a starting point.

As I have written in my critiques to date, if society does not tightly constrain applications of SSI then we will witness pollution of the information ecology of human nature and human culture that will cause unprecedented human suffering. The effort required now to move beyond SSI with diligent interdisciplinarity is nothing compared to the effort required to clean up such pollution and its effects after the fact.

When the community writes about "felt harms", they mean psychological and physical harm. When they conclude that "'do nothing' or 'technology is neutral' defences are no longer acceptable", you will know that such defences have never been acceptable, and appreciate, I hope, that this was and still might be a mindset underpinning SSI.

When you read in the very same paper that "SSI is better adapted to represent human identity" and also that human identity "is dynamic, fluid and made up of complex processes; the mere idea of expressing such a nuanced and subtle thing as human identity as a series of data items causes many harms, both intentional and otherwise", your sense-making faculties will determine I hope that the SSI community makes too little sense to itself let alone the rest of us.

My first public critique explicitly invited constructive response and engagement that never came — with the exception of the invitation to join ToIP and subscribe to its view of the world! I should point out the kind invitation to contribute a dissenting chapter to the SSI book, but also note that its inclusion was very much touch-and-go, that it did not make the print edition, and that no-one other than one of the book's lovely editors had any desire to talk about it.

I write now for you, dear lawmakers and regulators, in the hope that you will rein this in tightly, and sooner rather than later. I write for those who wish to pursue generative identity — digitally mediated and augmented human identity approached primarily for psychological, sociological, and ecological health.

Please let me know how I might help.