Are we our own worst enemy?
Originally posted to the generative identity website.
No two people can share an exact understanding of anything deep and meaningful simply because we each have different contexts. Conversation relies upon and can never wholly substitute for context. Nevertheless, we can work to grow a shared understanding through conversation, and the relationship between conversationalists evolves in the process.
The relationship is immanent in such informational exchange[1].
On one level, the opening paragraph here pertains to this being a blog post about conversations I’ve valued in recent months. But there’s another level given that ‘digital identity’ is our subject. Identity, in what you might call the natural and non-bureaucratic sense, is reciprocally defining and co-constitutive with relationships and information exchange[2].
Identities are immanent in the relationships immanent in information exchange.
Originally published to the Ethereum World blog.
In light of the Trump ban, far-right hate speech, and the plainly weird QAnon conspiracy theories, the world's attention is increasingly focused on moderation of and by social media platforms.
Our work at AKASHA is founded on the belief that humans are not problems waiting to be solved, but potential waiting to unfold. We are dedicated to that unfolding, and so to enabling, nurturing, exploring, learning, discussing, self-organizing, creating, and regenerating. This post explores our thinking and doing when it comes to moderating.
Moderating processes are fascinating and essential. They must encourage and accommodate the complexity of community, and their design can contribute to phenomenal success or dismal failure. And regardless, we're never going to go straight from zero to hero here. We need to work this up together.
We're going to start by defining some common terms and dispelling some common myths. Then we'll explore some key design considerations and sketch out the feedback mechanisms involved, before presenting the moderating goals as we see them right now. Any and all comments and feedback are most welcome.