Nicholas Lyons on finding truth in a post-trust era
AI is reshaping how we consume information, and with it what counts as "truth"
I’ve noticed a change in how people share information online. A few years ago, if someone wanted to make a point, they’d drop a link. Now neat, bland, sometimes incorrect paragraphs lifted straight from ChatGPT or Gemini pepper my feeds.
AI has already reshaped online search. A study published just five days ago found that the majority of Google searches end without a click-through to individual websites; the "AI Overview" section seems to suffice.
Using AI to engage with content has its advantages: why spend an hour reading a paper when a chatbot can condense its main points in seconds? In a world where we are bombarded by information from all sides at all times of the day, who wouldn't want a shortcut? But the more I watch this play out, the more I think about its possible implications.
AI often gets things wrong. Earlier this year a study found that around 60% of outputs are incorrect in some way, and I don't know about you, but I've definitely noticed it. A number of my recent Google searches have produced AI summaries that are clearly off. As an experiment, I asked ChatGPT to write an intro for this week's Future Talk, presenting the ideas that surfaced in the conversation. It missed the point entirely.
The problem is that the more we use these tools to summarise things, the less we engage with the underlying data and context, so we can't tell when they are wrong. By sharing the summaries as if they were the truth, we add to the wave of disinformation and misinformation that faces us every time we look at our feeds. This adds another layer to the environment of echo chambers, where algorithms that feed us more of the subjects we like and engage with confirm our own personal "truths."
This brings me to this week's Future Talk with Nicholas Lyons, founder of Arkeytyp and Valu, which is built on Verus, a digital identity and privacy protocol that Nick advises.
I met Nick in Monaco. He was on my panel at the WAIB Summit, where we discussed the rise of on-chain AI agents, why they matter for individual agency in an AI-driven future, and how ready web3 infrastructure is to support their development.
The conversation below is a continuation of that. We discuss what "truth" currently means in the digital world and how eroded trust in traditional institutions has created fertile ground in digital environments for AI misinformation to take root. We also look at possible solutions that could improve our engagement with LLMs going forward.
Key Takeaways:
Post-truth describes the flood of misinformation that makes it impossible to discern fact from fiction, while post-trust marks the breakdown of confidence in institutions as reliable arbiters of reality. In a post-trust era, people are left to construct their own realities through confirmation bias, which social media algorithms reinforce.
LLMs mirror back human biases presented as "truth" in the data they are fed, something Nick calls "GIGO": garbage in, garbage out. As fewer people engage directly with information, relying instead on chatbot outputs, this shift only deepens. Nick's proposed solution is "TITO" – truth in, truth out – where "true" data is fed into AIs, leading to outputs that are closer to the "truth".
Nick argues that the first step is to establish what is “true” by gathering information directly from individuals. Blockchains can provide the foundation for this, using digital identity tools to verify that each person is real while still protecting their privacy. With that protection, people are more likely to share what they believe, without the distortion that comes from surveillance or social pressure.
An important part of TITO is giving people a reason to provide accurate data. That's why Nick introduces VIVO – value in, value out. By attaching value to the information people share, and to its verifiability, individuals are motivated to take part.
This also matters for the adoption of these more decentralised identity systems. Centralised identity systems like Google, Apple and social media accounts thrive because they are easy and make interactions online simpler. To compete, decentralised identity systems must not only give individuals control over their data but also make that data valuable, offering monetary incentives to share it. Nick says that because the most recent data is always the most valuable, this provides ongoing value for the individual in an age of AI.
Food for thought:
If you want to learn more about decentralised approaches to identity, a16z published a video a few weeks ago discussing just that. The panel debated technological and ethical solutions for online identity in the age of AI agents and deepfakes, and how decentralised systems might protect both privacy and the integrity of online spaces.
This paper, published last week on SAGE Journals, looks at how AI and algorithms change who we trust and what we believe in the post-truth era. It shows how automated systems influence what counts as knowledge and shape democratic debate, arguing that we need new ways to rebuild trust and judgment in a digital world.
This paper is a little older but still very relevant, looking at how “post-truth” and information warfare are intertwined in today’s media ecosystem, emphasizing that algorithm-driven content, AI-generated media, and cognitive overload foster environments where facts are replaced by satisfying narratives.
Along similar lines, last year I wrote an article after interviewing Winn Schwartau about information warfare. You can read my article on Digital Frontier now that they have taken down the paywall.
Uncertain realities, Isabelle Castro, available as an NFT here