Digital Trust in 2035: How Can We Incentivise Constructive Online Discourse in the GenAI Era?

Image by congerdesign from Pixabay
Today, I had the honour of participating in the round table “Digital Trust and Democracy in the Era of Generative AI,” a Deep Dive session within The Digital Trust Convention organized by Sebastian Hallensleben.
The event brought together experts from across academia, policy and industry to imagine what a resilient, trust-friendly digital space could look like in 2035, and how we might get there.
We explored three guiding questions:
- What values, beyond trust, resilience, and fairness, should define a desirable digital future?
- What structures and mechanisms could incentivise constructive digital discourse at scale — and how can this re-energise democracy?
- What lessons can we take from past successes in global cooperation to guide our path forward?
I was invited to contribute to the roundtable with an impulse statement on one of the guiding questions, and I chose the second, as it resonates most with me personally – I’m naturally drawn to problem-solving 🙂
Below is the impulse statement I shared during the session, guided by the belief that digital spaces can (and must) be designed to serve people first.
Question: What structures and mechanisms can incentivise constructive digital discourse on a broad scale in 2035, and how can this re-energise democracy?
Let me start with a little fantasy, or perhaps more of a warning. A few days ago, as I am sure some of you saw, Elon Musk shared his vision for 2035: he expects a world where AI is smarter than the smartest human, robots are everywhere, cars drive themselves, and goods and services are almost free. He believes that in this world, the biggest challenge will be finding meaning in life, because most traditional work will be gone. And Musk is not alone: just a few weeks earlier, the scientist Ray Kurzweil shared a similar prediction. He believes we’ll soon have AI-connected brains that will enhance our intelligence and even allow us to live much longer lives.
It all sounds like a sci-fi movie, right? Full of comfort, automation, long healthy lives, and free time for humanity. Without debating the scientific plausibility of these predictions, I want to say that this future feels quite empty to me. I wonder: what happens to our sense of purpose, our role in society, or even our ability to shape our own future? But let’s go back to reality.
If we want a democratic digital space in 2035, we need to start building it now — with different priorities. We need to make sure that AI isn’t just intelligent, but wise — because it’s grounded in human values, public accountability, and shared purpose. We also need to make sure that technology doesn’t just serve efficiency or profit but serves people. That means all people — not just the powerful, not just the early adopters, and not just the ones who speak the loudest.
So, how do we get there? How do we create a digital world that supports democracy and helps people stay engaged? Here are my proposals:
First, we need to find a better balance between powerful technology and human values. Yes, AI will be capable of doing more and more. But that doesn’t mean we should accept everything it brings without questioning it. We need to ask ourselves: What kind of world do we want to live in? Not just what is possible, but what is right, what is fair, and what is meaningful.
Second, technology must be designed with people at the center. That means thinking beyond the business model or the technical solution. It means understanding the social and ethical impact of the tools we create — and making sure they do not harm, exclude, or manipulate. Tech should support our freedom, and not take it away quietly in the background.
Third, regulation matters — and we need to trust that institutions like regulators and courts will do their job. But regulation alone is not enough. Rules don’t help if no one knows their rights, or if enforcement is too slow to keep up. So we need strong oversight, but also collaboration and greater transparency from tech developers and platforms, and, importantly, better public education, so that people understand their rights and can act when those rights are at risk.
Fourth — and this is essential — we need people to wake up. Many people still think technology is neutral or inevitable. It’s not. It reflects choices — and those choices have consequences. So, we must raise awareness. We need more digital literacy efforts about how AI systems work, how our data is used, and how tech shapes our opinions, behaviours, and freedoms.
Fifth, and most importantly, people need a real voice — not just as users, but as active participants. We must create digital spaces that connect back to the physical world — spaces that encourage real dialogue, civic action, and human contact. Because it’s those offline interactions, those shared experiences, that make us truly human — and that’s where empathy, critical thinking, and a sense of shared responsibility can grow.
Let’s design for that connection — not just safe and functional systems, but digital spaces that inspire and bring people together. Spaces that act as a bridge — between screens and streets, between individuals and communities, between ideas and action.
So yes, the future may bring more robots, assistants, self-driving cars, and hopefully healthier lives. But if we want it to also bring dignity, inclusion, and trust — we have to design for that, together. It won’t happen by accident. But it can happen — if we choose it.