Allow me to express my appreciation to Rappler and your partners for hosting the Manila leg of the Social Good Summit. I understand this is the first time since the pandemic that we are holding this summit, so we come to this gathering with a little more optimism, and hopefully with sharpened focus to better tackle the question at hand: how do we turn tech for good, and move from problem to solution?
Thinking on this theme, I was reminded of an idea first put forth in 1965 by the writer and futurist Alvin Toffler: the concept of Future Shock. This, he explained, is the “shattering stress and disorientation” experienced by individuals subjected to “too much change in too short a time.” Collaborating with his wife Heidi Toffler, he expounded on the idea in a book of the same name, warning us that the rapid acceleration of technological change would leave us uprooted and overwhelmed, impacting not only nations and industries but also our individual lives and how we interact with each other.
Even then, Toffler noted that the cycles of technology had already shortened exponentially: the time between a breakthrough, its application, and its diffusion in society had been cut radically – and we simply could not keep pace.
In 1970, Toffler warned that future shock “may well be the most important disease of tomorrow.” Half a century later, that warning still rings true. Even without treating future shock as a psychological malady, no one can dispute the evident disorientation and disconnection resulting from rapid advances in technology, and the difficulty individuals and institutions face in keeping pace with their consequences on society.
We certainly see that, for instance, in social media, and our capacity – or lack thereof – to deal with disinformation, the democratization of hate and violence, and the magnification of silos and echo chambers to a point that has splintered any sense of shared reality. Or the increasingly tech-driven globalization of the economy, and its immense impacts on inequality and the environment. Or the rapid but uneven acceleration of technologies that has only widened the divide between highly developed and poorer nations, with digital access and literacy lagging not just among but also within developing countries and fueling vicious cycles of marginalization and inequality.
Seismic shifts in AI
We are likely to see the same thing with the next frontier of technology – artificial intelligence. Already, seismic shifts are happening across industries and economies, and AI tools are only growing stronger and stronger. The late Stephen Hawking warned us that sufficiently advanced AI would be able to “take off on its own, and re-design itself at an ever-increasing rate.” His fear was not just that rapid advances in AI would leave us disoriented. It could make us obsolete: limited by biological evolution, we would be unable to compete, and would eventually be superseded. He minced no words, saying: “the development of full artificial intelligence could spell the end of the human race.”
But for Hawking, it could also be the most transformative event in the history of humanity – a technological revolution that could help us heal the planet after the damage of industrialization, eradicate disease and poverty, and transform every aspect of our lives. It could be, in his own words, “either the best, or the worst thing, ever to happen to humanity” – but that is something up to us. In the end he retained some optimism; he believed we can create AI for the good of the world – but we need to be “aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”
The spirit of Toffler’s prescription against future shock emphasizes much the same thing: that we need to be thinking deeply, strategically, of the future. This, I believe, is also the disposition we require in thinking about turning tech for good. Where do we begin?
For one, governments should take the initiative. Regulation is an important part of the solution, particularly to help ensure transparency and accountability. But given the nature of technological advancement, this is admittedly a difficult task requiring delicate balance between protection and innovation. It also means that the necessary frameworks will vary from technology to technology. There is no clear-cut set of rules that we can uniformly apply. But perhaps what is more important is what others have called “anticipatory governance,” which is that mindset of adaptability that looks far ahead and prepares for the future.
Impact on judiciary
In this area, we are seeing encouraging signs. Recently, the European Union’s Digital Services Act finally took effect, setting rules on content moderation, user privacy, and transparency to “clean up social media” and take on a wide range of problems on online platforms, including misogyny, the abuse and exploitation of children, consumer fraud, disinformation, and threats to democratic elections. The EU is also looking to finalize the AI Act. Last year, the United States released the Blueprint for an AI Bill of Rights centered on protecting the public from harm.
I know there have also been legislative proposals to regulate and develop AI here in the Philippines. As for the judiciary, you may have heard that we are currently undertaking a program of judicial reform through innovation under our Strategic Plan for Judicial Innovations 2022-2027 or the SPJI, a key part of which is studying the benefits as well as risks of AI applications for our justice system.
Technology has impacted the Philippine judiciary in more ways than one. It has not only transformed court processes and operations; to a certain degree, it has also altered judicial thinking. Expanded access to information has resulted in more comprehensive understanding and deeper analysis of cases.
AI has the potential to enhance the accuracy and efficiency of judicial decisions, while at the same time raising concerns about algorithmic bias.
Significantly, in the face of rapid technological advancements, judges may find it relevant and necessary to recalibrate traditional mindsets toward a broader analytical framework encompassing the complexities and intricacies of technology. It is here that a paradigm shift may occur in interpreting the scope and extent of private rights vis-à-vis the greater good and societal safety, in the context of advancing technology.
Moving forward, for us especially, we need our legislators and policymakers to take a proactive role – to adopt this disposition of anticipatory governance rather than be reactive – in setting up frameworks and guardrails for technology, particularly AI, to ensure that it will ultimately benefit our people and will respect, protect, and uphold their rights.
But we also cannot just leave it to government and big tech or the industry. We need to open and maintain broader channels of dialogue including not just them, but also innovators, advocates, experts, and practitioners across a wide range of disciplines, civil society, and even ordinary citizens – like through this summit. Tools for digital democracy can be very helpful in this regard.
As individuals, we also need critical engagement with technology. We cannot simply avoid it, or just halt innovation. But we also cannot embrace it heedlessly. Even as we use its tools – more and more of them becoming indispensable in how we live and work – we have to be mindful. We have to make space to think of things like ethical considerations, social impacts, environmental consequences, even its impacts on our own selves and relationships with others.
In all this, it is important that we do not lose sight of people. First in the sense that it’s not just technology influencing people. It is too easy to think of technology as an external force avalanching change and its consequences on us. But that would be an abdication of our responsibility. Humanity shapes the technology that reshapes humanity; this means we have agency. We are not helpless; we are not powerless; but we do have to act – and because it is difficult to do alone, we have to do it together. And second in the sense that all this must be in the service of humanity, and not just driven by the profit or the pursuit of innovation for innovation’s sake. There has to be a better way of doing things: one where we use new technologies to address longstanding problems like inequality and injustice, where, to borrow a phrase, the arc of innovation bends toward sustainable development and social justice.
How do we turn tech for good? The question sounds so much simpler than it really is, and I confess I cannot point to any absolute solution or a precise framework for regulation. But I suppose the point is less to arrive at a categorical answer; the point, I suppose, is that we ask the question. That we begin and sustain such conversations. That we keep doing it and doing it for the technologies that we have today, so we can better approach the technologies of tomorrow – not in fear, not in shock, but with a shared understanding that the future is still ours to forge; that it is our hands on the wheel; that whatever happens next is up to us.
This, I hope, is what our summit today provides. Emphasizing a leading role for governments, sustaining dialogue, critically engaging with technology, keeping people at the core of what we do – these are just starting points. I truly look forward to today’s discussions, and I do hope they inspire longer and deeper conversations and illuminate more concrete pathways to turning tech for good, to moving from problem to solution. – Rappler.com
Chief Justice Gesmundo delivered this keynote at the Social Good Summit organized by Rappler on September 16, 2023, at the Samsung Hall, SM Aura, Taguig.