
[OPINION] Why high tech needs a humanist approach

Chad Osorio

'Tech promises to be the great equalizer, and yet why does it continue perpetuating ingrained social inequalities?'

AI, Big Data, and other high tech are being adopted at a staggering pace. The global AI software market, for example, was valued at $62.35 billion as of Q4 2020, with an expected compound annual growth rate of 40.2% from 2021 to 2028. Few industries experience this much growth in so short a period.

One thing to note is that advanced software isn’t self-generating, or at least not yet: at present, humans are still needed to design, create, and implement these tools.

Many universities and higher learning institutions, in both developed and developing countries, are seeking to take advantage of this by offering related academic courses.

In the Philippines, for example, the College of Engineering of the top-rated University of the Philippines has taken this initiative, recently announcing its Master of Engineering in AI. The Asian Institute of Management has pioneered a Master’s in Data Science in the Philippines, with other top universities like Ateneo de Manila University, De La Salle University, and the University of Sto. Tomas following suit with their respective undergraduate and graduate degree and training programs.

Some people would argue that for Filipinos to be competitive in the international tech labor market, we must use these programs to develop our students’ technical capacities to the fullest. Proposals on how to do so vary, but they generally mean more hands-on training and less time in the university setting, with reduced required coursework in non-tech subjects. This usually translates to the removal of the social sciences and humanities from tech-oriented core curricula.

I respectfully disagree with these initiatives.

My perspective comes from my work with international organizations, including the Affiliated Network for Social Accountability in East Asia and the Pacific, and the Asian Development Bank, on initiatives to encourage adoption of current and emerging tech to improve local community development. At present, I work with ALPHA10X, which utilizes cutting-edge AI technology with the goal of improving strategic investment and innovation to boost efficiency in the global life sciences industry.

In all the tech-related projects I’ve worked on, my background in psychology, law, and economics has helped provide a holistic perspective and generate insights on how these technologies can make the most positive social impact on their target users. For NGOs, this translates to greater and more sustainable community development; for private businesses, it means greater commercial success.


The benefits of multidisciplinary collaboration are myriad. It leads to interesting insights and top-quality innovation that can overhaul entire industries. Books have been and can still be written about this, but that is not the goal of this article. While I could continue to extol the added value of the social sciences and humanities to high tech, here I would like to focus on outlining their essential value.

This is why I believe that, in the long run, maintaining a hyperfocus on technical skills while failing to give enough weight to the humanities and the social sciences could set the field of high tech back rather than spur its continued development. How so?

Rise of the Terminators

One of the most famous sci-fi franchises of the ’80s and ’90s is The Terminator. Killer robots in the shape of sexy people have embedded themselves deep in popular culture. Indeed, up until now, doomsday soothsayers, usually taking the form of Twitter moguls, warn of AIs going rogue. While I personally believe it will be some time before Siri or Alexa tries to murder us in our sleep, that isn’t to say that AI and other forms of high tech cannot be dangerous to who we are as a society. After all, great things are often double-edged swords, and the same is true here.

There are a number of challenges in AI and Big Data today; I can cite only a few of the many.

AI-powered social media algorithms have been implemented with little input from political science or social psychology, fueled only by commercial interest. This has led to highly polarized echo chambers, the propagation of fake news, and the rise of populist, highly ineffectual leaders who give little regard to the rule of law. The Philippines, among many other countries, is an excellent case study for this, especially its 2016 presidential election.

Baseless opinions and falsities masquerading as facts aren’t the only problematic inputs to AI systems, whose outputs can do lasting harm to personal lives and the social fabric. Numbers can be dangerous too, as statisticians have known for decades.

It is a basic concept in AI, for example, that the quality of the input affects the outcome. AI is not some sentient, malevolent being that actively seeks to do harm. Rather, it is simply a machine, albeit one that processes information exponentially faster than we do. But as they say, garbage in, garbage out. Data-gathering procedures, the resulting quality and integrity of the data, and the processes used to train AI, all human-managed, dictate the results of these algorithms and how they can be interpreted into policies.
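
To make the point concrete, here is a minimal sketch in Python, with entirely made-up data and scikit-learn used purely for illustration, of how a model trained on biased historical records simply reproduces that bias. Nothing below is a real system; the groups, incomes, and approvals are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B (hypothetical)
income = rng.normal(50, 10, n)              # the only legitimate signal
# Biased historical labels: group B was approved less often at the same income.
approved = (income - 8 * group + rng.normal(0, 5, n)) > 50

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Garbage in, garbage out: at identical income, the trained model gives
# group B a visibly lower predicted chance of approval.
print(model.predict_proba([[55, 0]])[0, 1])  # group A applicant
print(model.predict_proba([[55, 1]])[0, 1])  # group B applicant

Nothing in the algorithm is malicious; it simply learned the skew that humans baked into the records it was fed.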

If the concept of ethics is not ingrained in these processes through insights from social sciences and humanities, even the most well-meaning AI can result in lasting harm.

After all, it is necessary to remember that faster processing does not necessarily mean better, more effective results. In many instances, crunching data at massive scale and speed actually uncovers inherent biases, revealed through the meta-analysis of trends. In a number of fields, including health care and finance, the use of AI has laid bare deeply entrenched biases against marginalized groups, including racism and class discrimination.

The AI language system GPT-3, at its release the largest language model ever built, has been shown to be significantly Islamophobic, associating Muslims with violence and terrorism. In the US, algorithms have been found to be racially biased, resulting in skewed policy recommendations for public health. Similarly, in the finance sector, trained AI has consistently denied loans to families of African or Hispanic ancestry because of biased credit scores used as input data. Unless these problems are understood and fixed, implementing similar AI tools in the Philippines could result in similarly classist, discriminatory outcomes, however unintended.
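
One modest way to catch this kind of skew before deployment, borrowed from standard fairness audits, is simply to compare outcome rates across groups, often summarized as a "disparate impact" ratio; a value far below 1.0 flags a disparity worth investigating. A rough sketch follows, with hypothetical column names and figures:

import pandas as pd

loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   1,   0,   1,   0,   0,   0],
})
rates = loans.groupby("group")["approved"].mean()
print(rates)                      # approval rate per group
print(rates.min() / rates.max())  # disparate impact ratio; far below 1.0 is a red flag

A check like this does not fix the underlying bias, but it at least forces the question to be asked before an algorithm starts deciding who gets a loan.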

Tech promises to be the great equalizer, and yet why does it continue to perpetuate ingrained social inequalities? Could it be because the people designing these AI systems are often equally unaware of these discriminatory biases, and so fail to take them into account during design and implementation? This is something that needs to be remedied, through the applied social sciences, for the promise of tech as a solution to social ills to be truly realized.


Global terrorists are also starting to exploit high tech to carry out their plans, and cyberattacks must be countered with technical defenses. In this regard, arming AI and data scientists and engineers, who often have little background in international security studies, with that knowledge, including briefings on the social root causes of terrorist ideologies, makes them more formidable allies in the global fight against terrorism.

This works both ways. Even in the fight to uphold international security, respect for universal human rights is of primary importance. For example, while we may want to put surveillance cameras everywhere, with AI aiming to de-anonymize the identities of individuals who seek to do harm, should this be at the cost of the privacy of the majority of the population?

It’s not a question for me to answer, but again it comes down to the fine line between public safety and security on one side and basic human rights, dignity, and minimum expectations of privacy on the other. Without a background in the social sciences and humanities, it is easy for technologists to overstep boundaries. The road to hell, after all, is said to be paved with good intentions, which is why it is necessary to recognize the signposts along the way. This ensures that even the most well-meaning of our most intelligent scientists and engineers don’t get lost and end up doing more social harm in the name of good.

Another essential value that social sciences and humanities can teach to those working in the tech industry is the concept of building trust in high tech. IBM Managing Director Patama Chantaruck describes it thus:

“We need assurances that AI cannot be tampered with and that the system itself is secure. We need to be able to look inside AI systems, to understand the rationale behind the algorithmic outcome, and even ask it questions as to how it came to its decision.”

Trust in tech is important for its wider adoption. In the field of medicine, for example, facial expressions help increase human trust in medical robots. In turn, the increased use of AI and other high tech in health care can reduce human errors, leading to better patient outcomes. We need to be able to trust the systems in order for us to properly utilize them, and furthermore for people to follow policy recommendations generated by these systems. However, gaining trust in high tech means knowing the human mind and socialization processes inside and out, and this is what the humanities and the social sciences can offer.
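
For simple models, "looking inside" in the way Chantaruck describes can be as basic as reading the fitted weights. Here is a minimal sketch, using scikit-learn and one of its bundled datasets purely for illustration; real medical or financial systems would need far richer explanation and audit tools than this.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Train a simple, transparent model on a public diagnostic dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Which inputs drive the prediction, and in which direction?
weights = sorted(zip(X.columns, model[-1].coef_[0]), key=lambda kv: abs(kv[1]), reverse=True)
for name, w in weights[:5]:
    print(f"{name}: {w:+.2f}")    # the five most influential features

Being able to answer "why did the system decide this?" in plain terms is a large part of what earns the trust the article describes.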

Effective international security, equal and non-discriminatory treatment, and increasing trust in technology: these are just a few of the many challenges facing high tech that can benefit from what the social sciences and humanities have to teach. The list is by no means exhaustive, just a sampling of how essential these disciplines are to each other.

Can all of these disciplines, working together, prevent the rise of our future robot overlords? I can’t guarantee this, but I have my fingers crossed.

An ounce of prevention

My favorite Latin saying is “Corruptio optimi pessima”: nothing is more evil than the corruption of the best. When we let tech-oriented capitalists who are not keenly aware of social issues pioneer tech solutions, fueled only by greedy commercial interest, dire situations can follow: widespread misinformation, the exacerbation of social inequalities, environmental harm, political unrest, even genocide.

One thing I’ve learned working at the intersections of these various fields is that the spring cannot rise higher than the source: in other words, AI is only as intelligent as its creators and the data it ingests. Although it can process information at an astounding pace, it is limited by the parameters set by the developers and engineers who created it.

It is therefore clear from the examples above that we need to arm our AI and data scientists and engineers with core concepts from the social sciences and humanities, like ethics, morality, and equality. Understanding society with a sturdy grasp of these concepts can help address these challenges from the get-go.

Intelligence has little to do with this, artificial or otherwise. It’s all about human and social values which are rarely the primary focus of technical courses. This is why, especially in the development of powerful tech which has so much far-reaching potential, it is important to highlight viewpoints from the humanities and the social sciences.

Keeping this idea in mind, what are concrete steps to do so?

First, we need to ensure that, starting from the professional cradle, our future engineers, scientists, and developers grow holistically. This means giving them access to social sciences and humanities subjects at the earliest possible time so that they understand the human condition. Doing so creates a strong foundation from which they can spearhead properly oriented tech solutions to ameliorate existing social conditions.

I was humbled to have been invited to the Global Drucker Forum 2021, held in Vienna, Austria, which featured a number of conference panels devoted to AI and is the source of my second recommendation. During the discussions, it became crystal clear that the tech development process must always incorporate an ethical viewpoint, especially for non-consumer-facing tech. Keeping ethics in mind during both creation and implementation can provide an understanding of inherent risks, reduce social harms, balance interests, and increase trust in tech. However, I believe a course or two on ethics is not sufficient; for deeper understanding and application, continuous social sciences and humanities education, even for the most well-informed tech professional, is ideal.

Lastly, the greater tech industry should strive to include perspectives from non-tech and business fields in the development of its models. Now more than ever, innovation hinges on collaboration. Working with people from different disciplines provides a deeper understanding of the problems that a specific high tech seeks to solve, as well as the risks and benefits of proposed solutions.

In sum, for high tech, the value of perspectives from the social sciences and humanities is not merely additive: it is essential. Moving forward, we must keep this in mind: for high tech to be truly valuable, professionals from all fields, whether from the sciences and engineering or the social sciences and humanities, must work together to optimize their impact, spur innovation, and sustainably improve lives.

Doing any less is a disservice not only to the interests of high tech, but to the people it seeks to serve. – Rappler.com

Chad Patrick Osorio is Senior Lecturer for Economics at the University of the Philippines Los Baños. He is External Consultant and former Head of Research for ALPHA10X, and current Chief Legal Officer of Sociov, a data-driven coaching and mentoring platform. Special thanks to Jeanella Klarys Pascual of Perfect Memory and Jessa Osorio of Xendit for insights. Send comments and queries to https://chadvice.co/
