Getting to the root of the problem: The myth of “AI for good”

 

Each year the Skoll Centre invites a small number of Oxford students to the annual Skoll World Forum on Social Entrepreneurship. Each year they share their unique perspectives of the sessions and events that unfold during this magical time in Oxford.

“We have to move beyond talking about AI for good and AI ethics. We simply cannot build just, equal, and fair automated systems on top of corrupt toxic sludge.” - Tanya O’Carroll

Tanya O’Carroll’s mic-drop statement at the end of her talk drew a raucous round of applause at the Artificial Intelligence (AI) and Human Rights panel at the 2019 Skoll World Forum. She had just named the elephant in the room: the business models that extract personal data at any cost, and that have ushered in an age of surveillance capitalism, must themselves be challenged.

The AI and Human Rights panel at the 2019 Skoll World Forum surfaced some incredibly pertinent insights at the intersection of technology, society, and business. First, panellists questioned the framing of “AI for good” and pleaded for nuance: discussions of AI ethics should build on what has already been done in human rights. Second, they challenged the capitalist structures that created the need for “AI for good” in the first place. Lastly, in keeping with the optimistic nature of the Skoll World Forum, they offered examples of the power of collective genius in addressing human rights challenges in the digital age.

The problem with “AI for good” and “AI ethics”

The current discourse around “AI for good” and “AI ethics” stems from an understanding that, left unchecked, new technologies can wreak havoc on society. However, the general consensus on the panel was that AI is not that special: much like any other tool or technology, it can be used for good or for ill, and it has both intended and unintended consequences. Furthermore, many corporate ethical codes for AI try to reinvent the wheel without looking at existing human-rights-based codes of conduct. Dunstan Allison-Hope of Business for Social Responsibility argued that “human rights based methodologies offer a robust framework for the responsible development and use of AI, and should form an essential part of business policy and practice”. He went on to say that the current conversation around ethics and human rights in technology includes only tech companies, and that we need members of other industries weighing in on human rights in the digital age, especially as AI and other technologies become a dominant force across sectors and geographies.

But why do we need “AI for good” or AI ethics in the first place? Promoting “AI for good” startups and regulating AI ethics do not necessarily answer some of the most pressing questions raised by the rise of tech’s “frightful five”: Who is collecting our data? Where is this data going? What does consent look like? We need to look at the root of the problem: the big-tech business model.

Dissecting the big-tech business model

Shoshana Zuboff, an academic at Harvard, coined the phrase “surveillance capitalism” to describe the power and information asymmetries of a new economic order, one in which those who hold our data are far more powerful than we are. Companies such as Facebook and Google command what she calls a “behavioral surplus” of digital data, which allows them to act as monopolists and market leaders in the trade of behavioral data. Everything we do online is tracked and can be monetized. This monetization is especially profitable in the aggregate; indeed, Zuboff’s core argument is that the data is far more valuable to the aggregator than to the individual.

The data extracted from us as we use tech products holds such disproportionate value for corporations that immense inequalities in power have emerged. Tanya O’Carroll of Amnesty International lamented at the Skoll World Forum that the way data is harvested and exploited is one of the biggest existential threats facing society today. We need ethical codes for AI, and organizations working on AI for good, precisely because today’s tech business models prey on the raw material of our digital personhood. We need to challenge the system of data extraction that exists today.

What can we do? The power of collective genius

Systems change does not come from a single actor; it comes from collaboration among many. Elizabeth Hausler of Build Change illustrated this need for collaboration in describing her work addressing the infrastructural and architectural failures left by natural disasters. Her organization uses AI to assess buildings quickly and rapidly produce engineering designs that builders and engineers can then implement with homeowner input. She also noted that AI alone is not the solution; we still need government officials to make the tough decisions about allocating resources to the right problems. Similarly, Babusi Nyoni, an AI evangelist from Zimbabwe, observed that without proximity to the perceived beneficiaries of an innovation, many technology projects fail. Communities and context can help determine which data is, and is not, useful.

Communities, governments, and businesses must bring together what Megan Smith, founder of Shift7 and moderator of the panel, calls their collective genius to challenge the existing power structures of the tech industry. Whether it is breaking up monopolies, pushing for adherence to human rights conventions, reforming corporate taxes, or accelerating positive community-led innovations, we must stop working in silos to challenge the status quo. Smith ended the panel by quoting William Gibson: “the future is already here, it’s just not evenly distributed”. Collective genius (and action) can help change that distribution.

Tulsi Parida is a Pershing Square Scholar at the University of Oxford, where she most recently completed an MSc at the Oxford Internet Institute, studying the implications of mobile learning technologies in emerging markets through a gender and political-economy lens. She is currently pursuing an MBA at Saïd Business School, focusing on responsible business and impact finance/investing. Previously, she led teams at start-ups in the US and India working to reduce digital divides in literacy. Tulsi is committed to reducing digital inequality and promoting responsible, inclusive tech.

Follow Tulsi on Medium