Before I came to the
MBA, I was a service designer and user researcher in public sector innovation
labs at the White House, World Bank, and later the City of Austin, working on
making public services more user-centered and user-friendly. I came to the MBA at
Oxford to complement my design skills with the business and operations skills I
needed to take my prototypes to scale.
When I got to the Saïd Business School, I joined the small but mighty group of “social impact” MBAs looking to take our work to the next level with business skills. Over my year as a student, and now as an intern at the Skoll Centre working on impact talent development, I’ve learned a few lessons that I think apply to any business programme hoping to support their social impact students:
1. Understand your “impact-MBA” personas, and design programming accordingly
There are a lot of us who get lumped together as social-impact-focused MBAs: the impact-curious, who are looking to enter the field; those who are committed to impact but looking to move into a new sector; and those who want to accelerate their careers in their sectors. The same support doesn’t work for everyone.
A lot of MBA
programming around impact assumes that we need to be convinced that social
challenges are worth caring about and working on. That is effective for the
impact-curious, and converts some who are on the edge, but it doesn’t cater to
the impact veterans who are already convinced.
That’s why one of the key, but more difficult, choices we’ve had to make at the Skoll Centre this year is to focus the Impact Lab on those who have deep impact experience. This might feel exclusive, but in fact it ensures that different impact MBAs, with different needs and expectations, get what they want and need out of the programming. The more we realize there isn’t a single, one-size-fits-all “impact MBA”, and design programming that serves more niche needs, the more meaningful students’ engagement will be.
2. Look beyond impact investing and impact consulting
Just as there are
different impact MBA personas with different levels of interest and experience,
there are personas that bring different skills, experiences and interests
within impact. There are former policy makers hoping to bring back a business
perspective, software engineers looking to lend their skills to a social tech
firm, marketers wanting to help rebuild trust in business… the list goes on.
Yet the MBA impact career pathways tend to mirror those of the broader
MBA, overemphasizing finance and consulting.
It’s important that
all MBAs, especially the broadly impact curious who might be convinced to
pursue a career in impact, see that they have a place and role in the impact
ecosystem. Impact investing and consulting are each fulfilling and prestigious
careers, but they certainly aren’t for everyone, and don’t begin to cover the
breadth of options and needs in the impact space.
Keeping this in mind, this year the Skoll Centre is collaborating more closely with the careers team at Oxford Saïd to help MBA students navigate the many different pathways and careers in impact. We’re particularly focused on engaging our incredible network of impact MBA alumni, working in every sector from government and NGOs to tech companies, marketing agencies, banks, and consulting firms, to share their learnings and advice with the incoming MBA class of 2019-20.
3. Don’t let us lose sight of the humans
There’s a danger that when students are taught frameworks and tools for impact, they come out bordering on technocratic. It’s hard not to drink the Kool-Aid and believe that an impact consulting framework or a human-centered design sprint could help us fully understand and solve the problem, or that rigorous financial and impact analysis will certainly identify the social innovations that will scale. And when we’re sitting in our future jobs, sending fancy PowerPoints or building elaborate models, we risk reinforcing the belief that what’s happening really is this simple. The optimism of the MBA is great, but it’s important to keep us connected to the humans at the core of all the challenges we want to solve.
This is where the Skoll Centre is looking to deepen its ties to our community. Oxford has a tradition of walling itself off, and we’re working on breaking this down and connecting with the community. A great example was Map the System. When my teammate and I analysed the system that causes inequality in early childhood in Oxfordshire, we found, through our research with a local community organization, the Oxford Hub, that promising solutions never made it to implementation because impact reporting frameworks didn’t match the phase of the solutions. It’s easy in a classroom to be convinced that impact measurement is important (and it is!), but the nuance comes from interactions with the real world. Creating those opportunities leads to understanding (and employability!).
4. Help us be as rigorous about our personal impact as we would be about an organization’s
As a school focused on responsible business, Oxford Saïd’s theory of change is to produce business leaders who can create change from within even the most “traditional” companies. But these leaders can’t live up to this vision unless they’re critical of both their organizations’ and their own activities and intentions.
The same theory applies within impact, as MBAs join impact organizations like the United Nations, development banks, and corporate sustainability teams. These traditional players are the natural and prestigious next steps befitting an MBA, and it’s certainly possible to make significant change through these positions, but it’s important that we’re as rigorous about our impact and intentions going into these (or any other) organizations as we are about our actions as responsible leaders working for an investment bank or consulting firm.
That’s why in Impact Lab over the next year, we’re putting a finer point on developing a critical perspective on different themes within impact. With every activity, our objective is to help leaders to challenge their assumptions and ask themselves and others the critical questions necessary to ensure they are having the impact they promise. Importantly, we’re extending this perspective to being self-critical, so that students examine the biases and privilege they’re bringing into this work, and how they can overcome and utilize them to help create a fairer, more sustainable and prosperous world.
5. Mainstream impact
Finally, the most
important thing that a business school can do to create a fairer, more
sustainable and prosperous world is not only to support students who are interested
in impact, but also to mainstream impact within the broader business
curriculum. Oxford Saïd did a great job of this within the core accounting
course this year. Each week, our professor had a group of students research,
critically reflect and present on different themes within extra-financial
accounting and reporting. The school also organized a mandatory union debate
that examined the merits and limitations of mandated sustainability reporting.
It was exciting that ESG factors and sustainability were thoroughly
mainstreamed within our curriculum, and eye-opening (at least to me) to see how
most corporates are thinking about sustainability and impact. Students left the accounting course not only understanding the basics of accounting, but also understanding the current state of extra-financial reporting and how we might build on this progress in our careers.
It would be great to
see similar mainstreaming in other core courses like analytics, corporate
finance, and technology and operations. The school already offers elective
courses and co-curricular activities like the Skoll Centre’s Impact Lab for
those who are interested in impact, but it’s all too possible for students who
don’t come in with this interest to avoid any content and reflection around
impact at all during their MBA degree.
If we want to support every student to become responsible leaders and to pursue purposeful careers, a critical, human-focused, impact education is key for every MBA student.
Puja Balachander is an Oxford MBA 2018-19 student. She is also the co-founder of Devie, a trusted digital service platform that guides parents on their journey from pregnancy to parenthood, equipping them to become their child’s best first teacher.
Each year, the Skoll Centre invites a small number of Oxford students to the annual Skoll World Forum on Social Entrepreneurship, and each year they share their unique perspectives on the sessions and events that unfold during this magical time in Oxford.
“We have to move beyond talking about AI for good and AI ethics. We simply cannot build just, equal, and fair automated systems on top of corrupt toxic sludge.”
Tanya O’Carroll’s mic-drop statement at the end of her talk brought on a raucous round of applause at the Artificial Intelligence (AI) and Human Rights panel at the 2019 Skoll World Forum. She had just addressed the elephant in the room: the business models built on extracting personal data at any cost, which have ushered in an age of surveillance capitalism, need to be challenged.
The AI and Human Rights panel at the 2019 Skoll World Forum brought to light some incredibly pertinent insights at the intersection of technology, society, and business. First, panellists questioned “AI for good” and pleaded for nuance: discussions of AI ethics should draw on what has already been done in human rights. Second, they challenged the capitalist structures that created the need for “AI for good” in the first place. Lastly, in the optimistic spirit of the Skoll World Forum, there were examples of the power of collective genius in addressing challenges of human rights in the digital age.
The problem with “AI for good” and “AI ethics”
The current discourse around “AI for good” and “AI ethics” stems from an understanding that, left unchecked, new technologies can wreak havoc in society. However, the general consensus at the panel was that AI is not that special: much like any other tool or technology, it can be used for good or for bad, and it can have intended and unintended consequences. Furthermore, many corporate ethical codes for AI try to reinvent the wheel without looking into existing human-rights-based codes of conduct. Dunstan Allison-Hope of Business for Social Responsibility argued that “human rights based methodologies offer a robust framework for the responsible development and use of AI, and should form an essential part of business policy and practice”. He went on to say that the current conversation around ethics and human rights in technology only includes tech companies, and that we need members of other industries weighing in on human rights in the digital age, especially as AI and other technologies become a dominant force across sectors and geographies.
But why do we need AI for good or AI ethics in the first place? Promoting AI for good startups and regulating AI ethics don’t necessarily answer some of the most pressing questions that come with the rise of tech’s frightful five: Who is collecting our data? Where is this data going? What does consent look like? We need to look at the root of the problem: the big-tech business model.
Dissecting the big-tech business model
Shoshana Zuboff, an academic at Harvard, coined the phrase “surveillance capitalism” to explain the power and information asymmetries that have enabled a new economic order, one that makes those who hold our data far more powerful than us. Companies such as Facebook and Google have what she calls a “behavioral surplus” of digital data that allows them to be monopolists and market leaders in the trade of behavioral data. Everything we do online is tracked and can be monetized. The monetization of this data is especially profitable in the aggregate, and, as Zuboff’s core argument goes, this data is far more valuable to the aggregator than to the individual.
The data that is extracted from us as we use tech products has such disproportionate value to corporations that immense inequalities in power have emerged. Tanya O’Carroll of Amnesty International lamented at the Skoll World Forum that the way data is exploited and harvested is one of the biggest existential threats to society today. We need ethical codes for AI and organizations working on AI for good precisely because the tech business models of today prey on the raw material of our digital personhood. We need to challenge the system of data extraction that exists today.
What can we do? The power of collective genius
Systems change does not happen through an individual; it comes with the collaboration of a variety of different actors. Elizabeth Hausler of Build Change illustrated this need for collaboration through her work addressing infrastructure and architectural inadequacies resulting from natural disasters. Her organization uses AI to quickly assess buildings and rapidly come up with engineering designs that can then be implemented by builders and engineers with homeowner input. She also indicated that AI alone would not be the solution; we still need government officials to make the tough decisions to allocate resources to solve the right problems. Similarly, Babusi Nyoni, an AI evangelist from Zimbabwe, observed that without proximity to the perceived beneficiaries of an innovation, many technology projects fail. Communities and context can help determine which data is and isn’t useful.
Communities, governments, and businesses must bring together what Megan Smith, founder of Shift7 and moderator of the panel, calls their collective genius to challenge the existing power structures in the tech industry. Whether it is breaking up monopolies, pushing for adherence to human rights conventions, reforming corporate taxes, or accelerating positive community-led innovations, we must stop working in silos and challenge the status quo. Smith ended the panel by quoting William Gibson: “The future is already here; it’s just not evenly distributed.” Collective genius (and action) can help change that distribution.
About the author
Tulsi Parida is a Pershing Square scholar at the University of Oxford, where she most recently completed an MSc at the Oxford Internet Institute, studying the implications of mobile learning technologies in emerging markets through a gender and political economy lens. She is currently pursuing an MBA at Saïd Business School, where she is focused on responsible business and impact finance/investing. In previous years, she has led teams at start-ups in the US and India working to reduce digital divides in literacy. Tulsi is committed to reducing digital inequality and promoting responsible/inclusive tech.