Has technological innovation lost the plot? An interview with AI ethicist Dr. Shannon Vallor



Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy. Professor Vallor’s research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices.

She is a Fellow of the Alan Turing Institute, an advisor on the ethical use of data and AI to multiple Scottish and UK government bodies, and a steering member of the Stanford One Hundred Year Study on AI. She is a former Visiting Researcher and AI Ethicist at Google. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and the forthcoming Lessons from the AI Mirror: Rebuilding Our Humanity in an Age of Machine Thinking.

Danya Sherbini, Senior Editor at the Chicago Policy Review, spoke with Dr. Vallor about the current state of technological innovation. The interview has been edited for brevity and clarity.

Chicago Policy Review: Last year you published an article in the MIT Technology Review about the lack of technological innovation focused on helping humans flourish. Can you give your take on how we got here?

Dr. Shannon Vallor: We’ve seen a gradual slide over the last few decades towards a technology ecosystem that is built for purposes other than what the public interest would warrant. In the post-war period of the twentieth century, people were quite alarmed that innovation was driven primarily by state interests in militarization and surveillance. But for a very long time people still saw the broader thrust of science and technology as enhancing human well-being, agency, and capability. Technology was a tool allowing us to go further, live longer and fuller lives, amplify our knowledge and capabilities, and expand the benefits of science more widely around the globe and to more diverse communities.

The digital transition of culture to large internet platforms kicked off a shift in how we think about technology, who we think it’s for, and what we think it’s for. Most new technologies you see being promoted as the next wave of technological advancement involve some mode of data extraction that turns our data into a resource for somebody else. These technologies are becoming embedded in our own bodies, making it so that every human relationship, every human experience, is mediated by these extractive technologies.

That’s a very different vision than technologies that allow us to enjoy more freedom and more variety in the kinds of activities and kinds of lives that we could pursue. The early days of computing were very romantic, heady days when we imagined that this is what the digital transformation would bring us. It would bring us liberation, variety, choice, and autonomy. We didn’t imagine that it would actually begin to feel like a vise tightening around us, pushing us into a very narrow set of behavioral patterns. And for a lot of people, that’s what it has felt like over the last few decades.

CPR: In your article, you discuss the shift of technological innovation away from infrastructure like roads, power grids, and transit systems towards things like consumer apps and smart devices. Can you talk more about this shift? How has the definition of “technological innovation” changed?

SV: The impetus for writing that piece was to point out the things that have been neglected because they don’t fall into the business model of data extraction – like the way that we no longer think of physical infrastructure as something that’s exciting for tech-minded people to pour their energy into or devote their careers to.

We think of non-digital infrastructure as a burden for taxpayers, to be grudgingly and poorly maintained, not to be designed in ways that are sustainable and better than what we had a decade ago. We’re content if our power grids and rail systems basically limp along in much the same state they were in 30 years ago, when we should be 30 years ahead and enjoying, for example in the United States, utilities and public transportation that rival the best in Europe or Asia. We should be enjoying power grids that lead the way in efficiency and stability, water systems that deliver cleaner water, and sustainable infrastructure that is more secure and resistant to weather shocks and other challenges. But we’re lagging far behind on these fundamental aspects of life that we don’t even think of as technology anymore.

But of course, they are technology. They’re the most important technologies, the most vital technologies – the ones that we use to ensure we have food to eat and water to drink. And one of the really interesting things about the pandemic was the way it showed the fragility of our global supply chain and the lack of investment in resilience and big-picture infrastructure. So, we’re missing a lot, and my question is: if everyone seems to be able to see it, why can’t we change it?

CPR: Given these changes in how we view technological innovation, what can be done to incentivize companies to shift their focus towards public interest technologies? How can policy play a role in realigning these incentives?

SV: That’s a great question and obviously there are no easy answers. But as your question suggests, there’s an intersection here between the problem I’m pointing out in the technology ecosystem and the problems that many people have been pointing out in the broader economic order for a long time.

Our current economic order incentivizes people to seek short-term gains, even if that means long-term loss. So, what’s the incentive for corporate leaders to build a more sustainable and more beloved tech ecosystem for society and for human flourishing over the long term? Their incentives all run the other way, and we have, from a regulatory and policy standpoint, permitted the incentives to be structured in this way. So, I don’t pretend to be shocked by the patterns that we’re seeing when all the incentives that we’ve put into the system reward precisely these patterns.

Society doesn’t run on moral heroism. Moral heroism is important, but it’s important for individuals as a way of resisting broken systems and unjust systems. It’s not what makes good systems work. You don’t actually expect the system to reform itself simply because it’s unjust or poorly suited to human flourishing. People have to fix that. And the people who have to fix that are the people in power, and the people who put them in power.

We have to think of this not as a tech problem, but as a political problem. In academia, we talk a lot about techno-solutionism – the false idea that every social problem has a technical root, and a technical fix. This kind of attitude has permeated the policy space. We talk about the risks and harms of technology, and how policy can mitigate technology’s harms. But most of what we’re seeing are downstream effects of technology that arise from upstream systemic misalignment of political and economic incentives with social sustainability, social cohesion, and human well-being.

One of the examples of this is the way, in the tech policy space, we’ve been focused for so long on things like privacy and data protection. It’s not that these are unimportant; certainly, what’s happening in this extractive data ecosystem requires careful attention to privacy and data protection, and policy has a role to play. But privacy and data protection address a different problem than that of datafication and extractive technology. A company can comply with all policy directives for privacy design and data protection and still be incentivized to extract and monetize your data – it just has to do so a little bit more carefully in order to operate within the constraints that policy sets on how it collects, how it extracts, how it handles and transfers and monetizes your data. It doesn’t change the fact that that’s what it’s primarily incentivized to do.

So, we need policy that focuses on better incentives for technology development more broadly. What incentives can we create for technology companies to adopt business models other than ‘take the money and run,’ or ‘take the data and run,’ which today mean more or less the same thing?

CPR: So, do you consider some of the data protection regulations that have arisen in the last few years, such as the GDPR in the European Union and the CCPA in California, to fall into that category of policy?

SV: Because those policies are addressing downstream harms rather than the upstream causes, they end up playing an endless game of whack-a-mole. Because the landscape is always evolving and new forms of harm are popping up all over the place, policymakers have to chase them down. With every new technology that’s created, like we’re seeing with generative AI models right now, there’s a whole flood of new problems that policymakers are having to divert their attention toward. Next year there will be new problems. So, we’re basically exhausting all of the will and resources of the policy domain on playing this game of whack-a-mole. Instead, we should be asking why technology is constantly showing up in these kinds of broken and harmful forms and what we can do upstream to realign the entire ecosystem with beneficial outcomes, so that policymakers can address things that weren’t supposed to happen instead of things that were designed to happen.

CPR: In many ways, social media was a tidal wave that swept over the tech industry and society at large. Now, the new tech wave seems to be artificial intelligence, which is becoming more and more prevalent across industries. As someone who studies the ethics of AI, what do you think the risks are of this growing focus on AI? Do you think the risks outweigh the rewards, or vice versa?

SV: The problems with AI are not technology problems. They’re problems that arise because of the political and economic incentives shaping the development of the technology. AI is not one thing. There are so many different technologies floating around under that umbrella label. Trying to describe them all with one term, or evaluate them all in a single judgment, is going to be immediately misguided. So, we should be asking which kinds of AI technologies we want, where we want them, how we want them used, and how we want them governed. And then we can build the AI technologies to do that. What we don’t want to do is let AI technologies be shaped by all the same misaligned incentives that have driven the digital platform ecosystem, and then panic and realize we have to mop up the mess that has been created. And unfortunately, that’s what we’re on track to do. AI technologies are so diverse and can be designed and deployed in so many different ways that we could in theory make AI be just about anything that we want it to be.

But we aren’t being given that choice. I’ll just take educators as an example. As a university professor, I’m among many who are now being told that ChatGPT and other technologies are going to prevent us from being able to assess students the way we always have. So, we’re told we have to redesign our entire pedagogy in order to make it AI-proof.

As it happens, some of the ways we assess students in university aren’t ideal, and we might actually invent some better ways as a result of these pressures. But the fact of the matter is, it shouldn’t be the case that educators have to adapt to a technology that hits our sector like a hammer that we didn’t ask for, a tool we had no voice in shaping. If students and educators were coming together to decide how AI could facilitate learning for them, I suspect we could get some pretty great things. But instead, everyone has to adapt after the fact. Technologies are expected to come as they are, and then we’re expected to remold ourselves and our systems to accommodate the technology. That’s not ultimately the ethos that’s at the heart of what it is to be human and connected to technology. The only reason for technology to exist is to foster the well-being of the creatures on this planet. And if it’s not doing that, there’s no point to it.

So, let’s get real and look at the structural problems in the tech ecosystem that have led to this state of affairs. AI technologies are not the problem. If we address some of these structural and political problems of the incentives in the tech ecosystem, then, 100 years from now we may be living with forms of AI that are extremely welcome and are entirely compatible with sustainable human flourishing.

CPR: In addition to the whack-a-mole situation you described, one challenge with technology policy is that policymakers are often removed from the technical side of things and rely on subject matter experts. What do you think can be done to bridge this gap and make technology policy more effective?

SV: This is a real area of concern for me, and it’s something I’m working on now as part of my own research. We have a new program here in the UK, funded by the Research Council, called “Bridging Responsible AI Divides,” which a colleague at the University of Edinburgh and I are directing. One of its aims is to actually facilitate the transfer, exchange, and adoption – in both industry and policy settings – of knowledge that is germane to the responsible development of AI. How can we get this knowledge into policymakers’ hands in a form they can actually use, and what are the barriers to them adopting it? What are the barriers to companies adopting these forms of research, often coming from the arts and humanities and social sciences, that contextualize what AI is, and what it can be?

It isn’t just that policymakers need more technical expertise, which they do. Policymakers do need to understand the reality of what these tools are and are not, but they are also missing a lot of social, political, anthropological, and ethnographic understanding of the people and the contexts in which this technology will live. We need to bring this interdisciplinary knowledge into the hands of policymakers, and we need to have it translated so that it isn’t left in the arcane form that academics are incentivized to produce, a form that policymakers cannot use.

Many academics are cut off from understanding how policy works and how you actually change things through policy and regulation, so the knowledge flows are broken in both directions. We need policy sprints and other interventions that pair up researchers, technologists, policymakers, and other stakeholders in the tech ecosystem – particularly those whose voices are often left out, even though they might be the ones most directly impacted by these technologies.

CPR: Given everything we’ve discussed, are you still hopeful about the future? Do you think we can achieve some of these aims around realigning incentives and ensuring that technological innovation benefits the public interest?

SV: I am hopeful, but it’s a cautious and grounded hope. There are two kinds of hope. One, I think, is very healthy and necessary, and one is very dangerous. The kind that’s dangerous is that sort of hope that’s built on an empty faith in progress as destiny – the idea that we are destined to solve all our problems. As much as I would like to believe that’s true, I know that when things are okay, it’s because people have worked hard and made sacrifices and made choices to make them okay. So, it won’t be okay unless we do the work.

I am optimistic, though, in the sense that I absolutely know that we can do the work. And when I talk to students who are sometimes cynical and feeling very hopeless about the future, it helps to take them back through history and see how many times we actually have hit what has felt like a world-ending crisis, and how many times we’ve come together as a society to demand better and do better.

We can talk about the civil rights movement, which is still having to be fought every step of the way. But we did fight that battle, and we’re still fighting it, and I don’t think people will ever give up fighting for civil rights. I don’t think people will ever give up fighting for human rights, and I don’t think people will ever give up fighting for the future of the planet. I think the question is whether enough of us will put our backs into that work, especially those of us who are relatively comfortable where we are and don’t feel like it’s an existential question of our survival. It’s going to take all of us putting our backs into it and really pushing for change, because one thing we don’t have is a whole lot of time. The world is becoming more fragile, not less. We are at a critical moment where we have to act in a coordinated, concerted, and wise way now—not wait for the next generation to figure it out.
