

Will Artificial Intelligence Do More Harm Than Good?

ARGUING YES
Daron Acemoglu

ARGUING NO
Darrell West


Will artificial intelligence do more harm than good? Proponents say it will help us solve problems like climate change and world hunger while eliminating dangerous and mundane jobs. But critics warn that A.I.'s current trajectory is a dangerous one, likely to concentrate power, eliminate jobs, surveil consumers and voters alike, and threaten democracy. What's more, some say these harms are not science fiction but are already afflicting us, given the number of algorithms we engage with on a daily basis. So, could artificial intelligence be used to help solve some of the world's most pressing challenges and level the playing field, or will it present perils that far outweigh any good it might do?


Related Resources

  • The AI We Should Fear Is Already Here
  • Daron Acemoglu: Robotics, A.I., and the Future of Work
  • Why We Need to Think of AI as a Platform
  • The Promises and Risks of Artificial Intelligence: A Conversation with Daron Acemoglu
  • How Will Artificial Intelligence Shape Our Future? My Long-Read Q&A with Darrell M. West
  • Darrell M. West: The Future of Work: Robots, AI, and Automation
  • The Future of Work
  • How Artificial Intelligence Is Transforming the World

John Donvan

Hi everybody, I'm John Donvan, and welcome to Agree to Disagree from Intelligence Squared. And about that telephone in your pocket. Actually, let me use a more up-to-date term: the smartphone in your pocket. And yes, it is smart in the sense that it really does have a certain kind of functional intelligence going on behind that little screen. Artificial intelligence, AI. And thanks to AI, your phone is not some static lump of silicon. It is actually adapting to you all the time. It is studying you and learning from you, to the point where it's now advising you on routes and restaurants, reordering your photos by subject matter and whose faces are in them, and lining up news stories for you to read.

And in so many ways, I expect, you appreciate your phone and its artificial intelligence; they are helping you. But what about your relationship with all of that? Because, let's face it, it is a relationship between us and AI. Is it good and healthy and problem-solving? Or does it become worrisome that the AI has so much info on you and is, practically speaking, deciding things for you; that AI itself is moving so fast into so many parts of life, way past the phone in your pocket and into the infrastructure of everything; and that you don't really know who's behind it all and where it's all headed?

So yes, you're experiencing some of the benefits of AI right now already, but you may also be wondering: in the longer run, will artificial intelligence do more harm than good? That is exactly what we are asking in this episode, word for word. Later in the program, we're going to put that question to an AI engine and see what answer it comes up with. But first, we have two humans who are going to debate from opposing points of view on the question: Will artificial intelligence do more harm than good? We're joined by Daron Acemoglu, an economist at MIT, and Darrell West, vice president and director of Governance Studies at the Brookings Institution. Daron and Darrell, thanks so much for joining us on Agree to Disagree.

Thank you. Nice to be with you.

John Donvan

To get things started, we have this central question, and we just want to know who's the yes and who's the no. So, on the question once again, will artificial intelligence do more harm than good: Daron, are you a yes or are you a no?

Daron Acemoglu

I am a yes, and I think AI will do more harm than good on its current path, because it is being developed in a particular way that deepens social, economic, and political inequalities in society.

John Donvan

Thank you very much. And that tells us where you're going to be on this question, Darrell, but I want to do it as a formality. Darrell West, on the question, will artificial intelligence do more harm than good: are you a yes or are you a no?

Darrell West

I am the no, because I think AI can relieve humans of boring, dirty, and dangerous activities. There are going to be tremendous advantages if we get the policy right. That's really the crucial part here.

John Donvan

All right, thank you. So we know who's yes and who's no. And I want to go back to Daron. You took the yes side, which is yes, AI will do more harm. So take a couple of minutes and tell us why you're on that side.

Daron Acemoglu

Let me first set the scene by saying that AI is a bunch of new technologies, extensions of existing digital technologies in many different dimensions. And of course, any new technology arrives in the middle of existing conflicts, existing power imbalances between companies, governments, individuals, workers, citizens, users. If you looked at it from that perspective, you would start thinking about how it is going to influence those existing relationships, who is going to decide the future path of AI, and for whose benefit. So the thing that really worries me is that AI on its current path is in the hands of already powerful companies and governments. And forget about superintelligence and so on: artificial intelligence as construed right now is largely about collecting a lot of data and developing algorithms based on that data.

Now, as the familiar saying goes, knowledge is power; information is power. So who controls data is very much embedded in the existing economic and social power relations of society. And if you give more data to the already very powerful corporations, and to already very powerful governments such as the Chinese government, this is going to have an impact on how we divide the gains from new technologies and how the power dynamics in society change, both economically and socially. In fact, when you look at the details, AI is sometimes presented as reaching human-level intelligence and amazing capabilities.

Sure, perhaps scientists think that could happen in 200 years' time, but today it's really about three major activities. It's about advertising, online digital ads. It's about surveillance: you actually watch people, monitor people, learn more about their political views, what they're doing at work, what they're doing in their free time. And it's about automation. All three of these deepen existing inequalities in society, and unless you take other measures to counterbalance them, all three are going to have negative consequences for many people. You see that with data collection; you see that with surveillance and monitoring. Automation is the more complicated one, because it's a key economic activity that's been going on for 250 years at least. But I will argue, and we can come back to this later, that in its current form this type of automation is also not generating benefits for workers in general.

John Donvan

Okay. You've given us a lot to start with there, but before we start digging into the points you're making, I want to let Darrell West tell us why he says no to the idea that AI is doing more harm than good. So it's your turn now, Darrell.

Darrell West

Thank you. I guess I am more optimistic about the future of AI because all new technologies almost always create problems for society. When you go back and look at the printing press, television, automobiles, any new technology, there always are problems, either in terms of human safety or lack of understanding about the technology. And certainly we see a lot of that with AI today. But what always happens is that humans have the capability to control the technology through laws, policies, and regulations. That's been the history of technological innovation, and that will be the future of AI. Now, for the last several decades, we have had a libertarian stance on technology, in which we have outsourced all the major decisions to private companies.

And they have an enormous amount of data. They have misused the information. There have been problems of racial bias and a lack of fairness, and certainly inequality has emerged from that. But it doesn't have to be that way. It's actually not the fault of the technology; I would argue it's the fault of our public policy. I think with new laws, a new social contract, and more regulations, we can actually adapt AI to human needs and move AI toward the public good, in the same way that we did with past innovations. Cars were dangerous things, and we came up with safety belts, airbags, and other safety mechanisms, and we were able to reduce the fatality rate. We have problems with AI in terms of fairness, bias, and inequality, but we can alter our tax policies and our social policies to deal with some of those issues and move AI much more toward the public good.

John Donvan

So what I'm hearing from each of you has a certain amount of common ground: you're both saying there's AI doing things that could be harmful. Darrell, you're saying it won't be that harmful if we take measures. And Daron, you're saying it's already doing harm and it's going to continue to do harm unless we take measures. But what I think I hear being the difference between the two of you is a sense of where we are now in this journey. Darrell, you're pretty optimistic that things are moving in the right direction, or are likely to. And Daron, you sound much less optimistic about the moment we're in and the trend lines you see in how things are unfolding.

So, Daron, I want you to talk a little bit more about that, and about what you said in your opening statement, especially in terms of AI being an engine for inequality. Why not be more optimistic in the way that Darrell is? Yes, there are challenges, but we can address them; we can figure it out. Why not be more optimistic about that?

Daron Acemoglu

Well, I think you've nailed the key point of disagreement. There is a lot of agreement between what Darrell and I are saying, but there is one subtle difference, which is that I think Darrell, like many other economists and social scientists, believes that somehow there are automatic mechanisms for humanity to adjust its policies, its norms, its institutions, so that we get the better side of new technologies. Whereas my own read of history, my own read of the evidence, is that that sort of adjustment mechanism is much more haphazard. It often requires major political change; it sometimes fails, it sometimes succeeds, and you cannot bank on its success. In fact, I would go further and say that the moment we start banking on the success of these adjustment mechanisms, be they policies to redistribute or to change the governance of data or companies or automation, the moment we take those for granted, we make it much more likely that the current path of AI, which boosts inequality and is not actually bringing much in the way of productivity benefits, continues.

John Donvan

Darrell, your response?

Darrell West

I do not think there is an automatic mechanism that is going to correct the abuses of technology, but I do believe that humans have the capability to make different choices and enact different kinds of policies. And in fact, I think we are in the early stages of starting to do that. As I mentioned before, for the last several decades we've been libertarian, but there now is a tech-lash that has emerged: a backlash against technology, against the technology companies, and against the dominant market power of these companies. And we are starting to see new laws, new regulations, and new policies designed to deal with particular problems. So, for example, take the issue of privacy.

I think both Daron and I would agree that companies have too much data, that they can misuse that data, and that the threat to privacy is very real. But the state of California passed a law to impose a lot of new restrictions on technology companies. And California is such a big state that the tech companies can't really devise AI for California that's different from Utah or Nevada. So when California cracks down on privacy abuses, or the European Union cracks down on privacy abuses, which they are currently doing as well, essentially the tech companies have to come along and devise better AI that helps everyone.

And I think that illustrates the fact that at the state and local level, we're already seeing a lot of action in moving toward greater regulation. There's more regulation of the gig economy. More cities are starting to regulate Airbnb rentals because they think it's harmful to their local communities. So this tech-lash is starting to turn public opinion into public policy, and I think that's the source of my optimism.

John Donvan

I'm John Donvan; this is Intelligence Squared U.S. More when we return. Welcome back to Intelligence Squared U.S., I'm your host, John Donvan. Let's return to our discussion.

Daron, Darrell is also saying we've been down this road before with technologies. I'm thinking, obviously, of automobiles and electricity and railroads, maybe nuclear power, and that we sort of got it figured out; that's where Darrell's optimism is coming from. And I want to know if you think there's a difference of magnitude, or of kind, in the challenges you're talking about with AI versus, say, the adoption of electricity or the automobile.

Daron Acemoglu

Well, to make this less interesting, I agree with most of what Darrell said. I think his statement at the beginning of the last round is very much my opinion as well. The only place where there is some residual disagreement is in Darrell's account of how we have adjusted to past breakthrough technologies. I would not be as positive about how that adjustment took place. I would say it was much more painful, much more haphazard, and took much longer. Let me give some examples. Before the industrial age, we had the medieval agricultural economy. Contrary to what is sometimes implied by labels such as "dark ages," the Middle Ages had a lot of new technologies that improved agricultural productivity massively. But if you look at what actually happened, in most cases these technologies did not improve the living standards of peasants and farmers; they deepened inequality.

Sometimes they actually reduced the living standards of the majority of the population. For example, with much better mills you could improve many parts of the production process, but the mills were the monopoly of religious authorities or the lords, who then compelled people to pay exorbitant prices for using the mills and would not even allow them to use their own hand mills. This actually went on for several hundred years before you see a systematic change in who has property rights, who controls new technologies, who has rights. And then you come to the industrial age, and you see the same thing playing out.

The new technologies that spearheaded the British Industrial Revolution were mostly in the textile industry at first. Those led not just to much greater inequality but also to lower wages and much longer working days for most textile workers, and to much worse conditions in factories and cities. And this lasted more than a hundred years before things started improving. So there is nothing rapid about it, and if you look at the details, there's nothing automatic about it. That's what worries me.

John Donvan

Well, let's talk a little bit about that. You were talking about the impact of technologies on the livelihoods and quality of life of individuals hundreds of years ago. I want to talk now about the impact of artificial intelligence on the livelihood, quality of life, and workplace of the average American. So, Darrell, you said at the outset that AI relieves people of certain dreary jobs, which sounds great. How far does that go?

Darrell West

Well, technology can relieve people of mundane, dirty, and dangerous jobs. For example, when there's a nuclear accident, we used to have to send people in to monitor the radiation levels; now we routinely send in robots with cameras and sensors. They can collect the same data without endangering humans. So that's an example of how technology can improve human life and protect people in the process. But there are also economic advantages that people often ignore. For example, cell phones are becoming ubiquitous, and smartphones are becoming more common around the world as well, including in developing nations.

Mobile technology has enabled farmers in remote areas of Africa to expand their markets. In the old days, before this technology, people who fished or farmed in rural developing nations were basically limited to their very local market to sell their wares. Today, because of mobile technology, they have access to much more information outside their local city and community, and they can sell their wares to a broader group. So it's an example of how technology can better people's lives. I wouldn't argue it's going to solve the inequality problem, because that's deeper and more fundamental and there's a lot more going on there. But technology can be one of the tools that provides more information to people, expands their markets, and therefore creates greater economic opportunities.

John Donvan

So, Darrell, you just mentioned that you don't think technology is solely to blame for inequality, and I want you to talk a little bit about that. I know it's very difficult to put a percentage on it, so I'm not going to ask you to do that. But can you give us an overall sense: is AI, in the way that Daron is describing it, already an engine for inequality, compared to the other things that are causing inequality?

Darrell West

It is currently an engine for inequality, but I would argue a bigger source of inequality is our completely inegalitarian tax policies and our very poor social welfare policies. There are very large and very rich tech companies paying 5 to 10 percent corporate tax rates. That is ridiculous; it's completely unfair, and it contributes to the inequality. So if we want to deal with inequality, there are tax policy and social welfare policy changes that would mitigate that impact. If you look at the American experience, the period from the 1920s through the late '40s was a period of pretty high income inequality. But then, through public investments in infrastructure, tax policy changes, and social welfare policy changes, we developed Social Security, unemployment compensation, and other things, so that from roughly 1945 through the 1970s, America actually got more egalitarian.

But then over the last 40 years, basically since the 1970s, our tax policies moved in a very inegalitarian direction, and so inequality expanded. But it's not necessarily the technology that caused that. It's the fact that we don't tax these companies; we don't tax their income or their wealth; we don't regulate them; and we don't have social welfare policies that help people adjust to automation. So inequality does not have to coexist with technology. If we pair technology with the proper tax and social welfare policies, we can actually bring down inequality.

John Donvan

Well, let me bring that back to Daron then. What Darrell is saying is that your argument might be overstating the impact of AI in the overall equation of what's causing inequality.

Daron Acemoglu

Well, those are complex questions, and there is much I agree with in what Darrell said. I think our tax policies are horrendous. There are institutional reasons for inequality; in the US, the social safety net is very weak and inefficiently organized. Completely agreed. But if you look at the data, at least the way I have, and of course there are different choices you can make in approaching the data, my own research, as well as research by others, indicates that automation has been a major source of inequality and stagnant wage growth over the last four decades.

And in some sense, what automation has done is not so surprising. This is all pre-AI, by the way; the question is how AI is continuing this trend, and we can talk about that. What automation has done is enable companies to use equipment, industrial robots, for tasks previously performed by blue-collar workers in factories, and software and digital tools for tasks previously performed by office workers. And if you look at where inequality comes from, it's driven very largely by the falling incomes of the workers, the demographic groups, that used to specialize in the tasks that have been automated over the last several decades. And in fact, that's exactly what the historical record also shows.

In other episodes, you also see that automation tends to increase inequality and depress wages. This does not invalidate the point that Darrell made. He gave a brilliant example, like the work by the economist Rob Jensen on how mobile phones have helped fishermen in Kerala, in India. But when you look at that example, the remarkable thing is exactly that the use of mobile phones to help fishermen is not automation. You are augmenting what fishermen used to do. You're providing them new tools to make better decisions and acquire better information, and you're actually creating new tasks for them.

John Donvan

But isn't that actually...

Daron Acemoglu

That's the key issue. If I can just say one more sentence.

I think that's the key issue when it comes to the labor market implications of any new technology: how much of it is just automation, and how much of it is creating new capabilities for workers? And I think the current path of AI, building on how we have used digital technologies, is going too much in the automation direction.

John Donvan

Why do you think that? Talk about a couple of industries where you think that's happening.

Daron Acemoglu

Well, the best example is manufacturing. If you look at...

John Donvan

Well, let me ask you: I think manufacturing is also the easiest example. But what about other kinds of professions, where people feel they might be sitting comfortably right now? Are you saying that they're also at risk of losing their livelihoods? Not having their work enhanced by a souped-up tool, but having their work replaced by an artificial intelligence that can do everything they were doing for a living, lock, stock, and barrel?

Daron Acemoglu

Yeah, a lot of that has already happened, even before AI. If you look at where automation has been: there's rapid automation on factory floors, but even as far back as the 1970s and 1980s, only a small fraction of the US workforce was in blue-collar jobs. A lot more of the impact of automation has been in office jobs, clerical jobs. And if you look at the 1940s, '50s, and '60s, clerical jobs and blue-collar manufacturing jobs were the engines of wage growth, because there were many organizational and technological choices expanding the functions those workers performed, creating new tasks and new capabilities for them.

Now, fast-forward to the post-1980 era, and you see that a lot of the use of digital technologies is focused on eliminating these workers, or at least eliminating a subset of the tasks they perform. And that's the crux of the matter: whether the way we use technology creates inequality or actually helps workers.

John Donvan

All right, let me take that to Darrell. And Darrell, I'm sure you're going to tell us it doesn't need to be that way, but I'm also sure you'll concede, to some degree, that it has been that way. So I'd like to hear how you process your response to that.

Darrell West

Daron is completely correct that there are going to be lost jobs, and there have been lost jobs, due to technology and automation. We can clearly see that in retail, in office jobs, in finance, in manufacturing, and in other areas. But a few years ago, I wrote a book entitled The Future of Work, and I argued, and John, you're exactly right, that it doesn't have to be that way. I made the argument that in this era of digital transformation, we need to invest in worker retraining and adult education.

We need to embrace lifelong learning. The old education model, where we invest in education up through about age 25 and then people are on their own, has to give way to new models where people re-skill at ages 30, 40, 50, and 60, virtually throughout their entire lives. So again, it's not the technology that is creating the problem; it's our lack of good tax policies, social welfare policies, education policies, and so on. We need a new social contract. And if we do those things, I'm actually still very optimistic about the future of AI and technology in general.

John Donvan

So I mentioned at the beginning of the program that we went to an AI engine, built by a company called Anthropic, and we entered the very question that we're debating here: Will artificial intelligence do more harm than good? And it gave us an answer. (laughs) I'm going to read it. This is the answer, and I'm now quoting. I will not do the robot voice, but I'm so tempted.

"AI has the potential for both positive and negative impacts. AI can help solve many of the world's problems and improve lives with applications such as disease diagnosis, increased productivity, and automation of dangerous jobs. However, there are valid concerns around privacy, bias, and job displacement. There are also longer-term concerns about the existential risks associated with advanced AI, such as the control problem and the misuse of AI by malicious actors." And now it's reaching a conclusion; it's taking sides in this part: "Overall, AI will likely have a net positive impact, but in order to ensure this, we need to be proactive about addressing the risks and continue to develop AI responsibly." Well, it sounds like...
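For readers who want to try this themselves, here is a minimal sketch of how one might pose the same question to an Anthropic model today through the company's Python SDK. The episode does not say how its producers actually queried the engine, and the model name and token limit below are placeholder assumptions, not details from the show.

```python
# Illustrative sketch only: posing the debate question to an Anthropic model
# via the official Python SDK (pip install anthropic). The model name and
# max_tokens value are placeholder assumptions, not details from the episode.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model choice
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": "Will artificial intelligence do more harm than good?",
        }
    ],
)

print(response.content[0].text)  # prints the model's answer
```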

Daron Acemoglu

Are you sure Darrell didn't write this?

John Donvan

Yeah, it sounds... (laughs)

Darrell West

That is exactly my view. I completely agree with that. (laughs)

John Donvan

(laughs) I'm just curious about your take that we were actually listening to an AI answer constructed in pretty darn good English that made sense. How does that hit you, Daron, that a machine made up of wires and lights and flashing silicon can do that? Somebody's going to make so much fun of how I just described (laughs) the circuitry...

Daron Acemoglu

No, I mean...

John Donvan

...but go for it.

Daron Acemoglu

Of course, many of the capabilities we are developing are quite impressive. The cameras on our phones are amazing. The remote-work communication technologies are just out of this world. You have much better imaging possibilities, much better ways of crunching data. I think all of those are very real, and that's the reason I agree with Darrell that we have the means to expand our capabilities. I also tend to think there is often quite a lot of hype about some of these technologies. If you ask these fancy machine learning language models questions that can be answered on the basis of things they have access to from the internet, they can give very sophisticated answers.

But if you probe and try to understand whether they actually comprehend what they are saying, even the concepts, I think the case is very clear that there's a huge lacuna of understanding; they are parroting things that they get from the web, or from their training data, and put together. So I'm not of the view that we are close to anything like human capabilities in most tasks. On the other hand, data crunching can get you very far. But the question again comes down to: what are you going to use that data crunching for?

John Donvan

Darrell, why is AI so good with data? What advantage does it have over the human brain?

Darrell West

Well, its main advantage is simply the tremendous processing power that it has, and as supercomputers get even better, that processing power is going to improve as well. But what impressed me about the AI engine's answer is that AI and machine learning are actually good at summarizing what other people have come up with, because it's basically just a more sophisticated form of the content analysis that we've had around for decades. It can look at the literature, look at books, look at articles, and pull out the most frequent types of arguments.

That's completely different from human intelligence; it's almost a rote type of thing. Future AI is going to develop more capabilities to learn, to adapt, and to make more informed decisions based on data. But the part of the answer that we do need to worry about is the data control part. That is a huge problem right now, basically because of the dominant market position of the large technology platform companies. And that's the part we need to get a handle on.
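As a toy illustration of the frequency-counting sense of "content analysis" that Darrell alludes to, the sketch below tallies which argument-bearing terms appear most often across a few documents. The sample texts and stop-word list are invented for the example; real systems are far more sophisticated.

```python
# A toy, frequency-based content analysis: count which argument-bearing
# terms appear most often across documents. The sample texts and the
# stop-word list are invented for this illustration.
from collections import Counter
import re

documents = [
    "AI will boost productivity and automate dangerous jobs.",
    "AI will concentrate power and threaten privacy and jobs.",
    "With the right policy, AI can improve safety and productivity.",
]

stop_words = {"ai", "will", "and", "the", "with", "can", "right"}

counts = Counter(
    word
    for doc in documents
    for word in re.findall(r"[a-z]+", doc.lower())
    if word not in stop_words
)

# The most frequent terms hint at the most common arguments.
print(counts.most_common(5))
```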

John Donvan

I'm John Donvan; this is Intelligence Squared U.S. More of our conversation when we return. Welcome back to Intelligence Squared U.S. Let's get back to our debate.

So where do you see us being in 20 years on all of these issues? Again, the question is whether we think AI will do more harm than good, so we're talking about the balance, and it's obvious now, from both of you, that there are benefits, real and soon to come, and also risks. Look at some of the potential benefits: AI can make people healthier; disease can be spotted sooner; and potentially, if the dream of self-driving cars comes true, AI can make the roads safer. Those are all good things.

On the other hand, what happens to the taxi drivers, and what happens to the radiologists, and potentially to some of the other members of the medical profession? On the whole, weighing up those sets of benefits and risks, Daron, where do you see us being in 20 years, on the trajectory that we're on right now?

Daron Acemoglu

Again, I don't want to sound like I think it's an open-and-shut case. I do believe, like Darrell, that we can change course, we can change direction, but I think a more systemic change is needed. Do I believe that such a systemic change is likely in the current US political environment? Unfortunately, I'm not super optimistic, although I still hope we can do better. I also think that, although there are technologies that are going to be very promising in terms of disease detection and a few other things like protein folding, some of the promises of AI are exaggerated.

Many of the simplest routine tasks were already automated using simpler digital tools. We are now entering the field of human judgment, and I don't think AI is going to make such rapid progress that in 20 years' time we're going to see amazing productivity improvements. In fact, one of the remarkable things about the data is that the age of digital technologies has been one of relatively slow productivity growth. I think it's going to continue that way unless there is a complete overhaul of how we use these digital tools, which, again, I hope is feasible, but I don't see as likely.

John Donvan

That's interesting, Darrell. What I hear Daron saying is that in addition to the tension over policy choices, over establishing guardrails to keep the harm from outweighing the good, there's also the suggestion that the actual wizardry of AI has been over-promised, and that the potential benefits are going to take a long, long time to come.

I know I was on panels in 2016 and 2017 where there were predictions that there would be a lot of self-driving cars all over the highways by 2020, and now it's two years after that, and that hasn't happened. So the question of whether the promise is over-promised is out there. Daron has just put it out there, and I think it challenges optimism to a degree.

Darrell West

There certainly has been a lot of hype; nobody can really deny that. On self-driving cars, there was a prediction that by 2020 they would be present at least in major cities, and that has not happened. If you take a 10-to-20-year time horizon, though, I do think self-driving cars will become much more prevalent. It'll probably start in the taxi business, the ride-sharing services, and the trucking sector before it becomes a mass-consumption type of thing, but it will then disseminate from those areas.

Now, consider what happens if self-driving cars do become more prevalent. Right now in America, there are about 44,000 highway fatalities every year, and 90 percent of them are accidents involving drunk driving or distracted driving. Human error is the big source of those 44,000 fatalities. Autonomous vehicles are going to have their own problems; privacy is going to be a huge one, because autonomous vehicles collect an enormous amount of data. But autonomous vehicles are not going to get drunk, and they're not going to get distracted. If we had a lot of autonomous vehicles on the road, we would not have 44,000 fatalities. It would not be zero either; it would be somewhere in between. But our highways would be much safer because of the technology.

John Donvan

All right, that's a pretty optimistic prediction for 20 years from now. I wanted to mention, when we were talking about taking the question of our debate to the artificial intelligence at Anthropic, that we actually did a debate like that ourselves in 2019. I moderated a debate, under the aegis of Intelligence Squared, between a human being and IBM's technology called Project Debater. We filled a hall, and hundreds of people watched a piece of technology put together, in about 15 minutes, an argument on a question about providing preschool for children, as I recall.

The human was also given 15 minutes to prepare his argument, and then the two of them debated for about 20 minutes, going back and forth, with the technology speaking in a voice I would identify as female, making very cogent arguments, even making some jokes that were programmed into it. In the end, the audience voted and decided that the human had been more persuasive, which may show a certain human bias, but it was not an overwhelming vote; the computer did pretty well. I'm bringing that up, first of all, to recommend it; it's worth going and listening to in our selection of podcasts and radio programs.

But I'm bringing it up also because we at Intelligence Squared think of debate as being central to democracy: to the functioning of democracy, to the expression of ideas, to the challenging of ideas, to free expression, period. And I know, Daron, that one of your big concerns about the negative impacts of AI, as we are already experiencing it, is that it is corrosive to democracy. Now that we've put democracy into the conversation, I want to ask you to take on that point of view and explain where you think AI is damaging the very fabric of our democracy, now and potentially.

Daron Acemoglu

Well, thank you, John. I was wondering when that was going to come up.

John Donvan

Here it is.

Daron Acemoglu

Yes, absolutely. I think we need to take a step back. There is, as Darrell said, always a danger that new technologies are going to disrupt existing methods of communication, discourse, and equilibria in social affairs, and you saw that with online communication in general before AI. But I think the most pernicious effects of AI on democracy are linked, again, consistent with Darrell's emphasis, to the business model of tech companies, which is based on digital advertisements: trying to monetize data and information about the people taking part in that communication. And it has led to an environment with many problematic aspects for democratic discourse.

Democratic discourse requires that people have trust in the quality of the information they receive; that they hear opposing voices; that they have incentives and means for reaching compromise; and that they have small-group interactions, which act as a school for self-governance and democratic participation. And I think the current business model, centered on digital advertisement, undermines all of these. It's not just the echo chambers; that's one aspect of it. It's not just extremism getting a lot of mileage. It's the whole erosion of civic communication and participation that comes together with this business model, and the tools amplifying this business model, that's the problem.

John Donvan

Where are those tools functioning? Give us just a few examples of where the algorithms are doing that.

Daron Acemoglu

You see it both on bilateral communication platforms such as Facebook, where you are directly sending messages to a group of followers, and on platforms such as YouTube, where you are broadcasting messages. In both cases, if you look at the business model, it's based on trying to maximize engagement. And the reason for that is very clear: if people are engaged, they're going to be more receptive to digital advertisement, which is the major, indeed overwhelming, source of revenue for these companies.

And in both cases, you see that this acts to undermine any type of reliable communication. It amplifies extremism and more sensationalist content. In the YouTube case, that's going to be videos with more sensational material, bigger claims, or more extremist content; in the Facebook case, it's related to the way the feed is presented, amplified, and boosted. And in both cases, the evidence is building up that this has come with a variety of anti-democratic consequences.
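To make the mechanism Daron describes concrete, here is a deliberately simplified sketch of engagement-maximizing feed ranking. The posts, fields, and scoring weights are invented for illustration; no real platform's ranking system is this simple.

```python
# A deliberately simplified sketch of engagement-maximizing feed ranking.
# The posts, fields, and weights are invented; no real platform's ranking
# is this simple.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model's guess at click probability
    predicted_comments: float  # comments often signal strong emotion
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # Weight the reactions that keep people on the platform longest;
    # sensational content tends to score high on exactly these signals.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_comments
            + 3.0 * post.predicted_shares)

feed = [
    Post("Local council posts meeting minutes", 0.02, 0.01, 0.00),
    Post("You won't BELIEVE what they're hiding!", 0.30, 0.20, 0.15),
]

# Ranking purely by predicted engagement pushes the sensational post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.text)
```

Ranking by a score like this is a business-model choice, not a technical necessity, which is Daron's point: the same infrastructure could rank for reliability or diversity of viewpoints instead.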

John Donvan

And are you saying, obviously, that these algorithms are trying to push our buttons? Is the AI figuring out even better ways to push our buttons, because it has access to all of this data, than any human would?

Daron Acemoglu

Absolutely, absolutely. Advertisements are as old as marketing and industrial production. But if you look at how the ad business worked before, it was a broadcast model. You weren't targeting ads to a specific individual, and you weren't using detailed information about how that individual's attention is going to be drawn, how his or her buttons are going to be pressed, how you can maximize that individual's enragement and emotional reactions so that they end up spending more time. None of this is superintelligence or anything like that; it's really simple algorithms. But with enough data about how individuals react to various types of cues, you can do it quite successfully.

Cambridge Analytica is just one little example of that. If you look at the algorithms involved in Cambridge Analytica, they're pretty mundane.

John Donvan

Darrell, what's your response to this? Again, I suspect you're going to say there's a lot of truth to it.

Darrell West

Well, I'm not going to defend the AI of social media companies, because clearly we have an extremely toxic information ecosystem. Daron is completely right about the way social media fuels polarization, extremism, radicalization, and sometimes even outright violence. So that clearly is problematic. I actually have a new Brookings book out, entitled Power Politics, that looks at threats to democracy. The threats I worry about beyond technology are voter suppression, rigged institutions, gerrymandering, and this archaic institution called the Electoral College, which has allowed Republicans to lose the presidential popular vote most of the time over the last 30 years, but win the Electoral College 40 percent of the time, and therefore dominate policymaking. I make that point to make the broader argument that we need to avoid tech determinism. There clearly are a lot of problems in our society, but they are not technology problems.

John Donvan

Can you define that term?

Darrell West

Yeah. What I mean by that is that there clearly are many problems in our society, but they're not necessarily based on the technology. Inequality, I think, is based on tax policy and social welfare policy. As for the threats to democracy, although technology is making the social media atmosphere much more toxic, the more fundamental threats are state legislatures passing rules that make it more difficult to vote, political institutions rigged against majority public opinion, and gerrymandering skewing the results of House elections and state legislative elections. The problem with tech determinism is that it makes people feel powerless, because if we're blaming the technology, the lesson people draw is that the technology is so powerful we can't do anything about it. And in all of my writing, I try to make the argument that humans are not powerless. We are actually powerful; we can control the technology.

Daron Acemoglu

If I could add just one thing: the argument I try to articulate is not tech determinism. I emphasize specifically that this is a function of the business models that are prevailing. So, absolutely, I don't think there is anything deterministic about tech. But there is one aspect I would like to reiterate. Unless you change the business model, which is very, very difficult, and is again part of that systemic change I loosely referred to before, you are not going to be able to avoid those lopsided effects of data concentration, data control, and the monetization of data that takes place online on social media and other platforms. So it's not tech determinism, but it is a set of pernicious effects from the current use of tech, based on a specific set of business practices.

John Donvan

Darrell, you mentioned a few minutes ago that we are powerful and we can control this technology, which throws the door open to the conversation that comes out of science fiction, one that I'm guessing neither of you (laughs) thinks we should take particularly seriously, but I've got to put it out there: that whole notion of the cyborgs taking over; Elon Musk warning that the day will come when artificial intelligence achieves consciousness, and then egocentrism, and then ambition, and then greed, and wants us out of the way. Is that just crazy, fun, scary talk? Or is there something real there? I'd like to ask each of you that question, but I'll go to you first, Darrell.

Darrell West

Elon Musk is just completely wrong on this, as he is in many other areas. I am not worried at all about cyborgs, superintelligent robots, or AI eventually enslaving humanity. When you look at AI today, it's actually not that good. Most AI is what I would call single-purpose AI, meaning it can be really good at doing one thing, like tabulating data or analyzing the content of the scientific literature. There is not yet general-purpose AI; it's hard to have an algorithm that is good at 10 different things, or 100 different things, the way human beings are.

So as for the fear that there are going to be superintelligent bodies that are good at a bunch of different things: maybe in 500 years we'll need to worry about that. But today what I worry about is fairness, racial bias, lack of transparency, impact on human safety, inequality, and privacy. I do not worry at all about cyborgs or superintelligent robots taking over.

John Donvan

(laughs) And what about you, Daron?

Daron Acemoglu

Well, 100 percent, 200 percent, 300 percent agreed with Darrell. Darrell's completely right. There is no feasible path along the current trajectory to anything even remotely resembling superintelligence. But I would amplify that point. I actually believe that all this talk of superintelligence is misleading us, making us let down our guard against the more relevant problems with AI. And I'm not saying this is a conspiracy; it's nobody's conspiracy, and I hate conspiracy theories. It's just something that has evolved in the context of the current tech industry and the current tech barons' way of thinking.

John Donvan

Well, I'm glad I asked that question. I was a little hesitant, because I thought you'd both laugh it out of the park, but you took it seriously and actually brought some insight to it, as you did throughout this whole conversation. It was very interesting to hear how much you agreed, but you had that essential disagreement on optimism: about the moment we're in and where it's leading us, as artificial intelligence becomes further integrated into our lives and into the infrastructure of everything we do.

But I really want to say that I appreciated how you listened to each other and showed respect for each other, because that's what we aim for at Intelligence Squared. So, Daron Acemoglu and Darrell West, thank you so much for joining us for this conversation.

Darrell West

Thank you very much.

Daron Acemoglu

Well, thank you, John. Thank you, Darrell. This was really fun, and I've learned a lot. Thank you.

Darrell West

Yep. Agreed.

John Donvan

And the conversation you've just heard perfectly captures why we do this. The way discourse is happening across the land these days is pretty broken, which is why it's so unusual, but also so refreshing, to hear two people who disagree actually converse rationally and civilly, and shed light, not just blow smoke. We know from so many of you that that's exactly why you tune in to us. And it's why I'd like to remind you that as you turn to us for all of that, we turn to you for support. We are a nonprofit, and it's contributions from listeners like you that keep us going. So please consider sending us a buck or two, or 10, or 50. Go to our website, iq2us.org; whatever works, whatever you can do. It'll give you a stake in what we're doing here each week, and it will mean that we'll be here next week and beyond. So thanks. Please consider it.

And I'm John Donvan. We'll see you next time. Thank you for tuning in to this episode of Intelligence Squared, made possible by a generous grant from the Laura and Gary Lauder Venture Philanthropy Fund. As a nonprofit, our work to combat extreme polarization through civil and respectful debate is generously funded by listeners like you, the Rosenkranz Foundation, and Friends of Intelligence Squared. Robert Rosenkranz is our chairman. Clea Conner is CEO. David Ariosto is head of editorial. Julia Melfi, Shay O'Mara, and Marlette Sandoval are our producers. Damon Whitmore is our radio producer. And I'm your host, John Donvan. We'll see you next time.

Related News

Is the Truth really out there? Debates. You remember them. When both sides presented facts to support their chosen position. While debate has disappeared from much of our daily…


RECOMMENDED

  • Will ChatGPT Do More Harm Than Good?
  • Agree to Disagree: Sex With Robots
  • Artificial Intelligence: The Risks Could Outweigh the Rewards
