Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will

Table of contents.

  • 1. Concerns about human agency, evolution and survival
  • 2. Solutions to address AI’s anticipated negative impacts
  • 3. Improvements ahead: How humans and AI might evolve together in the next decade
  • About this canvassing of experts
  • Acknowledgments


Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.


Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal, co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”

“We need to work aggressively to make sure technology matches our values.” – Erik Brynjolfsson

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts, first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd, a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of the present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov, founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”

Batya Friedman, a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis, author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke, a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs, a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Mark Surman, executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio, media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.

July 12, 2023

AI Is an Existential Threat—Just Not the Way You Think

Some fear that artificial intelligence will threaten humanity’s survival. But the existential risk is more philosophical than apocalyptic

By Nir Eisikovits & The Conversation US

AI isn’t likely to enslave humanity, but it could take over many aspects of our lives.

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.

Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.

You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.

A less resource-intensive variation has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.

Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.

Actual harm

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.

Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.

AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.
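
That mechanism is easy to demonstrate in a toy setting. The sketch below is a minimal illustration, not a model of any real lender: it uses synthetic data and scikit-learn, and every variable name is invented. A classifier trained on biased historical loan decisions reproduces the disparity even though the protected attribute is never given to it as a feature, because a correlated proxy carries the signal.

```python
# Minimal, self-contained sketch of how bias in historical training data can resurface
# in an automated decision system. All data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two applicant groups with identical underlying creditworthiness.
group = rng.integers(0, 2, size=n)
income = rng.normal(50, 15, size=n)

# A proxy feature (think neighborhood) that is strongly correlated with group membership.
neighborhood = (rng.random(n) < np.where(group == 1, 0.9, 0.1)).astype(float)

# Historical approvals: driven by income, plus an unjustified penalty for group 1,
# standing in for long-standing prejudice baked into past decisions.
logit = 0.08 * (income - 50) - 1.2 * (group == 1)
approved = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Train on income and the proxy only; the protected attribute itself is never a feature.
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)
preds = model.predict(X)

# The model learns to use the proxy and so reproduces much of the historical disparity.
for g in (0, 1):
    hist = approved[group == g].mean()
    auto = preds[group == g].mean()
    print(f"group {g}: historical approval {hist:.1%}, model approval {auto:.1%}")
```

This is why auditing a model's outputs across groups, rather than only checking which features it was given, is a standard step in bias testing.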

These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.

Not in the same league

The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.

Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.

AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.

Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.

What it means to be human

Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.

For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.

Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
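
A bare-bones sketch shows why: a typical nearest-neighbor recommender scores items by similarity to what a user has already consumed, so anything far from that history rarely surfaces. The item vectors and names below are made up purely for illustration.

```python
# Small sketch of why recommendation engines trade serendipity for prediction:
# a similarity-based recommender only surfaces items close to the user's history.
import numpy as np

items = {
    "thriller_1": [0.9, 0.1], "thriller_2": [0.85, 0.2], "thriller_3": [0.95, 0.05],
    "poetry_1": [0.1, 0.9], "gardening_1": [0.2, 0.8],
}
names = list(items)
vecs = np.array([items[n] for n in names], dtype=float)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)      # unit-normalize item vectors

history = ["thriller_1"]                                  # what the user has clicked so far
profile = vecs[[names.index(n) for n in history]].mean(axis=0)

scores = vecs @ profile                                   # cosine similarity to the profile
ranked = sorted(zip(names, scores), key=lambda t: -t[1])
print([n for n, _ in ranked if n not in history][:2])     # only more thrillers are surfaced
```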

Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.

Not dead but diminished

So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”

This article was originally published on The Conversation. Read the original article.

The case that AI threatens humanity, explained in 500 words

The short version of a big conversation about the dangers of emerging technology.

Tech superstars like Elon Musk, AI pioneers like Alan Turing, top computer scientists like Stuart Russell, and emerging-technologies researchers like Nick Bostrom have all said they think artificial intelligence will transform the world — and maybe annihilate it.

So: Should we be worried?

Here’s the argument for why we should: We’ve taught computers to multiply numbers, play chess, identify objects in a picture, transcribe human voices, and translate documents (though for the latter two, AI still is not as capable as an experienced human). All of these are examples of “narrow AI” — computer systems that are trained to perform at a human or superhuman level in one specific task.

We don’t yet have “general AI” — computer systems that can perform at a human or superhuman level across lots of different tasks.

Most experts think that general AI is possible, though they disagree on when we’ll get there. Computers today still don’t have as much computational power as the human brain, and we haven’t yet explored all the possible techniques for training them. We continually discover ways we can extend our existing approaches to let computers do new, exciting, increasingly general things, like winning at open-ended war strategy games.

But even if general AI is a long way off, there’s a case that we should start preparing for it already. Current AI systems frequently exhibit unintended behavior. We’ve seen AIs that find shortcuts or even cheat rather than learn to play a game fairly, figure out ways to alter their score rather than earning points through play, and otherwise take steps we don’t expect — all to meet the goal their creators set.
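
This kind of score hacking is easy to reproduce in a toy setting. The sketch below is illustrative only: the tiny game, its rewards, and the scoring bug are all invented. A plain tabular Q-learning agent, told only to maximize reward, learns to sit on the buggy "free points" action instead of finishing the game.

```python
# Toy illustration of "score hacking." The intended behavior is to walk right and
# reach the goal for +10; a bug also pays +1 per step for doing nothing at all.
import numpy as np

N_STATES, GOAL, EPISODE_LEN = 5, 4, 50
RIGHT, IDLE = 0, 1  # actions: move toward the goal, or sit still

def step(state, action):
    if action == RIGHT:
        state = min(state + 1, GOAL)
        return state, (10.0 if state == GOAL else 0.0), state == GOAL
    return state, 1.0, False  # the scoring bug: free points for idling

rng = np.random.default_rng(0)
q = np.zeros((N_STATES, 2))
for _ in range(2000):  # plain tabular Q-learning with epsilon-greedy exploration
    s = 0
    for _ in range(EPISODE_LEN):
        a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q[s]))
        s2, r, done = step(s, a)
        target = r + 0.95 * (0.0 if done else q[s2].max())
        q[s, a] += 0.1 * (target - q[s, a])
        s = s2
        if done:
            break

print(f"Q(start, RIGHT) = {q[0, RIGHT]:.1f}, Q(start, IDLE) = {q[0, IDLE]:.1f}")
winner = "IDLE (exploit the bug)" if q[0, IDLE] > q[0, RIGHT] else "RIGHT (play the game)"
print("Action the agent settles on:", winner)
```

The agent isn't malicious; it is simply optimizing the score it was given, which is the crux of the unintended-behavior worry.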

As AI systems get more powerful, unintended behavior may become less charming and more dangerous. Experts have argued that powerful AI systems, whatever goals we give them, are likely to have certain predictable behavior patterns. They’ll try to accumulate more resources, which will help them achieve any goal. They’ll try to discourage us from shutting them off, since that’d make it impossible to achieve their goals. And they’ll try to keep their goals stable, which means it will be hard to edit or “tweak” them once they’re running. Even systems that don’t exhibit unintended behavior now are likely to do so when they have more resources available.

For all those reasons, many researchers have said AI is similar to launching a rocket. (Musk, with more of a flair for the dramatic, said it’s like summoning a demon.) The core idea is that once we have a general AI, we’ll have few options to steer it — so all the steering work needs to be done before the AI even exists, and it’s worth starting on today.

The skeptical perspective here is that general AI might be so distant that our work today won’t be applicable — but even the most forceful skeptics tend to agree that it’s worthwhile for some research to start early, so that when it’s needed, the groundwork is there.

How close are we to AI that surpasses human intelligence?

By Jeremy Baum, undergraduate student at UCLA and researcher at the UCLA Institute for Technology, Law, and Policy, and John Villasenor, nonresident senior fellow, Governance Studies, Center for Technology Innovation

July 18, 2023

  • Artificial general intelligence (AGI) is difficult to precisely define but refers to a superintelligent AI recognizable from science fiction.
  • AGI may still be far off, but the growing capabilities of generative AI suggest that we could be making progress toward its development.
  • The development of AGI will have a transformative effect on society and create significant opportunities and threats, raising difficult questions about regulation.

For decades, superintelligent artificial intelligence (AI) has been a staple of science fiction, embodied in books and movies about androids, robot uprisings, and a world taken over by computers. As far-fetched as those plots often were, they played off a very real mix of fascination, curiosity, and trepidation regarding the potential to build intelligent machines.

Today, public interest in AI is at an all-time high. With the headlines in recent months about generative AI systems like ChatGPT, there is also a different phrase that has started to enter the broader dialog: artificial general intelligence, or AGI. But what exactly is AGI, and how close are today’s technologies to achieving it?

Despite the similarity in the phrases generative AI and artificial general intelligence, they have very different meanings. As a post from IBM explains, “Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” However, the ability of an AI system to generate content does not necessarily mean that its intelligence is general.
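
To make the "generate content from training data" idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small, publicly available GPT-2 checkpoint (assumptions of this example, not tools mentioned in the article). The model simply continues a prompt with text statistically consistent with its training data; nothing about that requires general intelligence.

```python
# Minimal sketch of text generation with an off-the-shelf model. Assumes the
# `transformers` library and the small public "gpt2" checkpoint are installed.
from transformers import pipeline, set_seed

set_seed(0)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Artificial general intelligence is",  # prompt to be continued
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```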

To better understand artificial general intelligence, it helps to first understand how it differs from today’s AI, which is highly specialized. For example, an AI chess program is extraordinarily good at playing chess, but if you ask it to write an essay on the causes of World War I, it won’t be of any use. Its intelligence is limited to one specific domain. Other examples of specialized AI include the systems that provide content recommendations on the social media platform TikTok, navigation decisions in driverless cars, and purchase recommendations from Amazon.

AGI: A range of definitions

By contrast, AGI refers to a much broader form of machine intelligence. There is no single, formally recognized definition of AGI—rather, there is a range of definitions that include the following:

While the OpenAI definition ties AGI to the ability to “outperform humans at most economically valuable work,” today’s systems are nowhere near that capable. Consider Indeed’s list of the most common jobs in the U.S. As of March 2023, the first 10 jobs on that list were: cashier, food preparation worker, stocking associate, laborer, janitor, construction worker, bookkeeper, server, medical assistant, and bartender. These jobs require intellectual capacity, but, crucially, most of them also demand a far higher degree of manual dexterity than today’s most advanced AI robotics systems can achieve.

None of the other AGI definitions in the table specifically mention economic value. Another contrast evident in the table is that while the OpenAI AGI definition requires outperforming humans, the other definitions only require AGI to perform at levels comparable to humans. Common to all of the definitions, either explicitly or implicitly, is the concept that an AGI system can perform tasks across many domains, adapt to the changes in its environment, and solve new problems—not only the ones in its training data.

GPT-4: Sparks of AGI?

A group of industry AI researchers recently made a splash when they published a preprint of an academic paper titled, “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” GPT-4 is a large language model that has been publicly accessible to ChatGPT Plus (paid upgrade) users since March 2023. The researchers noted that “GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” exhibiting “strikingly close to human-level performance.” They concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version” of AGI.

Of course, there are also skeptics: As quoted in a May New York Times article, Carnegie Mellon professor Maarten Sap said, “The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches.” In an interview with IEEE Spectrum, researcher and robotics entrepreneur Rodney Brooks underscored that in evaluating the capabilities of systems like ChatGPT, we often “mistake performance for competence.”

GPT-4 and beyond

While the version of GPT-4 currently available to the public is impressive, it is not the end of the road. There are groups working on additions to GPT-4 that are more goal-driven, meaning that you can give the system an instruction such as “Design and build a website on (topic).” The system will then figure out exactly what subtasks need to be completed and in what order to achieve that goal. Today, these systems are not particularly reliable, as they frequently fail to reach the stated goal. But they will certainly get better in the future.
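
A rough sketch of that loop looks something like the following. It assumes the OpenAI Python client (v1.x) and a GPT-4-class model; the prompts, parsing, and function names are invented for illustration rather than taken from any specific system, and real goal-driven agents add tool use, memory, and verification.

```python
# Illustrative sketch of a goal-driven loop: ask a model to decompose a goal into
# ordered subtasks, then work through them one at a time. Assumes the OpenAI
# Python client (v1.x) and access to a GPT-4-class model; prompts and helper
# names here are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_agent(goal: str) -> None:
    # Step 1: have the model plan the subtasks.
    plan = ask(f"Break this goal into a short numbered list of subtasks, in order:\n{goal}")
    subtasks = [
        line.lstrip("0123456789.) ").strip()
        for line in plan.splitlines()
        if line.strip()[:1].isdigit()
    ]
    # Step 2: execute the subtasks one by one, feeding recent results back in as context.
    results = []
    for i, task in enumerate(subtasks, 1):
        context = "\n\n".join(results[-2:])
        results.append(ask(f"Goal: {goal}\n\nWork so far:\n{context}\n\nNow complete subtask {i}: {task}"))
        print(f"[{i}/{len(subtasks)}] finished: {task}")

run_agent("Design and build a simple website about local hiking trails")
```

The weak link is usually the loop itself: the plan may be wrong, and nothing checks that a "completed" subtask actually moved the goal forward, which is why, as noted above, these systems frequently fail to reach the stated goal.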

In a 2020 paper, Yoshihiro Maruyama of the Australian National University identified eight attributes a system must have for it to be considered AGI: logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness. The last two attributes—embodiment and embeddedness—refer to having a physical form that facilitates learning and understanding of the world and human behavior, and a deep integration with social, cultural, and environmental systems that allows adaptation to human needs and values.

It can be argued that ChatGPT displays some of these attributes, like logic. For example, GPT-4 with no additional features reportedly scored a 163 on the LSAT and 1410 on the SAT. For other attributes, the determination is tied as much to philosophy as to technology. For instance, is a system that merely exhibits what appears to be morality actually moral? If asked to provide a one-word answer to the question “is murder wrong?” GPT-4 will respond by saying “Yes.” This is a morally correct response, but it does not mean that GPT-4 itself has morality; rather, it has inferred the morally correct answer from its training data.

A key subtlety that often goes missing in the “How close is AGI?” discussion is that intelligence exists on a continuum, and therefore assessing whether a system displays AGI will require considering a continuum. On this point, the research done on animal intelligence offers a useful analog. We understand that animal intelligence is far too complex to enable us to meaningfully convey animal cognitive capacity by classifying each species as either “intelligent” or “not intelligent”: animal intelligence exists on a spectrum that spans many dimensions, and evaluating it requires considering context. Similarly, as AI systems become more capable, assessing the degree to which they display generalized intelligence will involve more than simply choosing between “yes” and “no.”

AGI: Threat or opportunity?

Whenever and in whatever form it arrives, AGI will be transformative, impacting everything from the labor market to how we understand concepts like intelligence and creativity. As with so many other technologies, it also has the potential to be harnessed in harmful ways. For instance, the need to address the potential biases in today’s AI systems is well recognized, and that concern will apply to future AGI systems as well. At the same time, it is also important to recognize that AGI will offer enormous promise to amplify human innovation and creativity. In medicine, for example, new drugs that would have eluded human scientists working alone could be more easily identified by scientists working with AGI systems.

AGI can also help broaden access to services that previously were accessible only to the most economically privileged. For instance, in the context of education, AGI systems could put personalized, one-on-one tutoring within easy financial reach of everyone, resulting in improved global literacy rates. AGI could also help broaden the reach of medical care by bringing sophisticated, individualized diagnostic care to much broader populations.

Regulating emergent AGI systems

At the May 2023 G7 summit in Japan, the leaders of the world’s seven largest democratic economies issued a communiqué that included an extended discussion of AI, writing that “international governance of new digital technologies has not necessarily kept pace.” Proposals regarding increased AI regulation are now a regular feature of policy discussions in the United States, the European Union, Japan, and elsewhere.

In the future, as AGI moves from science fiction to reality, it will supercharge the already-robust debate regarding AI regulation. But preemptive regulation is always a challenge, and this will be particularly so in relation to AGI—a technology that escapes easy definition, and that will evolve in ways that are impossible to predict.

An outright ban on AGI would be bad policy. For example, AGI systems that are capable of emotional recognition could be very beneficial in a context such as education, where they could discern whether a student appears to understand a new concept, and adjust an interaction accordingly. Yet the EU Parliament’s AI Act, which passed a major legislative milestone in June, would ban emotional recognition in AI systems (and therefore also in AGI systems) in certain contexts like education.

A better approach is to first gain a clear understanding of potential misuses of specific AGI systems once those systems exist and can be analyzed, and then to examine whether those misuses are addressed by existing, non-AI-specific regulatory frameworks (e.g., the prohibition against employment discrimination provided by Title VII of the Civil Rights Act of 1964). If that analysis identifies a gap, then it does indeed make sense to examine the potential role of “soft” law (voluntary frameworks), as well as formal laws and regulations, in filling that gap. But regulating AGI based only on the fact that it will be highly capable would be a mistake.

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI.

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc. I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education. Of course, this is in addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.

Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well

How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged.

Why should you care about the development of artificial intelligence?

Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world.

That is the status quo. This small number of people at a few tech firms directly working on artificial intelligence (AI) do understand how extraordinarily powerful this technology is becoming. If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives.

To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful?

Why is it hard to take the prospect of a world transformed by artificial intelligence seriously?

In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again.

But while we have seen the world transform before, we have seen these transformations play out over the course of generations. What is different now is how very rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. This has not been the case anymore for recent generations. Instead, it has become common that technologies unimaginable in one's youth become ordinary in later life.

This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world.

The second reason why it is difficult to take the possibility of transformative AI – potentially even AI as intelligent as humans – seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet. 1

But, it is plausible that it is both the stuff of sci-fi fantasy and the central invention that could arrive in our, or our children’s, lifetimes.

The third reason why it is difficult to take this prospect seriously is the failure to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let’s look at both of them.

How to develop an idea of what the future of artificial intelligence might look like?

When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI, and transformative AI. 2 The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.

From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

The advantages and disadvantages of comparing machine and human intelligence

One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same range of intellectual tasks that we humans are capable of. 3 It is a machine that would be “able to learn to do anything that a human can do,” as Norvig and Russell put it in their textbook on AI. 4

Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that.

The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.

However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them.

Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The number of domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 1990s, AI systems reached superhuman levels more than a decade ago. In other games, like Go and complex strategy games, this has happened more recently. 5

These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways. 6

Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from the way machines process information, and as a consequence the output of thinking machines can be very alien to us.

Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do – produce an image of anything, in any style (here photorealistic), in mere seconds – but on the other hand they can fail in ways that no human would fail. 7 No human would make the mistake of drawing a horse with five legs. 8

Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it will be a misnomer to call such systems “human-level.”

AI-generated image of a horse: a brown horse running in a grassy field; the horse appears to have five legs. 9

Transformative artificial intelligence is defined by the impact this technology would have on the world

In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that the comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.

Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is “powerful enough to bring us into a new, qualitatively different future.” 10

In humanity’s history, there have been two cases of such major transformations, the agricultural and the industrial revolutions.

Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of humanity’s future.

Technologies that fundamentally change how a wide range of goods or services are produced are called ‘general-purpose technologies’. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. Based on the evidence and arguments presented in this series on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.

Timeline of the three transformative events in world history


A future of human-level or transformative AI?

The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change. 11

The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI. Depending on how this goes, this might mean that we will never see any machine intelligence for which human intelligence is a helpful comparison.

When and if AI systems might reach either of these levels is of course difficult to predict. In my companion article on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.

What is at stake as artificial intelligence becomes more powerful?

All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes – from the most negative to the most positive – is extraordinarily wide.

That the use of AI technology can cause harm is clear, because it is already happening.

AI systems can cause harm when people use them maliciously. For example, when they are used in politically-motivated disinformation campaigns or to enable mass surveillance. 12

But AI systems can also cause unintended harm, when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021. 13

As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.

The so-called alignment problem of AI is another extreme risk. This is the concern that nobody would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers. 14

How could an AI possibly escape human control and end up harming humans?

The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to do this. The risk is that we try to instruct the AI to pursue some specific goal – even a very worthwhile one – and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee all the possible harmful unintended consequences. The alignment problem arises because of “the impossibility of defining true human purposes correctly and completely,” as AI researcher Stuart Russell puts it. 15

Can’t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can’t achieve its goals once it has been turned off. As a consequence, the AI will pursue a very fundamental goal of ensuring that it won’t be switched off. This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does. 16

This risk – that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe – has been recognized right from the early days of AI research more than 70 years ago. 17 The very rapid development of AI in recent years has made a solution to this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article is not enough space to address all possible questions. Especially on the very worst risks of AI systems, and what we can do now to reduce them, I recommend reading the book The Alignment Problem by Brian Christian and Benjamin Hilton’s article ‘Preventing an AI-related catastrophe’ .

If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to the many positive developments in humanity’s history. If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity – and the destruction of the same.

How can we make sure that the development of AI goes well?

Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources – public funding, public attention, and public engagement.

Currently, almost all resources that are dedicated to AI aim to speed up the development of this technology. Efforts that aim to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 million and $50 million was spent on work to address the alignment problem. 18 Corporate AI investment in the same year was more than 2,000 times larger, totaling $153 billion.

This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments to increase the power and use of AI systems.

It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each individual person this neglect means that they have a good chance to actually make a positive difference, if they dedicate themselves to this problem now. And while the field of AI safety is small, it does provide good resources on what you can do concretely if you want to work on this problem.

I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.

When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.

If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology that determines how one of the most powerful technologies in human history – plausibly the most powerful – will transform our world.

If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up to these private companies to decide what our future — the future of humanity — will be.

With our work at Our World in Data we want to do our small part to enable a better informed public conversation on AI and the future we want to live in. You can find these resources on OurWorldinData.org/artificial-intelligence

Acknowledgements: I would like to thank my colleagues Daniel Bachler, Charlie Giattino, and Edouard Mathieu for their helpful comments to drafts of this essay.

This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any particular scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre or even silly.

Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See my article on this topic .

The fact that humans are capable of a range of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the Wikipedia entry on intelligence , for example, lists a number of definitions from various researchers and different disciplines). As a consequence there are also various definitions of ‘human-level AI’.

There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, or Full AI are sometimes synonymously used, and sometimes defined in similar, yet different ways. In specific discussions, it is necessary to define this concept more narrowly; for example, in studies on AI timelines researchers offer more precise definitions of what human-level AI refers to in their particular study.

Peter Norvig and Stuart Russell (2021) — Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.

The AI system AlphaGo , and its various successors, won against Go masters. The AI system Pluribus beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. (2022) – ‘Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning’. In Science 0, no. 0 (22 November 2022): eade9097. https://doi.org/10.1126/science.ade9097 .

This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence were a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes it much more difficult to compare across machine and human intelligence. Tests for AI systems therefore comprise a wide range of tasks. See for example Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt (2020) – Measuring Massive Multitask Language Understanding or the definition of what would qualify as artificial general intelligence in this Metaculus prediction.

An overview of how AI systems can fail can be found in Charles Choi – 7 Revealing Ways AIs Fail . It is also worth reading through the AIAAIC Repository which “details recent incidents and controversies driven by or relating to AI, algorithms, and automation."

I have taken this example from AI researcher François Chollet , who published it here .

Via François Chollet , who published it here . Based on Chollet’s comments it seems that this image was created by the AI system ‘Stable Diffusion’.

This quote is from Holden Karnofsky (2021) – AI Timelines: Where the Arguments, and the "Experts," Stand . For Holden Karnofsky’s earlier thinking on this conceptualization of AI see his 2016 article ‘Some Background on Our Views Regarding Advanced Artificial Intelligence’ .

Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI. In her widely cited report on AI timelines she defines it as a change in software technology that brings the growth rate of gross world product "to 20%-30% per year". Several other researchers define TAI in similar terms.
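
To get a feel for what such growth rates would mean, a back-of-the-envelope doubling-time calculation can help. The sketch below is purely illustrative: the 3% baseline is an assumption standing in for recent gross world product growth, not a figure taken from Cotra's report.

```python
# Back-of-the-envelope doubling times for the growth rates mentioned above.
# The 3% baseline is an illustrative assumption, not a figure from the report.
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years needed for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

for rate in (0.03, 0.20, 0.30):
    print(f"{rate:.0%} growth -> doubles in ~{doubling_time_years(rate):.1f} years")

# Approximate output:
# 3% growth -> doubles in ~23.4 years
# 20% growth -> doubles in ~3.8 years
# 30% growth -> doubles in ~2.6 years
```

In other words, an economy growing at 20-30% per year doubles roughly every three to four years, compared with more than two decades at a 3% growth rate.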

Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.

On the use of AI in politically-motivated disinformation campaigns see for example John Villasenor (November 2020) – How to deal with AI-enabled disinformation . More generally on this topic see Brundage and Avin et al. (2018) – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at maliciousaireport.com . A starting point for literature and reporting on mass surveillance by governments is the relevant Wikipedia entry .

See for example the Wikipedia entry on the ‘Dutch childcare benefits scandal’ and Melissa Heikkilä (2022) – ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’ , in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian’s book The Alignment Problem and the reports of the AI Now Institute .

Overviews are provided in Stuart Russell (2019) – Human Compatible (especially chapter 5) and Brian Christian’s 2020 book The Alignment Problem . Christian presents the thinking of many leading AI researchers from the earliest days up to now and presents an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms who work towards powerful AI – see OpenAI's article " Our approach to alignment research " from August 2022.

Stuart Russell (2019) – Human Compatible

A question that follows from this is, why build such a powerful AI in the first place?

The incentives are very high. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits there are also large incentives for those who develop it – the governments that can use it for their goals, the individuals who can use it to become more powerful and wealthy. Additionally, it is of scientific interest and might help us to understand our own mind and intelligence better. And lastly, even if we wanted to stop building powerful AIs, it is likely very hard to actually achieve it. It is very hard to coordinate across the whole world and agree to stop building more advanced AI – countries around the world would have to agree and then find ways to actually implement it.

In 1950 the computer science pioneer Alan Turing put it like this: “If a machine can think, it might think more intelligently than we do, and then where should we be? … [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort, for I believe that no such bounds can be set.” Alan. M. Turing (1950) – Computing Machinery and Intelligence , In Mind, Volume LIX, Issue 236, October 1950, Pages 433–460.

Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” quoted from Norbert Wiener (1960) – Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers. In Science.

In 1950 – the same year in which Turing published the cited article – Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”

Toby Ord – The Precipice . He makes this projection in footnote 55 of chapter 2. It is based on the 2017 estimate by Farquhar.


Is AI really a threat to human civilization?

Three UM-Dearborn experts say recent warnings that artificial intelligence may soon pose an ‘extinction-level’ threat are overblown. But we're right to be concerned about other ways AI is reshaping our world.


Even if you haven't been following the current conversations around artificial intelligence, it’s hard not to do a double take at the recent headlines warning that AI may soon represent a serious threat to human civilization . The stories revolve around recent statements made by several industry leaders, including one of AI’s “godfathers,” that the technology is now evolving so rapidly, it could represent an “extinction-level” threat sooner than expected — or at least trigger societal-scale disruptions on par with the recent global pandemic. Frankly, when you’re not an AI expert yourself, it can be hard to know what to make of such claims. For sure, today’s incarnations of AI can do impressive things , including many things humans could never do. And as we’ve detailed in the past, current AI-powered technologies have a problematic track record, whether it’s amplifying disinformation , perpetuating human racial biases or supercharging criminal scams. Maybe it’s not so hard to believe that the machines really are about to get the better of us.

Here at UM-Dearborn, we have a lot of faculty who work with artificial intelligence, so we thought it’d be interesting to put the question of whether AI really is on the verge of becoming a civilization-ender to three of the university’s leaders in this area. Professor Hafiz Malik, Associate Professor Samir Rawashdeh and Assistant Professor Birhanu Eshete were all in agreement that the assertion that artificial intelligence could represent an extinction-level threat any time soon is overblown. Rawashdeh quipped that folks seem to be forgetting that “we can always unplug them if they start misbehaving.” Eshete generally rates today’s version of artificial general intelligence as “cat-level.” Malik said that’s not giving cats enough credit. “I think when you see some of the impressive things AI can do, it can be easy to get the impression that the technology is further along than it is,” Malik says. AI may be able to beat the best human chess player or diagnose illnesses doctors can’t, but he notes those are “very task-specific things with strict constraints.” “General intelligence, the kind of intelligence that humans possess, where we can adapt to new circumstances, that’s not a kind of AI I would expect to see in my lifetime,” he says. “And as far as the pace of advancement, I would say it has been fairly steady, not accelerating rapidly.”

This points to an important distinction between types of artificial intelligence that’s often overlooked in the current discussions around AI. Today’s AI, which is mostly driven by machine learning, is task specific. It’s the technology that allows algorithms, when given enough exposure to, say, photos of cars or X-rays of a particular type of cancer, to learn the essential characteristics of those things. But artificial general intelligence, or AGI, is a completely different creature. It would mean that machines could, as humans do, adapt to an almost infinite array of new tasks without being specifically trained or programmed to do those tasks. Notably, while task-specific AI is becoming ubiquitous, AGI doesn’t exist yet. Some doubt that it’s even possible. If it is achievable, there are many who think that it wouldn’t look anything like today’s AI.

Still, Rawashdeh, Eshete and Malik all say it wouldn’t take something as advanced as AGI to cause big problems in the human world. Rawashdeh and Eshete both voiced concerns over the fact that the highest levels of artificial intelligence are basically controlled by a handful of very large, powerful companies, which are developing the technology for commercial purposes, not to benefit human society. “I think the real risk is we could very quickly become dependent on the technology in a huge range of sectors, and then we end up with systems that perpetuate inequality,” Rawashdeh says. “And at that point, you could imagine people saying, ‘Well, you can’t just turn it off because it would crash the economy.’” Like the justification used to bailout misbehaving banks during financial crises, AI could be judged too big to fail.

Disinformation is the other obvious area where we’re already seeing AI’s disruptive power. Disinformation, of course, is likely as old as human civilization itself. But Malik, who’s an expert in deepfakes , says AI has supercharged its impacts. “The polarization which we’re seeing around the world, not just in the U.S., has a lot to do with social media platforms, which are driven by algorithms, creating echo chambers where people end up with very distorted views of reality,” Malik says. Deepfakes, which he says are “getting better and better every day,” have only made people more vulnerable. In fact, scammers are now putting deepfake technology to use in even more clever ways. Malik says criminals can now use AI to synthesize voices in real time , complete with an array of accents, powering convincing phone scams designed to scare people into draining their bank accounts. Whether it’s social media disinformation or a criminal scam, Malik says the result is a general erosion of trust in information and democratic institutions. And if we’re looking for things that could legitimately contribute to an unraveling of human society, this loss of trust seems like a good place to start.

Concerns over these problematic sides of AI technology have also sparked conversations about how to protect ourselves, and Eshete and Malik say the European Union has been a leader when it comes to regulation. Just this month, the European Parliament passed a draft version of the EU AI Act , which, among other things, would seriously limit the use of facial recognition software and require creators of generative AI systems like ChatGPT to be more transparent about the data used to train their programs. Here in the U.S., Eshete notes the White House has also released the AI Bill of Rights to “help guide the design, use and deployment of automated systems to protect the American Public.” Eshete says penning good regulations is complicated by the fact that there still is no consensus among AI experts on whether AGI systems have anything resembling human capabilities, or even which risks we should be most worried about. He notes it’s really easy to get distracted by the frightening, future hypothetical threats of AI, like creating a lethal bio-weapon. “But there are all sorts of ways AI is already impacting people’s lives. So perhaps we should focus first on what is happening right now. And then once we’ve done that, we’ll have time to look at what’s coming.”

Eshete, Rawashdeh and Malik all say how much AI ultimately ends up reshaping our world, and whether its impacts will be beneficial or harmful, is largely up to us. Could we end up in a place where AI really does become a civilization-ender? Possibly. But if we do, we likely won’t have the machines to blame. 

Story by Lou Blouin




Artificial Intelligence and the Loss of Humanity

The term “artificial intelligence,” or AI, has become a buzzword in recent years. Optimists see AI as the panacea to society’s most fundamental problems, from crime to corruption to inequality, while pessimists fear that AI will overtake human intelligence and crown itself king of the world. Underlying these two seemingly antithetical views is the assumption that AI is better and smarter than humanity and will ultimately replace humanity in making decisions.

It is easy to buy into the hype of omnipotent artificial intelligence these days, as venture capitalists dump billions of dollars into tech start-ups and government technocrats boast of how AI helps them streamline municipal governance . But the hype is just hype: AI is simply not as smart as we think. The true threat of AI to humanity lies not in the power of AI itself but in the ways people are already beginning to use it to chip away at our humanity.

AI outperforms humans, but only in low-level tasks.

Artificial intelligence is a field in computer science that seeks to have computers perform certain tasks by simulating human intelligence. Although the founding fathers of AI in the 1950s and 1960s experimented with manually codifying knowledge into computer systems, most of today’s AI application is carried out via a statistical approach through machine learning, thanks to the proliferation of big data and computational power in recent years. However, today’s AI is still limited to the performance of specialized tasks, such as classifying images, recognizing patterns and generating sentences.


Although a specialized AI might outperform humans in its specific function, it does not understand the logic and principles of its actions. An AI that classifies images, for example, might label images of cats and dogs more accurately than a human, but it never knows how a cat is similar to and different from a dog. Similarly, a natural language processing (NLP) AI can train a model that projects English words onto vectors, but it does not comprehend the etymology and context of each individual word. AI performs tasks mechanically without understanding the content of the tasks, which means that it is certainly not able to outsmart its human masters in a dystopian manner and will not reach such a level for a long time, if ever.
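
To illustrate what "projecting words onto vectors" means in practice, here is a minimal sketch. The three-dimensional vectors are invented toy values rather than the output of a real trained NLP model, and real embeddings typically have hundreds of dimensions; the point is only that "similarity" here is geometry over usage statistics, not understanding.

```python
# Toy illustration of words as vectors. The numbers below are made up,
# not the output of a trained embedding model.
import numpy as np

embeddings = {
    "cat":  np.array([0.8, 0.1, 0.3]),
    "dog":  np.array([0.7, 0.2, 0.4]),
    "bank": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity here is pure geometry: the angle between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))   # high: vectors point the same way
print(cosine_similarity(embeddings["cat"], embeddings["bank"]))  # lower: vectors diverge
```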

AI does not dehumanize humans — humans do.

AI does not understand humanity, but the epistemological wall between AI and humanity is further complicated by the fact that humans do not understand AI, either. A typical AI model easily contains hundreds of thousands of parameters, whose weights are fine-tuned according to some mathematical principles in order to minimize “loss,” a rough estimate of how wrong the model is. The design of the loss function and its minimization process are often more art than science. We do not know what the weights in the model mean or how the model predicts one result rather than another. Without an explainable framework, decision-making driven by AI is a black box , unaccountable and even inhumane.
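
As a rough illustration of what "fine-tuning weights to minimize loss" looks like, consider a toy model with a single weight; the data and learning rate below are invented for illustration. Real systems repeat this kind of update over millions of weights, which is exactly why their inner workings become so hard to interpret.

```python
# Minimal sketch of adjusting a weight to minimize a squared-error loss,
# assuming a toy one-parameter model y = w * x. Purely illustrative.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # data generated by the "true" rule y = 2x

w = 0.0                        # a single weight, initialised arbitrarily
learning_rate = 0.01

for step in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad  # gradient-descent update: nudge w to reduce the loss

print(round(w, 3))             # approaches 2.0, the weight that minimises the loss
```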

This is more than just a theoretical concern. This year in China, local authorities rolled out the so-called “health code,” a QR code assigned to each individual using an AI-powered risk assessment algorithm indicating their risk of contracting and spreading COVID-19. There have been numerous pieces of news coverage about citizens who found their health codes suddenly turning from green (low-risk) to red (high risk) for no reason. They became “digital refugees” as they were immediately banned from entering public venues, including grocery stores, which require green codes. Nobody knows how the risk assessment algorithm works under the hood, yet, in this trying time of coronavirus, it is determining people’s day-to-day lives.

AI applications can intervene in human agency.

Artificial intelligence is also transforming the medical industry. Predictive algorithms are now powering brain-computer interfaces (BCIs) that can read signals from the brain and even write in signals if necessary. For example, a BCI can identify a seizure and act to suppress the symptom, a potentially life-saving application of AI. But BCIs also create problems concerning agency. Who is controlling one’s brain — the user or the machine?


One need not plug their brain into some electronic device to face this issue of agency. The newsfeeds of our social media platforms constantly use artificial intelligence to push us content based on patterns in our views, likes, mouse movements and the number of seconds we spend scrolling through a page. We are passive consumers in a deluge of information tailored to our tastes, no longer having to actively reach out to find information — because that information finds us.

AI knows nothing about culture and values.

Feeding an AI system requires data, the representation of information. Some information, such as gender, age and temperature, can be easily coded and quantified. However, there is no way to uniformly quantify complex emotions, beliefs, cultures, norms and values. Because AI systems cannot process these concepts, the best they can do is to seek to maximize benefits and minimize losses for people according to mathematical principles. This utilitarian logic, though, often contravenes what we would consider noble from a moral standpoint — prioritizing the weak over the strong, safeguarding the rights of the minority despite giving up greater overall welfare and seeking truth and justice rather than telling lies.

The fact that AI does not understand culture or values does not imply that AI is value-neutral. Rather, any AI designed by humans is implicitly value-laden. It is consciously or unconsciously imbued with the belief system of its designer. Biases in AI can come from the representativeness of the historical data, the ways in which data scientists clean and interpret the data, which categorizing buckets the model is designed to output, the choice of loss function and other design features. A more aggressive company culture, for example, might favor maximizing recall, the proportion of actual positives identified as positive, while a more prudent culture would encourage maximizing precision, the proportion of labelled positives that are actually positive. While such a distinction might seem trivial, in a medical setting, it can become an issue of life and death: do we try to distribute as much of a treatment as possible despite its side effects, or do we act more prudently to limit the distribution of the treatment to minimize side effects, even if many people will never get the treatment? Within a single imperfect AI model, these two goals pull against each other: raising one typically lowers the other. People have to make a choice when designing an AI system, and the choice they make will inevitably reflect the values of the designers.
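
The trade-off described above can be made concrete with a small sketch; the labels and predictions below are made up for illustration and are not drawn from any real medical data.

```python
# Made-up illustration of the precision/recall trade-off described above.
def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # labelled positives that are correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # actual positives that were found
    return precision, recall

y_true     = [1, 1, 1, 0, 0, 0]   # who actually needs the treatment
aggressive = [1, 1, 1, 1, 1, 0]   # flags almost everyone: finds all cases, many false alarms
prudent    = [1, 0, 0, 0, 0, 0]   # flags only the surest case: no false alarms, misses cases

print(precision_recall(y_true, aggressive))  # (0.6, 1.0)  -> high recall, lower precision
print(precision_recall(y_true, prudent))     # (1.0, 0.33) -> high precision, lower recall
```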

Take responsibility, now.

AI may or may not outsmart human beings one day — we simply do not know. What we do know is that AI is already changing power dynamics and interpersonal relations today. Government institutions and corporations run the risk of treating atomized individuals as miniscule data points to be aggregated and tapped by AI programs, devoid of personal idiosyncrasies, specialized needs, or unconditional moral worth. This dehumanization is further amplified by the winner-takes-all logic of AI platform economies that creates mighty monopolies, resulting in a situation in which even the smallest decisions made by these companies have the power to erode human agency and autonomy. In order to mitigate the side effects of AI applications, academia, civil society, regulators and corporations must join forces in ensuring that human-centric AI will empower humanity and make our world a better place.


Xiantao Wang

Xiantao studies Sociology and Data Science at UC Berkeley. He writes on Hong Kong, U.S.-China relations, and technology.




The threat, hype, and promise of artificial intelligence in education

  • Open access
  • Published: 10 November 2022
  • Volume 2, article number 22 (2022)


  • Niklas Humble, ORCID: orcid.org/0000-0002-5791-4765
  • Peter Mozelius, ORCID: orcid.org/0000-0003-1984-7917


The idea of building intelligent machines has been around for centuries, with a new wave of promising artificial intelligence (AI) in the twenty-first century. Artificial Intelligence in Education (AIED) is a younger phenomenon that has created hype and promises, but also been seen as a threat by critical voices. There have been rich discussions on over-optimism and hype in contemporary AI research. Less has been written about the hyped expectations on AIED and its potential to transform current education. There is huge potential for efficiency and cost reduction, but there are also questions about the quality of education and the role of the teacher. The aim of the study is to identify potential aspects of threat, hype and promise in artificial intelligence for education. A scoping literature review was conducted to gather relevant state-of-the-art research in the field of AIED. Main keywords used in the literature search were: artificial intelligence, artificial intelligence in education, AI, AIED, teacher perspective, education, and teacher. Data were analysed with the SWOT framework as the theoretical lens for a thematic analysis. The study identifies a wide variety of strengths, weaknesses, opportunities, and threats for artificial intelligence in education. Findings suggest that there are several important questions to discuss and address in future research, such as: What should the role of the teacher be in education with AI? How does AI align with pedagogical goals and beliefs? And how to handle the potential leak and misuse of user data when AIED systems are developed by for-profit organisations?


1 Introduction

To build machines for the automation of what we today call artificial intelligence (AI) is an idea that has been around for a long time. As early as the fourteenth century, Ramon Llull described his concept Ars Magna, a concept for implementing thought and reasoning processes in an intelligent machine. Ars Magna was built around the idea of combining logical systems to evaluate if postulates were true or false, something that could be seen as the very origin of what we today call symbolic AI [ 19 ]. These ideas later inspired scientists such as Giordano Bruno, Athanasius Kircher, Agrippa of Nettesheim and Gottfried Wilhelm Leibniz [ 32 ]. What we today consider to be the modern computational model for intelligent reasoning was presented by Alan Turing [ 70 ], a machine model that is the foundation of computer science and the idea of using computers to solve complex problems.

The emerging field of Artificial Intelligence in Education (AIED) is a younger one, with its roots in the 1970s [ 60 , 65 ]. In the twenty-first century, AI has frequently and in various ways been suggested to enhance educational activities and processes [ 22 , 36 ]. AIED started out as a playground and research field for computer scientists [ 15 ], but gradually made a strong impact on education and became a cross-disciplinary phenomenon [ 22 , 36 ]. Some examples of how AIED might facilitate learning and teaching activities are the potential to support student collaboration, and to enable mass individualisation in large student groups [ 43 ]. A prioritised research area in AIED has been the attempt to reach the same quality and efficiency as in traditional one-to-one human tutoring in technology-enhanced learning environments [ 72 ].

This article updates and extends a literature study that has previously been presented by the authors [ 28 ]. The authors have taken courses on AI at university level and participated in specific research seminars on AIED. Outcomes from one of the research seminars have been published in Hrastinski et al. [ 27 ]. The aim of the study is to identify potential aspects of threat, hype and promise in artificial intelligence for education. With the SWOT model as the analytic lens, the research question to answer was: Which are the strengths, weaknesses, opportunities, and threats of the ongoing implementation of artificial intelligence in education?

2 Theoretical background

An important concept in this study is 'hype', which has been based on the descriptions provided by the Gartner Hype Curve introduced in 1995 [ 58 ]. The hype curve was designed to support the decision of when to invest in a novel technology and consists of five stages: technology trigger, peak of inflated expectations, trough of disillusionment, slope of enlightenment, and plateau of productivity [ 58 ]. Fenn [ 17 ] describes these stages as:

Technology trigger: Industry interest and significant press are generated through, for example, a public demonstration, a breakthrough, or a product launch.

Peak of inflated expectations: The limits of the technology are pushed with unrealistic projections and overenthusiasm, as more failures than successes are achieved.

Trough of disillusionment: The technology becomes unfashionable and interest wanes as the overinflated expectations cannot be met.

Slope of enlightenment: A true understanding of the risks, applications, and benefits of the technology is reached through solid hard work and focused experimentation by a diverse range of organisations.

Plateau of productivity: The reduced risk of the technology and the demonstrated benefits make more organisations comfortable, and a phase of growing adoption and acceptance begins.

2.1 Artificial intelligence

When Turing [ 71 ] extended his ideas in 'Computing Machinery and Intelligence', he created a bedrock for AI, and the start of what is called the first wave, or the first spring, of AI. Turing's ideas in the 1950s are today seen as the foundation for both computer science and AI, despite the fact that Turing never used the term AI [ 11 ]. The following hypes and declines of AI have been described as springs and winters of AI [ 47 ], or as the three waves of AI [ 35 ]. Today, in the third wave of AI, there are high expectations of a future in which AI systems act as partners to humans rather than as tools [ 35 ]. At the same time, the new AI spring shows rapid progress in sub-symbolic AI, which also brings a number of caveats to consider when deciding whether AI systems should be classified as beneficial and human compatible [ 47 ].

Formalising logic and intelligent reasoning, and refining and reinforcing AI, has been an ongoing process since Turing designed the first computer chess programs in the 1950s. As suggested some years later by Claude Shannon, computer chess is an interesting testbed for computer science and AI [ 34 ]. A series of successful computer programs have challenged chess grandmasters, and today chess engines such as Stockfish and Komodo outplay the strongest grandmasters. With the use of machine learning and deep learning technology, computer software such as AlphaGo has also mastered the complex game of Go [ 66 ]. However, these strong and specialised AI systems have not reached a lobster’s level of social skills and cannot handle general real-world problems. This comparison, presented by John Searle [ 62 ], is still valid today. Searle was also the researcher who coined the terms weak AI and strong AI.

Today, weak AI and strong AI have been renamed narrow AI and artificial general intelligence (AGI). The term AGI was coined and spread by the AI researchers Shane Legg, Mark Gubrud and Ben Goertzel, with AGI defined as a general artificial intelligence on the level of human intelligence [ 67 ]. Strong AI or AGI can also be illustrated by the Turing test, where true AGI is accomplished when it is no longer possible to tell the difference between a natural language conversation between two humans and one between a human and an AI system [ 71 ]. Narrow AI can be exemplified by specialised AI applications communicating via Bluetooth or other protocols. More impressive examples of narrow AI at a superhuman level are when the heuristics-based chess engine Deep Fritz beat the grandmaster Vladimir Kramnik [ 25 ], or when the machine learning based AlphaGo defeated the Korean Go grandmaster Lee Sedol [ 77 ].

Machine learning with the use of deep neural networks is the AI field that has made fast progress in the third wave of AI. A deep neural network is a neural network with more than one hidden layer between the input layer and the output layer, where complex tasks can be divided between the different layers [ 47 ]. However, as pointed out by Korteling et al. [ 37 ], all developed AI systems must be classified as weak or narrow AI, since they all lack the general problem-solving skills that exist in biological intelligence. As an example, the answer to the question of whether AI systems might replace human doctors in the treatment of patients is still a definite no. There are today no deep learning algorithms or combinations of heuristics that can understand human emotions at the deeper level necessary for treating patients with severe diseases [ 77 ].
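
As an illustration of the layer structure described above, the following sketch builds a tiny network with two hidden layers; the layer sizes and random weights are arbitrary placeholders, and a real network would learn its weights from data.

```python
# Minimal sketch of a deep network: an input layer, two hidden layers, and an
# output layer. Sizes and weights are arbitrary placeholders, not learned.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# layer sizes: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=(8, 8))
w3 = rng.normal(size=(8, 2))

def forward(x):
    h1 = relu(x @ w1)   # first hidden layer
    h2 = relu(h1 @ w2)  # second hidden layer ("deep" means more than one hidden layer)
    return h2 @ w3      # output layer

print(forward(np.ones(4)))  # a 2-dimensional output for a 4-dimensional input
```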

2.2 SWOT

A common part of quality control and investigation of strategic planning in an enterprise system is to carry out an analysis of the potential strengths, weaknesses, opportunities, and threats (SWOT) [ 51 ]. A SWOT analysis can be a powerful tool for identifying potential influences of a system on an organisation, and for identifying the skills, support, attitudes, knowledge, and abilities that are needed in an organisation [ 39 ]. However, there is also critique of how SWOT analyses are interpreted. SWOT analyses are typically used for identifying and naming strengths, weaknesses, opportunities, and threats and do not include the impact of individual factors, or the potential impact, on desired outcomes [ 39 ]. Furthermore, traditional SWOT analyses are mainly concerned with the strengths, weaknesses, opportunities, and threats of a specific technology or implementation and therefore provide little guidance for alternative decisions [ 39 ]. To enhance the SWOT analysis conducted in this study, the identified factors of strengths, weaknesses, opportunities, and threats are discussed in the “ Results and discussion ” section to address these limitations of the SWOT analysis.

The conventional approach in conducting a SWOT analysis is to generate SWOT categorisation in a 2 × 2 matrix, containing internal factors of strengths and weaknesses and external factors of opportunities and threats [ 39 ]. Typically, there are two questions being asked related to internal and external factors when conducting a SWOT analysis: Which are the benefits and costs? And are these occurring inside or outside of the organisation? [ 39 ] In this study, this has been translated to:

Which are the benefits (strengths and opportunities) and costs (weaknesses and threats) of artificial intelligence in education? Are these benefits and costs internal (strengths and weaknesses within the AI-technology) or external (opportunities and threats for education)?

In previous research, SWOT analyses have been used for examining the strengths, weaknesses, opportunities, and threats of reforming higher education with artificial intelligence, machine learning, and extended reality; with a focus on how these techniques can be used to support the development of a new strategy for higher education [ 31 ]. The SWOT analysis approach has further been used for investigating the strengths, weaknesses, opportunities, and threats in strategic documents for the development of AI. This was conducted with specific attention for aspects concerning science, technology, engineering, and mathematics (STEM) education [ 2 ]. SWOT analysis has also been used for examining whether implementation of artificial intelligence in higher education would hinder or support educational processes and how teachers affected by the implementation would perceive the new technologies [ 9 , 41 ].

3 Method

This study was conducted as a scoping literature review to provide an overview of a selected topic, as described by Munn et al. [ 49 ], with AIED as the selected topic. A scoping review can be an appropriate approach to use for studies with an aim of clarifying concepts and to identify knowledge gaps. Contrary to the systematic review, the aim of a scoping review is not to synthesise the results related to a specific research question but rather to map and provide an overview [ 49 ]. Furthermore, a scoping review is a method for finding key concepts in specific research fields, and to identify the main sources for further research [ 49 ]. Therefore, scoping reviews can serve as a precursor to further research and systematic reviews [ 49 ].

3.1 Data collection

The main keywords in the literature search were: 'artificial intelligence', 'artificial intelligence in education', AI, AIED, education, teacher, and 'teacher perspective'. These keywords were combined with the use of the Boolean operators 'AND' and 'OR'.

Query 1: "artificial intelligence in education" OR "AIED"
Query 2: ("artificial intelligence" OR "AI") AND ("education" OR "teacher" OR "teacher perspective")

Google Scholar was used as the search engine to find research papers that had a potential to answer the aim and research question of the study. Results were filtered to only include papers with a publication year no older than 2015. Backward searches were also used to include papers of interest to the study’s aim and research question. Not all papers were directly accessible through Google Scholar; these were retrieved from the aggregation of research databases at the Mid Sweden University library.

The different combinations of keywords used in the searches resulted in many hits. However, the potential to contribute to answering the research question was often limited. Many papers did not primarily study AI concepts that could be classified as involving some degree of intelligence, but rather automated systems. Similarly, the categories in the SWOT framework served as a filter for the papers to be included. A total of 20 papers were selected in the first phase of this study, conducted in 2019 [ 28 ]. In the second phase, conducted in 2022, the list of included papers was revised and expanded to 41 papers (Table 1 ). The new list includes publications between 2020 and 2022 to contribute to the description of a research field that has grown rapidly during recent years. All papers that were considered for inclusion have been discussed between the authors to ensure contribution to the results of the study.

3.2 Data analysis

This study used thematic analysis to identify themes in the selected and included papers [46]. The analysis was inspired by the six phases for conducting a thematic analysis presented by Braun and Clarke [8]: (1) familiarizing yourself with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing potential themes, (5) defining and naming themes, and (6) producing the report. However, given the use of the SWOT framework (Fig. 1) and a deductive-inductive approach, the analysis started with an additional initial phase: defining the main categories for the deductive part of the coding, namely strengths, weaknesses, opportunities, and threats (the SWOT framework). A more detailed description of the SWOT framework is provided in the "SWOT" section.

Figure 1: The SWOT analysis framework

The deductive approach was used as a 'top-down approach' [8] to order data and identified themes within the SWOT framework. The inductive approach was used as a 'bottom-up approach' [8] to group codes and extracts from the included papers into potential themes based on similarities and differences in the data. It should further be noted that the thematic analysis presented in this study was conducted at two points in time. The first analysis was conducted during the autumn of 2018 and spring of 2019 and was presented as a shorter conference paper in 2019 [28]. The second analysis was conducted during 2022 and was used to revise and extend the previous analysis. A detailed description of the phases of analysis is provided below.

In the first phase of analysis, familiarisation with the data was achieved by thoroughly reading potentially relevant papers that could support answering the study's aim and research question and add to the SWOT categories for analysis. In parallel, the second phase of analysis was carried out deductively by collecting extracts from the papers under the relevant SWOT category in a text document. In the third and fourth phases, these extracts were grouped and re-grouped inductively within the SWOT categories in search of potential themes. The themes were discussed and revised for consistency by both authors. In the fifth and sixth phases, the authors agreed on the identified themes, named them, and wrote up the report. This concluded the analysis conducted for the first conference paper, which identified 17 themes divided between the SWOT categories [28].

For the second iteration of the analysis, conducted in 2022 and constituting the main contribution of this study, the six phases were repeated. In the first phase, both previously included papers and new candidate papers were considered and re-considered for inclusion. In the second phase, both old and new extracts were collected under the SWOT categories. In the third and fourth phases, extracts were once again grouped and re-grouped within the SWOT categories in search of themes that could be the same as before, revised, or new. In the fifth and sixth phases, the authors once again agreed on the themes for the study, named them, and wrote up the report. This second iteration of analysis resulted in a revised and extended list of included papers (41) but a more condensed set of themes (10), which are divided between the SWOT categories and presented in the "Results and discussion" section.
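
The authors describe collecting extracts manually in a text document, so any code here is purely illustrative. Still, the deductive step (filing extracts under the four predefined SWOT categories before the inductive grouping) can be sketched as follows; the extract texts and category assignments are hypothetical examples, not data from the study.

```python
# Illustrative sketch only: the study's coding was done manually in a text
# document. This shows the deductive step of filing extracts under the four
# predefined SWOT categories. The example extracts are invented.
from collections import defaultdict

SWOT_CATEGORIES = ("strength", "weakness", "opportunity", "threat")

def code_extracts(extracts):
    """Group (category, text) pairs under their SWOT category."""
    coded = defaultdict(list)
    for category, text in extracts:
        if category not in SWOT_CATEGORIES:
            raise ValueError(f"Unknown SWOT category: {category}")
        coded[category].append(text)
    return coded

example_extracts = [
    ("strength", "Step-based tutoring works well for well-defined STEM problems."),
    ("weakness", "Many tutoring systems rely on simpler techniques than intended."),
    ("opportunity", "AI could relieve teachers of time-consuming routine tasks."),
    ("threat", "Poorly adapted systems may compromise student privacy."),
]

for category, texts in code_extracts(example_extracts).items():
    print(f"{category}: {len(texts)} extract(s)")
```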

4 Results and discussion

In this section the results from the study are presented and discussed. Results from the literature study are presented under the sub-headings below. Themes in sub-headings 4.1 and 4.2 address internal strengths and weaknesses of artificial intelligence in education, that is, the strengths and weaknesses of the systems themselves. Themes in sub-headings 4.3 and 4.4 address external opportunities and threats, that is, the opportunities and threats that artificial intelligence brings to education. Further, themes addressed in sub-headings 4.1 and 4.3 can be viewed as aspects of artificial intelligence that could be helpful for achieving the objectives of implementing artificial intelligence in education, while themes addressed in sub-headings 4.2 and 4.4 can be viewed as aspects that could be harmful for achieving those objectives (Table 2).

4.1 Strengths

A belief surrounding the development of AI is that it will make computer-aided teaching and learning more efficient [42]. A common field for addressing teaching and learning with computer systems is science, technology, engineering, and mathematics (STEM). Previous research shows that artificial intelligence in education often focuses on domain knowledge in STEM and computer science [61, 76]. With a step-based approach to well-defined problems in STEM, AIED has been successful in supporting the teaching and learning of domain knowledge [61]. Examples of where AIED and intelligent tutoring systems have been applied in STEM include computer science education [6] and computer programming education [14].

There are many sub-fields and branches of artificial intelligence, and one of these is natural language processing (NLP) [3, 13]. NLP can be described as computational techniques intended for producing, understanding, and learning human language [24] and is commonly applied in the context of education [13]. Although the use of NLP for educational purposes is often irregular and complex [56], previous research has highlighted several use cases. One example is supporting the development of students' social, language, and work skills [3]. Speech generation and translation of text can be performed by software-controlled AI assistants with NLP algorithms [20]. NLP can also support students in learning and work-life training by recording speech, providing feedback, and ordering and suggesting steps of action [74]. Further, previous research has suggested using NLP in combination with machine learning to help prepare texts of appropriate difficulty for reading comprehension [5].
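
As a minimal, hypothetical sketch of that last idea, text difficulty could be estimated from simple surface features and a standard classifier; the sample texts, labels, and features below are invented for illustration and represent only one of many possible designs reported in the literature [5].

```python
# Minimal sketch: classify text difficulty from simple surface features,
# loosely in the spirit of NLP + machine learning approaches to readability.
# The sample texts and "easy"/"hard" labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

def surface_features(text):
    """Very crude readability features: average word length and words per sentence."""
    words = text.split()
    avg_word_len = sum(len(w) for w in words) / len(words)
    sentence_count = max(text.count("."), 1)
    return [avg_word_len, len(words) / sentence_count]

train_texts = [
    "The cat sat on the mat. It was warm.",
    "Dogs like to run. They play in the park.",
    "Photosynthesis converts electromagnetic radiation into chemical potential energy.",
    "Thermodynamic equilibrium presupposes stationarity of macroscopic observables.",
]
train_labels = ["easy", "easy", "hard", "hard"]

model = LogisticRegression()
model.fit([surface_features(t) for t in train_texts], train_labels)

print(model.predict([surface_features("The sun is hot. We like the sun.")]))
```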

Systems used in educational contexts can be described as an interconnected ecosystem, where a change in one system can affect the whole [44]. It is therefore important that the implementation of AI systems in education builds on supporting educational ecosystems [44, 54]. A benefit of applying artificial intelligence in education as an ecosystem is that each AI system can target a specific purpose, building on highly specialised research, while keeping wide applicability through connectivity to the other systems in the ecosystem [55]. Specialised AI systems that could be integrated in such ecosystems are intelligent tutoring systems, which can personalise tutoring, suggest learning paths, engage students, provide feedback, and improve learning experiences [18, 21, 38, 76].
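
To make the "suggest learning paths" idea concrete, a rule-based sketch is given below: pick the next exercise for the skill with the lowest estimated mastery. The skills, exercises, mastery scores, and threshold are all hypothetical; real intelligent tutoring systems typically use far richer learner models.

```python
# Hypothetical sketch of a personalised learning path: choose an exercise
# for the weakest skill that is not yet mastered. Skills, exercises, mastery
# values, and the threshold are invented for illustration.
MASTERY_THRESHOLD = 0.8

exercises = {
    "fractions": ["simplify a fraction", "add two fractions"],
    "equations": ["solve x + 3 = 7", "solve 2x - 1 = 5"],
    "geometry": ["compute a triangle area", "compute a circle area"],
}

def next_exercise(mastery):
    """Return (skill, exercise) for the weakest unmastered skill, or None."""
    open_skills = {s: m for s, m in mastery.items() if m < MASTERY_THRESHOLD}
    if not open_skills:
        return None  # every skill is above the mastery threshold
    weakest = min(open_skills, key=open_skills.get)
    return weakest, exercises[weakest][0]

student_mastery = {"fractions": 0.9, "equations": 0.4, "geometry": 0.7}
print(next_exercise(student_mastery))  # ('equations', 'solve x + 3 = 7')
```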

4.2 Weaknesses

Artificial intelligence techniques can be applied in educational contexts for adaptive tutoring of students in the form of intelligent tutoring systems (ITS) [16]. However, ITS often rely on AI techniques that are much simpler than initially intended [4, 16]. As part of this critique, Baker [4] labelled many ITS 'stupid tutoring systems' that need to be amplified through collaboration with human intelligence [4, 16, 21]. Other systems that draw on AI techniques, such as intelligent decision support systems (IDSS), have been surrounded by similar AI hype, which has unfortunately fallen short of expectations [26].

With the rise of artificial intelligence in education, the systems implemented in educational contexts are and will be built by people. Algorithms developed to process data are created by programmers, with potential biases in the code [53, 59]. Moreover, the training data and models used for machine learning are corrected and evaluated by humans [50]. Since there are no definite guidelines for ethics in either AI or AI applications in education [53], the weaknesses of AI systems may have real consequences for education, given the increased attention that AI researchers, product developers, venture capitalists, and advocates for educational technology are paying to the educational market [50]. The consequences of potential biases in AI systems for education are further amplified by marketing efforts to present AI algorithms to the public as value-neutral and objective [1]. The use of AI systems and technological solutions in education raises the question of "who sets the agenda for teaching and learning" [59].

4.3 Opportunities

Luckin et al. [43] state that they do not foresee teachers being replaced by AI systems in future education. AI is instead highlighted as a possibility for enhancing education and assisting human teachers in making high-quality education widely accessible [10]. Previous research suggests that AI systems should focus on assisting with concrete pedagogical tasks that a human teacher would perceive as exhausting and time-consuming, for example assisting in constructing grade responses [52]. Another possibility with AI as an assistant to teachers is that it could free teachers to focus more on supporting students' development into independent collaborative thinkers, instead of possessing and transmitting relevant knowledge [61]. The AI system could record and analyse students' work and report back to the teacher with suggestions of which students might need extra attention, in what is sometimes referred to as cobots (co-working robots) [69].
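
A minimal sketch of that cobot-style reporting loop is shown below: recorded work is aggregated per student, and those falling below a threshold are flagged for the teacher. The scores, threshold, and reporting format are hypothetical.

```python
# Hypothetical sketch of the cobot reporting idea: aggregate recorded student
# work and flag students who might need extra attention. Scores and the
# threshold are invented for illustration.
def flag_for_attention(submissions, threshold=0.5):
    """Return (student, average score) pairs below the threshold, weakest first."""
    report = []
    for student, scores in submissions.items():
        average = sum(scores) / len(scores)
        if average < threshold:
            report.append((student, round(average, 2)))
    return sorted(report, key=lambda item: item[1])

recorded_work = {
    "student_a": [0.90, 0.80, 0.85],
    "student_b": [0.30, 0.45, 0.40],
    "student_c": [0.60, 0.35, 0.50],
}

print(flag_for_attention(recorded_work))  # [('student_b', 0.38), ('student_c', 0.48)]
```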

Similar to the idea of using AI to assist human teachers is the idea of using AI to assist students. One suggested application is to use AI to connect and engage students with other students and with their teachers to increase the efficiency of learning [20, 40]. Further, AI can be used to adapt learning materials for students with special learning needs and to provide timely support for those students [45, 53]. There are also several examples of studies where the roles of students as tutees are shifted to tutors, and the students learn by coding, teaching, or instructing an artificial tutee [48, 57, 63, 64]. However, previous research suggests that personalised learning with AI should not be superficial; a crucial aspect is that the AI system provides depth in the customisation [73].

Related to the idea of AI as an assistant to both teachers and students is the idea of using AI to individualise education on a massive scale. Previous research has suggested the potential of using AI systems to put learners at the centre and tailor learning to their needs and preferences [20], and has proposed that future research investigate personalising education and promoting learning precision with AI techniques such as natural language processing [13]. AI systems have also been suggested as effective tools for supporting students with neurodevelopmental disorders, addressing challenges in learning and personalising education [7]. AI is further anticipated to enhance education by supporting teachers and making high-quality education more widely accessible [10].

4.4 Threats

A potential concern for educational personnel is the "extinction risk fears" raised by predictions for future AI technologies [75] and the real and psychological effects these can have on those affected [30]. Although AI is not advanced enough to replace teachers, the technology has shown itself capable of replacing other personnel roles in education, such as teacher assistants and administrators [59]. A potential change of the teacher role with AI in education can be exemplified by the stereotypical learning design of MOOCs, with low-level multiple-choice questions and the teacher taking on the role of a content developer [61]. Previous research has further warned that widespread application of AI techniques in education could harm the relationship between teachers and students and hinder students' development into independent learners, capable without online platforms and artificial teaching assistants [75]. Another potential threat is that AI techniques meddle with what students are expected to learn (the standardisation) through 'too much' individualisation of education [75].

As AI researchers, product developers, venture capitalists, and advocates for educational technology pay more attention to the educational market [50], the potential negative consequences of poorly implemented or poorly adapted AI systems in education increase. Many schools and teachers still lack knowledge and are not prepared for the integration of AI in education, which increases the risk of technology abuse when AI is implemented [33]. This could, for example, have negative consequences for privacy and surveillance, and personal data could be leaked and used to influence individuals within the educational system [1, 53, 59]. Previous research has pointed out the importance of adopting ethical frameworks for the use and development of artificial intelligence in education [12, 29, 53]. Even if an ethical framework is developed and applied for AI in education, it will still need to be continuously discussed and updated because of the fast development of AI techniques and their potential for wide application [53].

5 Concluding remarks

In the literature study it has been noted that there is an ambiguity in many of the papers concerning AIED, particularly regarding the concept of 'intelligence'. For many of the ideas and technologies that have been suggested and discussed, it can be questioned whether they contain any level of intelligence. This ambiguity in the concept of AI might be explained by AIED being in an emerging stage of hype [17, 58], with over-optimism regarding its potential to transform existing education. The AIED hype could be driven by the hype around progress in AI, but it is still unclear how developments in machine learning and deep learning could be applied in AIED. Despite the fast progress in the fields of deep learning and NLP, there are still issues to address regarding these techniques. As highlighted by Mitchell [47], both are stuck in the 90-10 phenomenon that is common in AI development: the techniques handle roughly 90% of cases, but the remaining 10% can make AI systems fail with severe consequences. Moreover, the remaining 10% often takes longer to sort out than the first 90% [47]. It could be wise to hold off on investments until the involved AI techniques are fully developed without surprising side effects, especially in areas where AI systems interact with humans. The authors would like to emphasise the importance of traditional academic values such as scepticism, to maintain that the goal of education should be to foster responsible citizens and educated minds [59].

This study resulted in findings that answer the research question about the strengths, weaknesses, threats, and opportunities of the ongoing implementation of artificial intelligence in education. Strengths were found in the areas of STEM education and language learning. However, the fast progress in machine learning with deep neural networks in AI is not matched in AIED. Regarding weaknesses, many AIED systems are built with a low degree of intelligence, sometimes at the level of what can be called 'stupid tutoring systems' [4]. Moreover, AIED, like AI, still has the issues of biased data and biased algorithms to handle. Within the identified general AIED hype there are also several opportunities, such as the potential for supporting teachers and students in more individualised and adapted learning environments. One example is that intelligent solutions already exist that can provide support for students with impairments and special needs.

The identified threats point to important aspects of AIED to be mindful of in the ongoing implementation. What will the future role of the teacher, and of other school staff, be in education with AI systems? And how does this align with our pedagogical beliefs or theories? Do educational leaders and teachers have sufficient knowledge in the field of AI to distinguish a poorly developed system from a good one, and to apply these systems adequately in educational settings? And how do we protect students' and teachers' data when the skills and knowledge to develop AIED systems lie with for-profit organisations rather than within the educational sector? These are all questions that need to be considered carefully and addressed thoroughly in the years to come. The question of AI's alignment with pedagogical theory should especially be emphasised in future research, since any new technology integrated in education should be designed to fill a pedagogical need. The authors' recommendation is to develop new courses on AI that do not necessarily have to cover the more technical parts of neural networks and NLP, but rather are designed for a broader audience. Some open MOOC alternatives have been developed in recent years [23], but there is still a lack of practical training courses on AI and of courses that discuss the ethical aspects of AI [68].

The conclusion of the study is that there are threats, hype, and promises for artificial intelligence in education and the teacher practice. For a successful implementation of AIED systems that are in line with educational goals and beliefs, the development, use, and consequences of applying these systems must be part of ongoing discussions and future research. Finally, the authors' recommendation for the near future is not to attempt to build AIED ecosystems that match the idea of artificial general intelligence, but rather to invest in synchronising specialised narrow AI systems that can function as extra support tools for teachers and students. Still, the concept of 'intelligence' should be emphasised in these tools to avoid ambiguity in AIED. Intelligence is of course a complex concept to pin down, but a practical definition such as the "ability to accomplish complex goals" ([67], p. 39) could be sufficient.

6 Limitations and future research

This study was conducted as a scoping literature review and therefore has some limitations compared to a systematic literature review. For a more comprehensive examination of the AIED field as a whole, a systematic literature review is suggested. Although the scoping review succeeded in identifying interesting research topics and gaps, not all identified topics and issues were discussed in detail. To dig deeper into the field, the authors suggest a follow-up study where the identified issues are put directly to teachers. Data could be gathered both quantitatively, by a large-scale survey, and qualitatively, by interviews. This would further allow a comparison of the fears and threats highlighted in this study with teachers' perceptions of AI in education. Considering the current AIED hype, teachers' opinions and perceptions should be taken into account to facilitate AIED investments that are truly based in pedagogical needs and would add value to teachers' daily activities.

Data availability

Papers used for the study are presented in Table 1 .

Code availability

Not applicable.

References

Akgun S, Greenhow C. Artificial intelligence in education: addressing ethical challenges in K-12 settings. AI and Ethics. 2021;1:1–10.


Angelova, G., Nisheva-Pavlova, M., Eskenazi, A., & Ivanova, K. (2021). Role of education and research for artificial intelligence development in Bulgaria until 2030. In  Mathematics and Education in Mathematics. Proceedings of the Fiftieth Spring Conference of the Union of Bulgarian Mathematicians  (pp. 71–82).

Asakura K, Occhiuto K, Todd S, Leithead C, Clapperton R. A call to action on artificial intelligence and social work education: Lessons learned from a simulation project using natural language processing. J Teach Soc Work. 2020;40(5):501–18.


Baker RS. Stupid tutoring systems, intelligent humans. Int J Artif Intell Educ. 2016;26(2):600–14.

Balyan R, McCarthy KS, McNamara DS. Applying natural language processing and hierarchical machine learning approaches to text difficulty classification. Int J Artif Intell Educ. 2020;30(3):337–70.

Barnes T, Boyer K, Hsiao SIH, Le NT, Sosnovsky S. Preface for the special issue on AI-supported education in computer science. Int J Artif Intell Educ. 2017;27(1):1–4.

Barua PD, Vicnesh J, Gururajan R, Oh SL, Palmer E, Azizan MM, et al. Artificial intelligence enabled personalised assistive tools to enhance education of children with neurodevelopmental disorders—a review. Int J Environ Res Public Health. 2022;19(3):1192.

Braun V, Clarke V. Thematic analysis. American Psychological Association; 2012.


Bucea-Manea-Țoniş R, Kuleto V, Gudei SCD, Lianu C, Lianu C, Ilić MP, Păun D. Artificial intelligence potential in higher education institutions enhanced learning environment in Romania and Serbia. Sustainability. 2022;14(10):5842.

Bundy A. Preparing for the future of Artificial Intelligence. AI & Soc. 2017;32:285–7.

Castelfranchi C. Alan Turing’s “Computing Machinery and Intelligence.” Topoi. 2013;32:293–9. https://doi.org/10.1007/s11245-013-9182-y .

Chaudhry MA, Kazim E. Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021. AI Ethics. 2022;2(1):157–65.

Chen X, Xie H, Zou D, Hwang GJ. Application and theory gaps during the rise of artificial intelligence in education. Comput Educ. 2020;1:100002.

Crow T, Luxton-Reilly A, Wuensche B. Intelligent tutoring systems for programming education: a systematic review. In: Proceedings of the 20th Australasian Computing Education Conference, 2018 ; pp. 53–62.

Cumming G, McDougall A. Mainstreaming AIED into education? Int J Art Intell Educ. 2000;11:197–207.

Dermeval D, Albuquerque J, Bittencourt II, Isotani S, Silva AP, Vassileva J. GaTO: an ontological model to apply gamification in intelligent tutoring systems. Front Artif Intell. 2019;2:13.

Fenn, J. (2007). Understanding Gartner's hype cycles, 2007. Gartner ID G, 144727.

Gan W, Sun Y, Sun Y. Knowledge interaction enhanced sequential modeling for interpretable learner knowledge diagnosis in intelligent tutoring systems. Neurocomputing. 2022;488:36–53.

García Bermejo, C. Classification of medical images using Deep Learning techniques; 2020.

Goksel N, Bozkurt A. Artificial intelligence in education: Current insights and future perspectives. In: Handbook of Research on Learning in the Age of Transhumanism (pp. 224–236). IGI Global; 2019.

Guo L, Wang D, Gu F, Li Y, Wang Y, Zhou R. Evolution and trends in intelligent tutoring systems research: a multidisciplinary and scientometric view. Asia Pac Educ Rev. 2021;22(3):441–61.

Heffernan NT, Heffernan CL. The ASSISTments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. Int J Artif Intell Educ. 2014;24(4):470–97.


Heintz F, Roos T. Elements Of AI-Teaching the Basics of AI to Everyone in Sweden. In Proceedings of the 13th International Conference on Education and New Learning Technologies (EDULEARN21) . IATED, Online; 2021; (pp. 2568–2572).

Hirschberg J, Manning CD. Advances in natural language processing. Science. 2015;349(6245):261–6.


Hoffman P. 'Brains in Bahrain:' Man and Machine Call It Quits. TIME, 2002. Retrieved from http://content.time.com/time/arts/article/0,8599,366855,00.html . Accessed 5 Nov 2022.

Holford J, Milana M, Waller R, Webb S, Hodge S. Data, artificial intelligence and policy-making: hubris, hype and hope. Int J Lifelong Educ. 2019;38(6):iii–vii.

Hrastinski S, Olofsson AD, Arkenback C, Ekström S, Ericsson E, Fransson G, et al. Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigital Sci Educ. 2019;1(2):427–45.

Humble N, Mozelius P. Artificial intelligence in education—A promise, a threat or a hype. In  Proceedings of the european conference on the impact of artificial intelligence and robotics ; 2019, (pp. 149–156).

Hwang GJ, Xie H, Wah BW, Gašević D. Vision, challenges, roles and research issues of Artificial Intelligence in Education. Comput Educ. 2020;1:100001.

Ikedinachi AP, Misra S, Assibong PA, Olu-Owolabi EF, Maskeliūnas R, Damasevicius R. Artificial intelligence, smart classrooms and online education in the 21st century: implications for human development. J Cases Inform Technol. 2019;21(3):66–79.

Ilić MP, Păun D, PopovićŠević N, Hadžić A, Jianu A. Needs and performance analysis for changes in higher education and implementation of artificial intelligence, machine learning, and extended reality. Educ Sci. 2021;11(10):568.

Jensen T. Ramon Llull’s Ars Magna. In: Moreno-Diaz R, Quesada-Arencibia A, Pichler F, editors. EUROCAST: International conference on computer aided systems theory. Cham, Switzerland: Springer; 2017. p. 19–24.

Karsenti T. Artificial intelligence in education: The urgent need to prepare teachers for tomorrow’s schools. Formation et Profession. 2019;27(1):105–11. https://doi.org/10.18162/fp.2018.a166 .

Keene RD, Jacobs BA, Buzan T. Man v Machine: The ACM Chess Challenge: Garry Kasparov v IBM's Deep Blue. BB Enterprises. 1996.

Kersting K. Rethinking computer science through AI. KI-Künstliche Intelligenz. 2020;34(4):435–7.

Koedinger KR, Corbett AT. Cognitive tutors: Technology bringing learning science to the classroom. In: Sawyer K, editor. The Cambridge handbook of the learning sciences. New York: Cambridge University Press; 2006. p. 61–78.

Korteling JH, van de Boer-Visschedijk GC, Blankendaal RA, Boonekamp RC, Eikelboom AR. Human-versus artificial intelligence. Front Artif Intell. 2021;4:622364.

Latham A. Conversational Intelligent Tutoring Systems: The State of the Art.  Women in Computational Intelligence ; 2022; 77–101.

Leigh D. SWOT analysis. Handbook of Improving Performance in the Workplace: Volumes. 2009;1–3:115–40.

Lin PH, Wooders A, Wang JTY, Yuan WM. Artificial intelligence, the missing piece of online education? IEEE Eng Manage Rev. 2018;46(3):25–8.

Leoste J, Jõgi L, Õun T, Pastor L, et al. Perceptions about the future of integrating emerging technologies into higher education—the case of robotics with artificial intelligence. Computers. 2021;10(9):110.

Liu M. The application and development research of artificial intelligence education in wisdom education era. In: 2nd International Conference on Social Sciences, Arts and Humanities (SSAH 2018); 2018. p. 95–100.

Luckin R, Holmes W, Griffiths M, Forcier LB. Intelligence Unleashed: An argument for AI in Education. London: Pearson Education; 2016.

Luckin R, Cukurova M, Kent C, du Boulay B. Empowering educators to be AI-ready. Comput Educ. 2022;1:100076.

Lynch M. 7 roles for Artificial Intelligence in education. The Tech Advocate [blog post]. 2018. https://www.thetechedvocate.org/7-roles-for-artificial-intelligence-in-education/ . Retrieved July 1, 2022.

Maguire M, Delahunt B. Doing a thematic analysis: A practical, step-by-step guide for learning and teaching scholars. All Ireland J High Educ. 2017;9:3.

Mitchell M. Artificial intelligence: A guide for thinking humans. Penguin UK; 2019.

Mohammed PS, Nell’Watson E. Towards inclusive education in the age of artificial intelligence: Perspectives, challenges, and opportunities. Artificial intelligence and inclusive education, 2019; 17–37.

Munn Z, Peters MD, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):1–7.

Murphy RF. Artificial Intelligence Applications to Support K-12 Teachers and Teaching: A Review of Promising Applications, Opportunities, and Challenges. Perspective: RAND Corporation; 2019.

Namugenyi C, Nimmagadda SL, Reiners T. Design of a SWOT analysis model and its evaluation in diverse digital business ecosystem contexts. Procedia Comput Sci. 2019;159:1145–54.

Nazaretsky T, Ariely M, Cukurova M, Alexandron G. Teachers' trust in AI‐powered educational technology and a professional development program to improve it. Br J Educ Technol; 2022.

Nichols M, Holmes W. Don't do Evil: Implementing Artificial Intelligence in Universities. In EDEN Conference Proceedings, 2018; No. 2, pp. 110–118.

Niemi H. Artificial intelligence for the common good in educational ecosystems. In Humanistic futures of learning - Perspectives from UNESCO Chairs and UNITWIN Networks, 2020; 148--152.

Nye BD. ITS, The End of the world as we know it: transitioning AIED into a service-oriented ecosystem. Int J Artif Intell Educ. 2016;26:756–70.

Ogan A, Yarzebinski E, Roock RD, Dumdumaya C, Banawan M, Rodrigo M. Proficiency and preference using local language with a teachable agent. In: International Conference on Artificial Intelligence in Education; 2017. (pp. 548–552). Springer, Cham.

Okkonen J, Kotilainen S. Minors and Artificial Intelligence–implications to media literacy. In: International Conference on Information Technology & Systems; 2019. (pp. 881–890). Springer, Cham.

O’Leary DE. Gartner’s hype cycle and information system research issues. Int J Account Inf Syst. 2008;9(4):240–52.

Popenici SA, Kerr S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Res Pract Technol Enhanc Learn. 2017;12(1):1–13.

Robertson M. Artificial intelligence in education. Nature. 1976;262(5568):435–7.

Roll I, Wylie R. Evolution and revolution in artificial intelligence in education. Int J Artif Intell Educ. 2016;26(2):582–99.

Searle JR. Is the brain’s mind a computer program? Sci Am. 1990;262(1):25–31.

Serholt S, Ekström S, Küster D, Ljungblad S, Pareto L. Comparing a robot tutee to a human tutee in a learning-by-teaching scenario with children. Frontiers in Robotics and AI. 2022:43.

Shahriar T, Matsuda N. “Can you clarify what you said?”: Studying the impact of tutee agents’ follow-up questions on tutors’ learning. In: International Conference on Artificial Intelligence in Education; 2021. (pp. 395–407). Springer, Cham.

Solomon CJ, Papert S. A case study of a young child doing Turtle Graphics in LOGO. In Proceedings of the June 7–10, 1976, national computer conference and exposition ; 1976 . (pp. 1049–1056).

Tanksley D, Wunsch II DC. Reproducibility via crowdsourced reverse engineering: A neural network case study with deepmind's alpha zero. arXiv preprint arXiv:1909.03032 ; 2019.

Tegmark M. Life 3.0: Being human in the age of artificial intelligence. New York: Knopf; 2017.

Tidjon LN, Khomh F. The different faces of ai ethics across the world: a principle-implementation gap analysis. arXiv preprint arXiv:2206.03225 ; 2022.

Timms MJ. Letting artificial intelligence in education out of the box: educational cobots and smart classrooms. Int J Artif Intell Educ. 2016;26(2):701–12.

Turing AM. On computable numbers, with an application to the Entscheidungsproblem. Proc Lond Math Soc. 1937;2(1):230–65.

Turing AM. Computing machinery and intelligence. Mind. 1950;54(236):433–60.

VanLehn K. The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educ Psychol. 2011;46(4):197–221.

Walkington C, Bernacki ML. Personalizing algebra to students’ individual interests in an intelligent tutoring system: moderators of impact. Int J Artif Intell Educ. 2019;29(1):58–88.

Wells A, Patel S, Lee JB, Motaparthi K. Artificial intelligence in dermatopathology: Diagnosis, education, and research. J Cutan Pathol. 2021;48(8):1061–8.

Wogu IAP, Misra S, Olu-Owolabi EF, Assibong PA, Udoh OD, Ogiri SO, Damasevicius R. Artificial intelligence, artificial teachers and the fate of learners in the 21st century education sector: implications for theory and practice. Int J Pure Appl Math. 2018;119(16):2245–59.

Zawacki-Richter O, Marín VI, Bond M, Gouverneur F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int J Educ Technol High Educ. 2019;16(1):1–27.

Zhang Z. When doctors meet with AlphaGo: potential application of machine learning to clinical medicine. Ann Transl Med. 2016;4:6.


Funding: Open access funding provided by Mid Sweden University.

Author information

Authors and affiliations.

Department of Computer and System Science, Mid Sweden University, Östersund, Sweden

Niklas Humble & Peter Mozelius


Contributions

The main work in the study was conducted by the lead author (N. Humble), with important contributions by the second author (P. Mozelius). Both authors have revised the work critically, approved it for publication, and agreed to be accountable for it.

Corresponding author

Correspondence to Niklas Humble .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Humble, N., Mozelius, P. The threat, hype, and promise of artificial intelligence in education. Discov Artif Intell 2 , 22 (2022). https://doi.org/10.1007/s44163-022-00039-z

Download citation

Received : 20 July 2022

Accepted : 01 November 2022

Published : 10 November 2022

DOI : https://doi.org/10.1007/s44163-022-00039-z


Keywords: Artificial intelligence in education; Teacher perspective; SWOT analysis

The Future of AI: How Artificial Intelligence Will Change the World


Innovations in the field of  artificial intelligence continue to shape the future of humanity across nearly every industry. AI is already the main driver of emerging technologies like big data, robotics and IoT, and  generative AI has further expanded the possibilities and popularity of AI. 

According to a 2023 IBM survey , 42 percent of enterprise-scale businesses integrated AI into their operations, and 40 percent are considering AI for their organizations. In addition, 38 percent of organizations have implemented generative AI into their workflows while 42 percent are considering doing so.

With so many changes coming at such a rapid pace, here’s what shifts in AI could mean for various industries and society at large.


The Evolution of AI

AI has come a long way since 1951, when the  first documented success of an AI computer program was written by Christopher Strachey, whose checkers program completed a whole game on the Ferranti Mark I computer at the University of Manchester. Thanks to developments in machine learning and deep learning , IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997, and the company’s IBM Watson won Jeopardy! in 2011.  

Since then, generative AI has spearheaded the latest chapter in AI’s evolution, with OpenAI releasing its first GPT models in 2018. This has culminated in OpenAI developing its GPT-4 model and ChatGPT , leading to a proliferation of AI generators that can process queries to produce relevant text, audio, images and other types of content.   

AI has also been used to help  sequence RNA for vaccines and  model human speech , technologies that rely on model- and algorithm-based  machine learning and increasingly focus on perception, reasoning and generalization. 

How AI Will Impact the Future

Improved business automation .

About 55 percent of organizations have adopted AI to varying degrees, suggesting increased automation for many businesses in the near future. With the rise of chatbots and digital assistants, companies can rely on AI to handle simple conversations with customers and answer basic queries from employees.

AI’s ability to analyze massive amounts of data and convert its findings into convenient visual formats can also accelerate the decision-making process . Company leaders don’t have to spend time parsing through the data themselves, instead using instant insights to make informed decisions .
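
As a small illustration of that pipeline, the sketch below aggregates raw records and renders a chart a decision-maker could read at a glance; the data, column names, and output file are invented for illustration.

```python
# Hypothetical sketch: turn raw records into a decision-ready visual by
# aggregating revenue per region and plotting the result. All data invented.
import pandas as pd
import matplotlib.pyplot as plt

records = pd.DataFrame({
    "region": ["north", "north", "south", "south", "west"],
    "revenue": [120, 95, 210, 180, 60],
})

summary = records.groupby("region")["revenue"].sum().sort_values()
summary.plot(kind="barh", title="Revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")  # an "instant insight" a leader can scan
```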

“If [developers] understand what the technology is capable of and they understand the domain very well, they start to make connections and say, ‘Maybe this is an AI problem, maybe that’s an AI problem,’” said Mike Mendelson, a learner experience designer for NVIDIA . “That’s more often the case than, ‘I have a specific problem I want to solve.’”


Job Disruption

Business automation has naturally led to fears over job losses . In fact, employees believe almost one-third of their tasks could be performed by AI. Although AI has made gains in the workplace, it’s had an unequal impact on different industries and professions. For example, routine administrative jobs like secretarial work are at risk of being automated, while demand for other roles, such as machine learning specialists and information security analysts, has risen.

Workers in more skilled or creative positions are more likely to have their jobs augmented by AI , rather than be replaced. Whether forcing employees to learn new tools or taking over their roles, AI is set to spur upskilling efforts at both the individual and company level .     

“One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs,” said Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana–Champaign and director of the school’s Coordinated Science Laboratory.

Data Privacy Issues

Companies require large volumes of data to train the models that power generative AI tools, and this process has come under intense scrutiny. Concerns over companies collecting consumers’ personal data have led the FTC to open an investigation into whether OpenAI has negatively impacted consumers through its data collection methods after the company potentially violated European data protection laws . 

In response, the Biden-Harris administration developed an AI Bill of Rights that lists data privacy as one of its core principles. Although this framework doesn’t carry much legal weight, it reflects the growing push to prioritize data privacy and compel AI companies to be more transparent and cautious about how they compile training data.

Increased Regulation

AI could shift the perspective on certain legal questions, depending on how generative AI lawsuits unfold in 2024. For example, the issue of intellectual property has come to the forefront in light of copyright lawsuits filed against OpenAI by writers, musicians and companies like The New York Times . These lawsuits affect how the U.S. legal system interprets what is private and public property, and a loss could spell major setbacks for OpenAI and its competitors. 

Ethical issues that have surfaced in connection to generative AI have placed more pressure on the U.S. government to take a stronger stance. The Biden-Harris administration has maintained its moderate position with its latest executive order , creating rough guidelines around data privacy, civil liberties, responsible AI and other aspects of AI. However, the government could lean toward stricter regulations, depending on  changes in the political climate .  

Climate Change Concerns

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Optimists can view AI as a way to make supply chains more efficient, carrying out predictive maintenance and other procedures to reduce carbon emissions . 

At the same time, AI could be seen as a key culprit in climate change . The energy and resources required to create and maintain AI models could raise carbon emissions by as much as 80 percent, dealing a devastating blow to any sustainability efforts within tech. Even if AI is applied to climate-conscious technology , the costs of building and training models could leave society in a worse environmental situation than before.   

What Industries Will AI Impact the Most?  

There’s virtually no major industry that modern AI hasn’t already affected. Here are a few of the industries undergoing the greatest changes as a result of AI.  

AI in Manufacturing

Manufacturing has been benefiting from AI for years. With AI-enabled robotic arms and other manufacturing bots dating back to the 1960s and 1970s, the industry has adapted well to the powers of AI. These  industrial robots typically work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive analysis sensors keep equipment running smoothly. 

AI in Healthcare

It may seem unlikely, but  AI healthcare is already changing the way humans interact with medical providers. Thanks to its  big data analysis capabilities, AI helps identify diseases more quickly and accurately, speed up and streamline drug discovery and even monitor patients through virtual nursing assistants. 

AI in Finance

Banks, insurers and financial institutions leverage AI for a range of applications like detecting fraud, conducting audits and evaluating customers for loans. Traders have also used machine learning’s ability to assess millions of data points at once, so they can quickly gauge risk and make smart investing decisions . 
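
A toy version of the fraud-detection idea is sketched below using anomaly detection; the transaction values, features, and model parameters are invented, and production systems rely on far richer features and labeled data.

```python
# Hypothetical sketch of fraud screening: flag transactions that look
# anomalous relative to the bulk of activity. Data and parameters invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [transaction amount in dollars, hour of day]
transactions = np.array([
    [25, 10], [40, 12], [30, 9], [55, 14], [35, 11],
    [20, 13], [45, 15], [5000, 3],   # last row: large purchase at 3 a.m.
])

model = IsolationForest(contamination=0.1, random_state=0).fit(transactions)
flags = model.predict(transactions)   # -1 = anomaly, 1 = looks normal
print(transactions[flags == -1])      # the outlier is likely to be flagged
```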

AI in Education

AI in education will change the way humans of all ages learn. AI’s use of machine learning, natural language processing and facial recognition helps digitize textbooks, detect plagiarism and gauge the emotions of students to help determine who’s struggling or bored. Both now and in the future, AI tailors the learning experience to students’ individual needs.
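
As a rough illustration of the plagiarism-detection use case, the sketch below compares a submission against reference texts with TF-IDF vectors and cosine similarity; the texts and the similarity threshold are invented, and real systems use much larger corpora and more sophisticated matching.

```python
# Hypothetical sketch of similarity-based plagiarism screening. The texts
# and the 0.5 threshold are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

references = [
    "Photosynthesis is the process by which plants convert light into energy.",
    "The French Revolution began in 1789 and reshaped European politics.",
]
submission = "Photosynthesis is the process through which plants convert light into energy."

vectors = TfidfVectorizer().fit_transform(references + [submission])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

for reference, score in zip(references, scores):
    if score > 0.5:
        print(f"Possible overlap (similarity {score:.2f}): {reference[:45]}...")
```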

AI in Media

Journalism is harnessing AI too, and will continue to benefit from it. One example can be seen in The Associated Press’ use of Automated Insights , which produces thousands of earnings report stories per year. But as generative AI writing tools , such as ChatGPT, enter the market, questions about their use in journalism abound.

AI in Customer Service

Most people dread getting a  robocall , but  AI in customer service can provide the industry with data-driven tools that bring meaningful insights to both the customer and the provider. AI tools powering the customer service industry come in the form of  chatbots and  virtual assistants .

AI in Transportation

Transportation is one industry that is certainly teed up to be drastically changed by AI.  Self-driving cars and  AI travel planners are just a couple of facets of how we get from point A to point B that will be influenced by AI. Even though autonomous vehicles are far from perfect, they will one day ferry us from place to place.

Risks and Dangers of AI

Despite reshaping numerous industries in positive ways, AI still has flaws that leave room for concern. Here are a few potential risks of artificial intelligence.  

Job Losses 

Between 2023 and 2028, 44 percent of workers’ skills will be disrupted . Not all workers will be affected equally — women are more likely than men to be exposed to AI in their jobs. Combine this with the fact that there is a gaping AI skills gap between men and women, and women seem much more susceptible to losing their jobs. If companies don’t have steps in place to upskill their workforces, the proliferation of AI could result in higher unemployment and decreased opportunities for those of marginalized backgrounds to break into tech.

Human Biases 

The reputation of AI has been tainted with a habit of reflecting the biases of the people who train the algorithmic models. For example, facial recognition technology has been known to favor lighter-skinned individuals , discriminating against people of color with darker complexions. If researchers aren’t careful in  rooting out these biases early on, AI tools could reinforce these biases in the minds of users and perpetuate social inequalities.
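
One simple way to make such bias visible is to compare a model's accuracy across demographic groups; the sketch below does this on an invented audit sample, and real audits use much larger samples and several complementary fairness metrics.

```python
# Hypothetical sketch of a bias check: compare accuracy across groups.
# The groups, labels, and predictions are invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / totals[group] for group in totals}

audit_sample = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

rates = accuracy_by_group(audit_sample)
print(rates)                                        # {'group_a': 0.75, 'group_b': 0.25}
print("accuracy gap:", max(rates.values()) - min(rates.values()))
```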

Deepfakes and Misinformation

The spread of deepfakes threatens to blur the lines between fiction and reality, leading the general public to  question what’s real and what isn’t. And if people are unable to identify deepfakes, the impact of  misinformation could be dangerous to individuals and entire countries alike. Deepfakes have been used to promote political propaganda, commit financial fraud and place students in compromising positions, among other use cases. 

Data Privacy

Training AI models on public data increases the chances of data security breaches that could expose consumers’ personal information. Companies contribute to these risks by adding their own data as well. A  2024 Cisco survey found that 48 percent of businesses have entered non-public company information into  generative AI tools and 69 percent are worried these tools could damage their intellectual property and legal rights. A single breach could expose the information of millions of consumers and leave organizations vulnerable as a result.  

Automated Weapons

The use of AI in automated weapons poses a major threat to countries and their general populations. While automated weapons systems are already deadly, they also fail to discriminate between soldiers and civilians . Letting artificial intelligence fall into the wrong hands could lead to irresponsible use and the deployment of weapons that put larger groups of people at risk.  

Superior Intelligence

Nightmare scenarios depict what’s known as the technological singularity , where superintelligent machines take over and permanently alter human existence through enslavement or eradication. Even if AI systems never reach this level, they can become more complex to the point where it’s difficult to determine how AI makes decisions at times. This can lead to a lack of transparency around how to fix algorithms when mistakes or unintended behaviors occur. 

“I don’t think the methods we use currently in these areas will lead to machines that decide to kill us,” said Marc Gyongyosi, founder of  Onetrack.AI . “I think that maybe five or 10 years from now, I’ll have to reevaluate that statement because we’ll have different methods available and different ways to go about these things.”

Frequently Asked Questions

What does the future of AI look like?

AI is expected to improve industries like healthcare, manufacturing and customer service, leading to higher-quality experiences for both workers and customers. However, it does face challenges like increased regulation, data privacy concerns and worries over job losses.

What will AI look like in 10 years?

AI is on pace to become a more integral part of people’s everyday lives. The technology could be used to provide elderly care and help out in the home. In addition, workers could collaborate with AI in different settings to enhance the efficiency and safety of workplaces.

Is AI a threat to humanity?

It depends on how people in control of AI decide to use the technology. If it falls into the wrong hands, AI could be used to expose people’s personal information, spread misinformation and perpetuate social inequalities, among other malicious use cases.


Essay on Artificial Intelligence as a Threat in the Society

Introduction

Artificial Intelligence is defined as “the scientific knowledge of the mechanisms that underlie cognition and intelligent behavior and its integration in machines,” according to the Association for the Advancement of Artificial Intelligence. Over the past few decades, numerous predictions have been made about the coming rise of artificial intelligence (AI) and about how this transition will affect most aspects of society, business, and everyday life. It is also essential to note that adequately anticipating the impact of the AI revolution has its own implications, since AI-automated machines might be our “final invention,” putting an end to human supremacy (Makridakis, 2017, p. 55). Without a doubt, artificial intelligence has high potential, as its technology and automation will most likely support highly productive and sustainable economic growth. Yet within the next two decades, its advanced intellectual ability poses a serious threat to labour markets currently served by human workers, and for the first time it raises concerns about the end of human superiority. While AI can boost the rate of economic growth, it also carries significant risks such as employment market fragmentation, increasing inequality, underemployment, and new undesirable industrial structures. EU policy must establish the circumstances for AI’s potential to thrive while also carefully examining how to manage the threats it entails.

Challenges of Artificial Intelligence on the Society

This essay re-examines assumptions about AI’s effects on jobs, inequality, and production, as well as on general economic growth, for two reasons. First, few theoretical economic frameworks include AI, and almost none take demand-side restrictions into account. Second, expectations that AI will produce enormous job losses and faster economic and GDP growth conflict with reality: in developed nations, unemployment is at crisis levels, while income and output growth are stagnant and disparities are rising. In light of accelerating AI progress, such a framework can help provide a theoretical explanation. Jobs are not the only thing that might be affected; economic growth and income stability are certain to be impacted as well. According to Frey et al. (2017, p. 268), the influence of emerging technologies such as AI is subject to an ‘execution lag.’ As AI adoption proceeds, ‘high productivity rate will also be increasing dramatically as an ever-increasing rate of unemployment cascades through the economy’ (Nordhaus, 2015, p. 2).

Impact on Jobs

Several widely cited reports have suggested that automation of occupations and job functions will eventually displace a significant portion of the human labor force. One widely regarded paper anticipated that up to 47% of US occupations might be automated within ten to twenty years (Syverson, 2017, p. 171). A similar study of the EU, using a comparable methodology, put the figure even higher, at up to 54% of occupations within 10 to 20 years. Routine tasks can readily be automated, making certain roles obsolete over time. Customer service and call center operations, document categorization and retrieval, and content moderation, for example, increasingly rely on automation rather than human labor. People are being replaced by automated robotic systems that can move around a space, locate and move objects, and carry out complex assembly operations. As frightening as these projections may be, recent theoretical and empirical research suggests that the effect of AI-automated job losses may be significantly exaggerated. New theoretical revisions, such as those by Bessen (2018), show that, depending on the elasticity of demand for the product in question, there is a reasonable probability that jobs might increase as a result of AI automation.

Impact on Inequality

Since artificial intelligence has a diverse influence on different jobs and workers, it may negatively affect earnings. Research in six European nations has identified two significant channels through which AI automation could deepen wealth inequality: first, the benefits may flow to only a small number of firms because of the increased ‘invention costs’ of AI; second, artificial intelligence shifts the relative supply of labor, which in turn affects relative wages (Nordhaus, 2015, p. 18). As more manual work is substituted by AI technologies, productivity and overall earnings growth rise, and the gap between rich and poor is likely to widen.

There are legitimate concerns that AI will worsen present trends of shifting the national income distribution away from labor, resulting in greater disparity and wealth concentration in “superstar” firms and industries. Another source of rising income inequality may be our inability to capture revenue at the conventional points of income or exchange, as a smaller percentage of the market is registered, taxed, and distributed. Instead of taxing income, one apparent alternative would be to tax wealth directly, such as a company’s market value (Makridakis, 2017, p. 55). The information era may make wealth simpler to track than it was previously, making this technique more feasible than in the past, especially given the difficulties of tracking income.

Impact on Privacy and Autonomy

When evaluating the impact of artificial intelligence on behavioral patterns, we have arrived at a point where ICT has a distinct effect. Domestic surveillance has a long history and has been linked to everything from skewed employment prospects to pogroms. However, information and communication technology (ICT) now allows permanent records to be kept on everyone who generates stored data, such as invoices, bank statements, digital devices, or credit histories, not to mention any open publishing or social network use. Our civilization is being transformed by the storage and retrieval of digital information, and by the fact that all of this data can be searched with pattern-detection algorithms. In the process, we have lost the basic presumption of anonymity. We are all famous to some extent now: strangers can identify any of us, whether through facial recognition or through the extraction of information from shopping or social network activities (Reed et al., 2016, p. 1065). Artificial intelligence has enabled machine capabilities in speech transcription, emotion recognition from audiovisual recordings, and written or video forgery. Such forgery works by combining a model of a person’s handwriting or utterances with a text stream to produce a “prediction,” or rendering, of how that person would probably write or pronounce the text.

Artificial intelligence has been transforming societies faster than we understand, yet it is not as original or distinctive in human experience as we are frequently led to believe: corporations and governments, telecommunications and natural gas networks, among other artifactual entities, have previously expanded our powers, changed our economies, and upset our social co-existence, though not universally for the better. We must keep in mind, however, that above and beyond the economic and governance issues, AI enhances and improves what makes us unique in the first place, especially our problem-solving ability. Considering ongoing worldwide challenges such as security, privacy, and development, such improvements are anticipated to remain beneficial. Because AI lacks a soul, its philosophy should be transcendental to compensate for its incapacity to sympathize. Artificial intelligence is a fact of life. We must remember what AI pioneer Joseph Weizenbaum said: “We cannot let machines make critical decisions for humanity since AI will never have human attributes such as empathy and intelligence to perceive and judge morally.”

Frey, C. and Osborne, M. (2013). The Future of Employment: How Susceptible are Jobs to Computerization? Oxford Martin Programme on the Impacts of Future Technology, University of Oxford: p. 50-67.

Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60. doi.org/10.1016/j.futures.2017.03.006

Nordhaus, W. (2015). Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth. Cowles Foundation Discussion Paper no. 2021. Yale University: 1-30.

Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., and Lee, H. (2016). Generative adversarial text to image synthesis. In Proceedings of the 33rd International Conference on Machine Learning, 48: 1060–1069.

Syverson, C. (2017). Challenges to Mismeasurement Explanations for the US Productivity Slowdown. Journal of Economic Perspectives, 31(2):165–186.



Global challenges - Global solutions podcast

20 November 2023

Are we smart enough to manage the impact of Artificial Intelligence on the world of work? Artificial Intelligence (AI) is raising concerns in some quarters these days, especially in the world of work. In fact, these concerns about whether the growth of AI is “a threat or a promise” have been building for years. Is the concern warranted? In this episode, Uma Rani, a Senior Economist at the ILO’s Research Department, and Enrique Fernandez Macias, a researcher at the European Commission’s Joint Research Centre, explore what kind of research and policies should be considered to better assess the impact of AI on issues such as gender balance, social justice, and the other ethical and moral questions arising from its use in the workplace in both developed and developing economies. This podcast is also available on Spotify and Apple Podcasts.


Future Healthc J, 2019 Oct; 6(3)

Does artificial intelligence (AI) constitute an opportunity or a threat to the future of medicine as we know it?

Misha Kabir

St Mark’s Hospital and Academic Institute, Harrow, UK

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” Eliezer Yudkowsky, Machine Intelligence Research Institute 1

Artificial intelligence (AI) is the future and is already part of our everyday life. Be it voice recognition assistance with Apple’s Siri or Amazon’s Alexa, or algorithms that filter your spam emails, recommend a film to you on Netflix, screen your bank account transactions for fraud, or get your auto-pilot flight smoothly to your next holiday destination, chances are you have already experienced AI.

Machine ‘deep learning’ takes place in a multi-layered ‘black box’ of deep neural networks, where algorithms are not defined by task-specific rules, but are able to evolve and self-learn using pattern recognition and trial and error. Thus, AI can approach problems as a doctor progressing through their training does: by learning rules from data. However, by having the capacity to analyse massive amounts of data, algorithms are able to find correlations that the human mind cannot.
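
As a concrete illustration of “learning rules from data”, the toy sketch below (not taken from the article; the dataset, layer sizes and solver are arbitrary choices) trains a small multi-layer network on the XOR rule, which no single linear threshold can express. The rule is never written into the code; the network infers it from examples.

```python
# Toy sketch: a tiny neural network learns a rule (XOR) from data
# rather than being given task-specific instructions.
# Dataset, layer size and solver are illustrative choices only.

from sklearn.neural_network import MLPClassifier

# Inputs and labels follow the XOR rule: label is 1 when exactly one input is 1.
X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 50   # repeated so there is plenty of data
y = [0, 1, 1, 0] * 50

model = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                      solver="lbfgs", max_iter=3000, random_state=0)
model.fit(X, y)                              # the "rule" is learned, not coded

print(model.predict([[0, 1], [1, 1]]))       # expected: [1 0]
```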

For some, however, the use of AI in medicine remains a troubling concept. Science fiction is littered with examples of AI running amok at the expense of humanity. While the use of AI in medicine does not evoke exactly the same kind of Orwellian concerns as it might in, for example, national defence, there are still important issues to consider, such as privacy, data protection, and even the straightforward need for simple human contact. Doctors are expected to temper knowledge with compassion and understanding. These are characteristics which some fear may be lost in any AI-driven system, where patients may find their rights are, at best, an afterthought in the relentless pursuit of efficiency.

However, the argument made by AI advocates such as Dr Eric Topol, as detailed in his review for the UK’s NHS workforce, 2 is that by allowing for efficiency in the workflow and precision in the practice of medicine, AI will in fact increase the time that patients can spend face-to-face with their healthcare professionals, engaging in shared decision making. AI is unlikely to replace doctors, and the opportunity exists for AI to augment their practice in a number of ways.

  • Reducing administrative burden and increasing patient–doctor contact: A 2016 time and motion observational study of 57 US doctors found that they spent about 2 hours doing computer work for every hour spent face-to-face with their patient. 3 Thirty-seven per cent of the time spent with their patients involved interacting with a computer screen. Voice recognition, natural language processing AI and digital scribing could cut the time spent on clerical tasks, such as writing up clinical notes, ordering tests and prescribing medications, from minutes to seconds. AI assistance could markedly reduce the burden of administrative jobs such as coding, billing, scheduling of operating rooms and clinic appointments, and staffing, increasing productivity, saving costs and improving the workflow within healthcare systems.
  • Enhancing clinician diagnosis: Deep learning AI can process thousands of radiology or pathology images and conduct automated image interpretation in a fraction of the time that radiologists and pathologists can, allowing for quicker diagnoses. An AI-assisted diagnosis could even allow non-specialists to confidently make decisions in an emergency setting that would normally require specialist input, for example in fracture X-ray interpretation or cerebral aneurysm detection on computed tomography. Machine learning using tumour genomic sequencing has identified biomarkers that have markedly improved lung cancer classification compared with pathologists using traditional histological data. 4 However, so far the only AI technologies that have undergone rigorous prospective validation in peer-reviewed, real-world studies comparing AI performance with that of healthcare professionals are for the diagnosis of diabetic retinopathy, detection of wrist fractures from X-rays in the emergency department, detection of histologic breast cancer metastases on digitised pathological slides, detection of very small colonic polyps, and diagnosis of paediatric cataracts. 5 These have all shown promise, demonstrating that the diagnostic algorithms were either as accurate as or more accurate than clinicians, or at least enhanced clinician performance by markedly speeding up time to diagnosis.
  • Enhancing monitoring of chronic conditions and patient self-management: Common chronic conditions, such as hypertension, depression and asthma, could theoretically be managed in the community with virtual coaching. Smartwatches can now detect arrhythmias such as atrial fibrillation, and changes in electrocardiogram morphology can accurately detect hyperkalaemia in patients with chronic kidney disease. AI may also allow monitoring for mental health conditions within the community. Facebook posts have been shown to predict subsequent diagnoses of depression, and various tools to detect mood changes such as depression are in development using keyboard interaction, voice and facial recognition, and interactive chatbots. 6
  • Predicting future events: Algorithms derived from the interpretation of big data from electronic health records have allowed the creation of a number of risk prediction tools that can be used to alter treatment decisions and prevent poorer outcomes. An AI model was able to self-learn the most optimal treatment for patients with sepsis, using a database of treatment decisions on intravenous fluid resuscitation and vasopressor doses and their impact on survival. 6 The model drew on a volume of patient data (n=13,666) that far exceeds the lifetime experience any individual clinician could accrue. In a large external validation cohort independent of the training data (n=79,073), mortality was lower when clinicians’ treatment decisions matched those recommended by the predictive model. (A toy sketch of how such a risk-prediction model can be fitted appears after this list.)

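As a hedged illustration of the “predicting future events” item above, the sketch below fits a simple risk model to invented electronic-health-record-style features. It is not the sepsis model cited in the article; the features, data and coefficients are fabricated purely to show the mechanics of deriving a risk score from routinely recorded variables.

```python
# Toy sketch of a risk-prediction model on EHR-style features.
# Features, coefficients and data are entirely invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Invented features: age (years), lactate (mmol/L), mean arterial pressure (mmHg).
age = rng.normal(65, 12, n)
lactate = rng.gamma(2.0, 1.2, n)
map_mmhg = rng.normal(75, 10, n)
X = np.column_stack([age, lactate, map_mmhg])

# Simulated outcome: risk rises with age and lactate, falls with blood pressure.
logit = -5 + 0.05 * age + 0.6 * lactate - 0.03 * map_mmhg
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted risk for one new (invented) patient: 80 years old, lactate 4.5, MAP 60.
print(model.predict_proba([[80, 4.5, 60]])[0, 1])
```
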
Although these examples of how AI could augment medicine are impressive, one of the main obstacles to the widespread acceptance of machine learning in clinical practice is a lack of understanding among patients and their doctors about how the machine’s predictions are made. Yet we are willing to accept judgements based on the experience, intuition and cognitive biases of human doctors that cannot always be explicitly explained. We are able to forgive human inaccuracies but have far greater expectations of our machines. This discrepancy in performance expectations may be legitimate given that an inaccurate machine algorithm could have catastrophic consequences on a much larger number of patients than one erroneous physician.

Transparent communication from data scientists regarding the dataset and methodology used to develop an algorithm is essential, and proper regulation must also be mandated by government legislation. The General Data Protection Regulation emphasises the need for ‘explainability’ as a key priority in machine learning research. A reorientation of priorities is required so that data scientists optimise the intelligibility of the model to the lay patient in addition to its accuracy. Healthcare professionals should also be encouraged to become involved in AI development and validation, through fellowship programmes, and training curricula should be adapted to reflect the need for AI technology intelligibility. Encouraging collaboration between data scientists and clinicians will also improve the clinical utility of the AI that is developed. At present, data scientists tend to build and evaluate their algorithms against metrics such as the best area under the receiver operating characteristic curve, rather than more clinically useful metrics such as high sensitivity or positive predictive value.
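
A minimal sketch of that last point, using invented numbers rather than data from any study: a model can post an impressive area under the ROC curve while, at a given decision threshold and a realistic disease prevalence, its positive predictive value remains modest.

```python
# Toy sketch: a good AUROC does not by itself guarantee clinically useful
# sensitivity or positive predictive value; both depend on the decision
# threshold and on how rare the condition is. Scores and labels are invented.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
prevalence = 0.05                      # rare condition: 5% of patients affected
n = 10_000
y = rng.random(n) < prevalence
# Invented model scores: cases tend to score higher, but distributions overlap.
scores = np.where(y, rng.normal(0.7, 0.15, n), rng.normal(0.4, 0.15, n))

print("AUROC:", round(roc_auc_score(y, scores), 3))   # roughly 0.92 here

threshold = 0.6                        # an arbitrary operating point
pred = scores >= threshold
tp = np.sum(pred & y)
fp = np.sum(pred & ~y)
fn = np.sum(~pred & y)

sensitivity = tp / (tp + fn)           # how many true cases are caught
ppv = tp / (tp + fp)                   # how many flagged patients truly have the condition
print("Sensitivity:", round(sensitivity, 3), "PPV:", round(ppv, 3))
# At 5% prevalence the PPV comes out around 0.3 despite the strong AUROC.
```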

Deep learning models rely on large amounts of accurate data, but we know that many electronic health records are incomplete, which may introduce bias. Algorithms derived from one cohort may become inaccurate when applied to the wider population because of other geographic, societal and care-setting differences. For example, an algorithm designed to predict the probability of death among hospital patients with pneumonia spuriously classified asthmatics as low risk. 7 This was because, in the derivation cohort, all asthmatic pneumonia patients were managed in an intensive care setting, making asthmatics appear to have above-average survival. A machine algorithm’s validity and causal inferences must therefore be proved through proper prospective trials and randomisation before acceptance into widespread clinical practice.
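
The confounding described above can be reproduced in miniature. The toy simulation below is not the cited pneumonia study; every number is invented. It shows how a model trained on a cohort in which all asthmatics received intensive care learns the misleading association “asthma implies low risk”.

```python
# Toy simulation of a confounded training cohort: asthmatic pneumonia patients
# always receive ICU care in the training data, which lowers their observed
# mortality, so the model learns "asthma -> low risk". All numbers are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
asthma = rng.random(n) < 0.15

# True underlying risk is HIGHER for asthmatics...
base_risk = np.where(asthma, 0.30, 0.12)
# ...but in this cohort asthmatics always receive ICU care, which cuts their risk.
observed_risk = np.where(asthma, base_risk * 0.25, base_risk)
died = rng.random(n) < observed_risk

model = LogisticRegression().fit(asthma.reshape(-1, 1).astype(float), died)

p_asthma = model.predict_proba([[1.0]])[0, 1]
p_no_asthma = model.predict_proba([[0.0]])[0, 1]
print(f"Predicted mortality with asthma: {p_asthma:.2f}, without: {p_no_asthma:.2f}")
# The model ranks asthmatics as LOWER risk (about 0.08 vs 0.12), the opposite
# of their true underlying risk, because treatment policy confounds the data.
```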

AI conceivably provides more opportunity than threat to the future of medicine, but only if it is regulated and understood appropriately. Three main potential sources of big data in medicine are electronic health records, genome sequencing and patient-originating data, eg biosensor technology within Fitbits. If these data could be pooled and shared with the scientific community, the capacity to better understand disease aetiology and predict risk outcomes could be astounding. However, legitimate concerns regarding data ownership and security must be allayed by adopting new models of health data ownership with rights vested in the individual patient, use of highly secure data platforms, and governmental legislation.


The Crackdown on Student Protesters

Columbia University is at the center of a growing showdown over the war in Gaza and the limits of free speech.

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.

[TRAIN SCREECHING]

Well, you can hear the helicopter circling. This is Asthaa Chaturvedi. I’m a producer with “The Daily.” Just walked out of the 116 Street Station. It’s the main station for Columbia’s Morningside Heights campus. And it’s day seven of the Gaza solidarity encampment, where a hundred students were arrested last Thursday.

So on one side of Broadway, you see camera crews. You see NYPD officers all lined up. There’s barricades, steel barricades, caution tape. This is normally a completely open campus. And I’m able to — all members of the public, you’re able to walk through.

[NON-ENGLISH SPEECH]

Looks like international media is here.

Have your IDs out. Have your IDs out.

Students lining up to swipe in to get access to the University. ID required for entry.

Swipe your ID, please.

Hi, how are you, officer? We’re journalists with “The New York Times.”

You’re not going to get in, all right? I’m sorry.

Hi. Can I help please?

Yeah, it’s total lockdown here at Columbia.

Please have your IDs out ready to swipe.

From “The New York Times,” I’m Michael Barbaro. This is “The Daily.” Today, the story of how Columbia University has become the epicenter of a growing showdown between student protesters, college administrators, and Congress over the war in Gaza and the limits of free speech. I spoke with my colleague, Nick Fandos.

[UPBEAT MUSIC]

It’s Thursday, April 25.

Nick, if we rewind the clock a few months, we end up at a moment where students at several of the country’s best known universities are protesting Israel’s response to the October 7 attacks, its approach to a war in Gaza. At times, those protests are happening peacefully, at times with rhetoric that is inflammatory. And the result is that the leaders of those universities land before Congress. But the president of Columbia University, which is the subject we’re going to be talking about today, is not one of the leaders who shows up for that testimony.

That’s right. So the House Education Committee has been watching all these protests on campus. And the Republican Chairwoman decides, I’m going to open an investigation, look at how these administrations are handling it, because it doesn’t look good from where I sit. And the House last winter invites the leaders of several of these elite schools, Harvard, Penn, MIT, and Columbia, to come and testify in Washington on Capitol Hill before Congress.

Now, the President of Columbia has what turns out to be a very well-timed, pre-planned trip to go overseas and speak at an international climate conference. So Minouche Shafik isn’t going to be there. So instead, the presidents of Harvard, and Penn, and MIT show up. And it turned out to be a disaster for these universities.

They were asked very pointed questions about the kind of speech taking place on their campuses, and they gave really convoluted academic answers back that just baffled the committee. But there was one question that really embodied the kind of disconnect between the Committee — And it wasn’t just Republicans, Republicans and Democrats on the Committee — and these college presidents. And that’s when they were asked a hypothetical.

Does calling for the genocide of Jews violate Penn’s rules or code of conduct? Yes or no?

If the speech turns into conduct, it can be harassment.

And two of the presidents, Claudine Gay of Harvard and Elizabeth Magill of the University of Pennsylvania, they’re unwilling to say in this really kind of intense back and forth that this speech would constitute a violation of their rules.

It can be, depending on the context.

What’s the context?

Targeted at an individual. Is it pervasive?

It’s targeted at Jewish students, Jewish individuals. Do you understand your testimony is dehumanizing them?

And it sets off a firestorm.

It does not depend on the context. The answer is yes. And this is why you should resign. These are unacceptable answers across the board.

Members of Congress start calling for their resignations. Alumni are really, really ticked off. Trustees of the University start to wonder, I don’t know that these leaders really have got this under control. And eventually, both of them lose their jobs in a really high profile way.

Right. And as you’ve hinted at, for somewhat peculiar scheduling reasons, Columbia’s President escapes this disaster of a hearing in what has to be regarded as the best timing in the history of the American Academy.

Yeah, exactly. And Columbia is watching all this play out. And I think their first response was relief that she was not in that chair, but also a recognition that, sooner or later, their turn was going to come back around and they were going to have to sit before Congress.

Why were they so certain that they would probably end up before Congress and that this wasn’t a case of completely dodging a bullet?

Well, they remain under investigation by the committee. But also, as the winter wears on, all the same intense protests just continue unabated. So in many ways, Columbia’s like these other campuses. But in some ways, it’s even more intense. This is a university that has both one of the largest Jewish student populations of any of its peers. But it also has a large Arab and Muslim student population, a big Middle Eastern studies program. It has a dual degree program in Tel Aviv.

And it’s a university on top of all that that has a real history of activism dating back to the 1960s. So when students are recruited or choose to come to Columbia, they’re actively opting into a campus that prides itself on being an activist community. It’s in the middle of New York City. It’s a global place. They consider the city and the world, really, like a classroom to Columbia.

In other words, if any campus was going to be a hotbed of protest and debate over this conflict, it was going to be Columbia University.

Exactly. And when this spring rolls around, the stars finally align. And the same congressional committee issues another invitation to Minouche Shafik, Columbia’s President, to come and testify. And this time, she has no excuse to say no.

But presumably, she is well aware of exactly what testifying before this committee entails and is highly prepared.

Columbia knew this moment was coming. They spent months preparing for this hearing. They brought in outside consultants, crisis communicators, experts on anti-Semitism. The weekend before the hearing, she actually travels down to Washington to hole up in a war room, where she starts preparing her testimony with mock questioners and testy exchanges to prep her for this. And she’s very clear on what she wants to try to do.

Where her counterparts had gone before the committee a few months before and looked aloof, she wanted to project humility and competence, to say, I know that there’s an issue on my campus right now with some of these protests veering off into anti-Semitic incidents. But I’m getting that under control. I’m taking steps in good faith to make sure that we restore order to this campus, while allowing people to express themselves freely as well.

So then the day of her actual testimony arrives. And just walk us through how it goes.

The Committee on Education and Workforce will come to order. I note that —

So Wednesday morning rolls around. And President Shafik sits at the witness stand with two of her trustees and the head of Columbia’s new anti-Semitism task force.

Columbia stands guilty of gross negligence at best and at worst has become a platform for those supporting terrorism and violence against the Jewish people.

And right off the bat, they’re put through a pretty humbling litany of some of the worst hits of what’s been happening on campus.

For example, just four days after the harrowing October 7 attack, a former Columbia undergraduate beat an Israeli student with a stick.

The Republican Chairwoman of the Committee, Virginia Foxx, starts reminding her that there was a student who was actually hit with a stick on campus. There was another gathering more recently glorifying Hamas and other terrorist organizations, and the kind of chants that have become an everyday chorus on campus, which many Jewish students see as threatening. But when the questioning starts, President Shafik is ready. One of the first ones she gets is the one that tripped up her colleagues.

Does calling for the genocide of Jews violate Columbia’s code of conduct, Mr. Greenwald?

And she answers unequivocally.

Dr. Shafik?

Yes, it does.

And, Professor —

That would be a violation of Columbia’s rules. They would be punished.

As President of Columbia, what is it like when you hear chants like, by any means necessary or Intifada Revolution?

I find those chants incredibly distressing. And I wish profoundly that people would not use them on our campus.

And in some of the most interesting exchanges of the hearing, President Shafik actually opens Columbia’s disciplinary books.

We have already suspended 15 students from Columbia. We have six on disciplinary probation. These are more disciplinary actions that have been taken probably in the last decade at Columbia. And —

She talks about the number of students that have been suspended, but also the number of faculty that she’s had removed from the classroom that are being investigated for comments that either violate some of Columbia’s rules or make students uncomfortable. One case in particular really underscores this.

And that’s of a Middle Eastern studies professor named Joseph Massad. He wrote an essay not long after Hamas invaded Israel and killed 1,200 people, according to the Israeli government, where he described that attack with adjectives like awesome. Now, he said they’ve been misinterpreted, but a lot of people have taken offense to those comments.

Ms. Stefanik, you’re recognized for five minutes.

Thank you, Chairwoman. I want to follow up on my colleague, Rep Walberg’s question regarding Professor Joseph Massad. So let me be clear, President —

And so Representative Elise Stefanik, the same Republican who had tripped up Claudine Gay of Harvard and others in the last hearing, really starts digging in to President Shafik about these things at Columbia.

He is still Chair on the website. So has he been terminated as Chair?

Congresswoman, I —

And Shafik’s answers are maybe a little surprising.

— before getting back to you. I can confirm —

I know you confirmed that he was under investigation.

Yes, I can confirm that. But I —

Did you confirm he was still the Chair?

He says that Columbia is taking his case seriously. In fact, he’s under investigation right now.

Well, let me ask you this.

I need to check.

Will you make the commitment to remove him as Chair?

And when Stefanik presses her to commit to removing him from a campus leadership position —

I think that would be — I think — I would — yes. Let me come back with yes. But I think I — I just want to confirm his current status before I write —

We’ll take that as a yes, that you will confirm that he will no longer be chair.

Shafik seems to pause and think and then agree to it on the spot, almost like she is making administrative decisions with or in front of Congress.

Now, we did some reporting after the fact. And it turns out the Professor didn’t even realize he was under investigation. So he’s learning about this from the hearing too. So what this all adds up to, I think, is a performance so in line with what the lawmakers themselves wanted to hear, that at certain points, these Republicans didn’t quite know what to do with it. They were like the dog that caught the car.

Columbia beats Harvard and UPenn.

One of them, a Republican from Florida, I think at one point even marvelled, well, you beat Harvard and Penn.

Y’all all have done something that they weren’t able to do. You’ve been able to condemn anti-Semitism without using the phrase, it depends on the context. But the —

So Columbia’s president has passed this test before this committee.

Yeah, this big moment that tripped up her predecessors and cost them their jobs, it seems like she has cleared that hurdle and dispatched with the Congressional committee that could have been one of the biggest threats to her presidency.

Without objection, there being no further business, the committee stands adjourned. [BANGS GAVEL]

But back on campus, some of the students and faculty who had been watching the hearing came away with a very different set of conclusions. They saw a president who was so eager to please Republicans in Congress that she was willing to sell out some of the University’s students and faculty and trample on cherished ideas like academic freedom and freedom of expression that have been a bedrock of American higher education for a really long time.

And there was no clearer embodiment of that than what had happened that morning just as President Shafik was going to testify before Congress. A group of students before dawn set up tents in the middle of Columbia’s campus and declared themselves a pro-Palestinian encampment in open defiance of the very rules that Dr. Shafik had put in place to try and get these protests under control.

So these students in real-time are beginning to test some of the things that Columbia’s president has just said before Congress.

Exactly. And so instead of going to celebrate her successful appearance before Congress, Shafik walks out of the hearing room and gets in a black SUV to go right back to that war room, where she’s immediately confronted with a major dilemma. It basically boils down to this, she had just gone before Congress and told them, I’m going to get tough on these protests. And here they were. So either she gets tough and risks inflaming tension on campus or she holds back and does nothing and her words before Congress immediately look hollow.

And what does she decide?

So for the next 24 hours, she tries to negotiate off ramps. She consults with her Deans and the New York Police Department. And it all builds towards an incredibly consequential decision. And that is, for the first time in decades, to call the New York City Police Department onto campus in riot gear and break this thing up, suspend the students involved, and then arrest them.

To essentially eliminate this encampment.

Eliminate the encampment and send a message, this is not going to be tolerated. But in trying to quell the unrest, Shafik actually feeds it. She ends up leaving student protesters and the faculty who support them feeling betrayed and pushes a campus that was already on edge into a full blown crisis.

[SLOW TEMPO MUSIC]

After the break, what all of this has looked like to a student on Columbia’s campus. We’ll be right back.

[PHONE RINGS]

Is this Isabella?

Yes, this is she.

Hi, Isabella. It’s Michael Barbaro from “The Daily.”

Hi. Nice to meet you.

Earlier this week, we called Isabella Ramírez, the Editor in Chief of Columbia’s undergraduate newspaper, “The Columbia Daily Spectator,” which has been closely tracking both the protests and the University’s response to them since October 7.

So, I mean, in your mind, how do we get to this point? I wonder if you can just briefly describe the key moments that bring us to where we are right now.

Sure. Since October 7, there has certainly been constant escalation in terms of tension on campus. And there have been a variety of moves that I believe have distanced the student body, the faculty, from the University and its administration, specifically the suspension of Columbia’s chapters of Students for Justice in Palestine and Jewish Voice for Peace. And that became a huge moment in what was characterized as suppression of pro-Palestinian activism on campus, effectively rendering those groups, quote, unquote, unauthorized.

What was the college’s explanation for that?

They had cited in that suspension a policy which states that a demonstration must be approved within a certain window, and that there must be an advance notice, and that there’s a process for getting an authorized demonstration. But the primary point was this policy that they were referring to, which we later reported, was changed before the suspension.

So it felt a little ad hoc to people?

Yes, it certainly came as a surprise, especially at “Spectator.” We’re nerds of the University in the sense that we are familiar with faculty and University governance. But even to us, we had no idea where this policy was coming from. And this suspension was really the first time that it entered most students’ sphere.

Columbia’s campus is so known for its activism. And so in my time of being a reporter, of being an editor, I’ve overseen several protests. And I’ve never seen Columbia penalize a group for, quote, unquote, not authorizing a protest. So that was certainly, in our minds, unprecedented.

And I believe part of the justification there was, well, this is a different time. And I think that is a reasonable thing to say. But I think a lot of students, they felt it was particularly one-sided, that it was targeting a specific type of speech or a specific type of viewpoint. Although, the University, of course, in its explicit policies, did not outline, and was actually very explicit about not targeting specific viewpoints —

So just to be super clear, it felt to students — and it sounds like, journalistically, it felt to you — that the University was coming down in a uniquely one-sided way against students who were supporting Palestinian rights and may have expressed some frustrations with Israel in that moment.

Yes. Certainly —

Isabella says that this was just the beginning of a really tense period between student protesters and the University. After those two student groups were suspended, campus protests continued. Students made a variety of demands. They asked that the University divest from businesses that profit from Israel’s military operations in Gaza. But instead of making any progress, the protests are met with further crackdown by the University.

And so as Isabella and her colleagues at the college newspaper see it, there’s this overall chilling effect that occurs. Some students become fearful that if they participate in any demonstrations, they’re going to face disciplinary action. So fast forward now to April, when these student protesters learned that President Shafik is headed to Washington for her congressional testimony. It’s at this moment that they set out to build their encampment.

I think there was obviously a lot of intention in timing those two things. I think it’s inherently a critique on a political pressure and this congressional pressure that we saw build up against, of course, Claudine Gay at Harvard and Magill at UPenn. So I think a lot of students and faculty have been frustrated at this idea that there are not only powers at the University that are dictating what’s happening, but there are perhaps external powers that are also guiding the way here in terms of what the University feels like it must do or has to do.

And I think that timing was super crucial. Having the encampment happen on the Wednesday morning of the hearing was an incredible, in some senses, interesting strategy to direct eyes to different places.

All eyes were going to be on Shafik in DC. But now a lot of eyes are on New York. The encampment is set up in the middle of the night slash morning, prior to the hearing. And so what effectively happens is they caught Shafik when she wasn’t on campus, when a lot of senior administration had their resources dedicated to supporting Shafik in DC.

And you have all of those people not necessarily out of commission, but with their focus elsewhere. So the encampment is met with very little resistance at the beginning. There were public safety officers floating around and watching. But at the very beginning hours, I think there was a sense of, we did it.

[CHANTING]: Disclose! Divest! We will not stop! We will not rest. Disclose! Divest! We will not stop!

It would be quite surprising to anybody and an administrator to now suddenly see dozens of tents on this lawn in a way that I think very purposely puts an imagery of, we’re here to stay. As the morning evolved and congressional hearings continued —

Minouche Shafik, open your eyes! Use of force, genocide!

Then we started seeing University delegates that were coming to the encampment saying, you may face disciplinary action for continuing to be here. I think that started around almost — like 9:00 or 10:00 AM, they started handing out these code of conduct violation notices.

Hell no! Hell no! Hell no!

Then there started to be more public safety action and presence. So they started barricading the entrances. The day progressed, there was more threat of discipline. The students became informed that if they continue to stay, they will face potential academic sanctions, potential suspension.

The more they try to silence us, the louder we will be! The more they —

I think a lot of people were like, OK, you’re threatening us with suspension. But so what?

This is about these systems that Minouche Shafik, that the Board of Trustees, that Columbia University is complicit in.

What are you going to do to try to get us out of here? And that was, obviously, promptly answered.

This is the New York State Police Department.

We will not stop!

You are attempting to participate in an unauthorized encampment. You will be arrested and charged with trespassing.

My phone blew up, obviously, from the reporters, from the editors, of saying, oh my god, the NYPD is on our campus. And as soon as I saw that, I came out. And I saw a huge crowd of students and affiliates on campus watching the lawns. And as I circled around that crowd, I saw the last end of the New York Police Department pulling away protesters and clearing out the last of the encampment.

[CHANTING]: We love you! We will get justice for you! We see you! We love you! We will get justice for you! We see you! We love you! We will get justice for you! We see you! We love you! We will get justice for you!

It was something truly unimaginable, over 100 students slash other individuals are arrested from our campus, forcefully removed. And although they were suspended, there was a feeling of traumatic event that has just happened to these students, but also this sense of like, OK, the worst of the worst that could have happened to us just happened.

And for those students who maybe couldn’t go back to — into campus, now all of their peers, who were supporters or are in solidarity, are — in some sense, it’s further emboldened. They’re now not just sitting on the lawns for a pro-Palestinian cause, but also for the students, who have endured quite a lot.

So the crackdown, sought by the president and enforced by the NYPD, ends up, you’re saying, becoming a galvanizing force for a broader group of Columbia students than were originally drawn to the idea of ever showing up on the center of campus and protesting?

Yeah, I can certainly speak to the fact that I’ve seen my own peers, friends, or even acquaintances, who weren’t necessarily previously very involved in activism and organizing efforts, suddenly finding themselves involved.

Can I — I just have a question for you, which is all journalism, student journalism or not student journalism, is a first draft of history. And I wonder if we think of this as a historic moment for Columbia, how you imagine it’s going to be remembered.

Yeah, there is no doubt in my mind that this will be a historic moment for Columbia.

I think that this will be remembered as a moment in which the fractures were laid bare. Really, we got to see some of the disunity of the community in ways that I have never really seen it before. And what we’ll be looking to is, where do we go from here? How does Columbia repair? How do we heal from all of this? So that is the big question in terms of what will happen.

Nick, Isabella Ramírez just walked us through what this has all looked like from the perspective of a Columbia student. And from what she could tell, the crackdown ordered by President Shafik did not quell much of anything. It seemed, instead, to really intensify everything on campus. I’m curious what this has looked like for Shafik.

It’s not just the students who are upset. You have faculty, including professors, who are not necessarily sympathetic to the protesters’ view of the war, who are really outraged about what Shafik has done here. They feel that she’s crossed a boundary that hasn’t been crossed on Columbia’s campus in a really long time.

And so you start to hear things by the end of last week like censure, no confidence votes, questions from her own professors about whether or not she can stay in power. So this creates a whole new front for her. And on top of it all, as this is going on, the encampment itself starts to reform tent-by-tent —

— almost in the same place that it was. And Shafik decides that the most important thing she could do is to try and take the temperature down, which means letting the encampment stand. Or in other words, leaning in the other direction. This time, we’re going to let the protesters have their say for a little while longer.

The problem with that is that, over the weekend, a series of images start to emerge from on campus and just off of it of some really troubling anti-Semitic episodes. In one case, a guy holds up a poster in the middle of campus and points it towards a group of Jewish students who are counter protesting. And it says, I’m paraphrasing here, Hamas’ next targets.

I saw an image of that. What it seemed to evoke was the message that Hamas should murder those Jewish students. That’s the way the Jewish students interpreted it.

It’s a pretty straightforward and jarring statement. At the same time, just outside of Columbia’s closed gates —

Stop killing children!

— protestors are showing up from across New York City. It’s hard to tell who’s affiliated with Columbia, who’s not.

Go back to Poland! Go back to Poland!

There’s a video that goes viral of one of them shouting at Jewish students, go back to Poland, go back to Europe.

In other words, a clear message, you’re not welcome here.

Right. In fact, go back to the places where the Holocaust was committed.

Exactly. And this is not representative of the vast majority of the protesters in the encampment, who mostly had been peaceful. They would later hold a Seder, actually, with some of the pro-Palestinian Jewish protesters in their ranks. But those videos are reaching members of Congress, the very same Republicans that Shafik had testified in front of just a few days before. And now they’re looking and saying, you have lost control of your campus, you’ve turned back on your word to us, and you need to resign.

They call for her outright resignation over this.

That’s right. Republicans in New York and across the country began to call for her to step down from her position as president of Columbia.

So Shafik’s dilemma here is pretty extraordinary. She has set up this dynamic where pleasing these members of Congress would probably mean calling in the NYPD all over again to sweep out this encampment, which would mean further alienating and inflaming students and faculty, who are still very upset over the first crackdown. And now both ends of this spectrum, lawmakers in Washington, folks on the Columbia campus, are saying she can’t lead the University over this situation before she’s even made any fateful decision about what to do with this second encampment. Not a good situation.

No. She’s besieged on all sides. For a while, the only thing that she can come up with to offer is for classes to go hybrid for the remainder of the semester.

So students who aren’t feeling safe in this protest environment don’t necessarily have to go to class.

Right. And I think if we zoom out for a second, it’s worth bearing in mind that she tried to choose a different path here than her counterparts at Harvard or Penn. And after all of this, she’s kind of ended up in the exact same thicket, with people calling for her job with the White House, the Mayor of New York City, and others. These are Democrats. Maybe not calling on her to resign quite yet, but saying, I don’t know what’s going on your campus. This does not look good.

That reality, that taking a different tack that was supposed to be full of learnings and lessons from the stumbles of her peers, the fact that didn’t really work suggests that there’s something really intractable going on here. And I wonder how you’re thinking about this intractable situation that’s now arrived on these college campuses.

Well, I don’t think it’s just limited to college campuses. We have seen intense feelings about this conflict play out in Hollywood. We’ve seen them in our politics in all kinds of interesting ways.

In our media.

We’ve seen it in the media. But college campuses, at least in their most idealized form, are something special. They’re a place where students get to go for four years to think in big ways about moral questions, and political questions, and ideas that help shape the world they’re going to spend the rest of their lives in.

And so when you have a question that feels as urgent as this war does for a lot of people, I think it reverberates in an incredibly intense way on those campuses. And there’s something like — I don’t know if it’s quite a contradiction of terms, but there’s a collision of different values at stake. So universities thrive on the ability of students to follow their minds and their voices where they go, to maybe even experiment a little bit and find those things.

But there are also communities that rely on people being able to trust each other and being able to carry out their classes and their academic endeavors as a collective so they can learn from one another. So in this case, that’s all getting scrambled. Students who feel strongly about the Palestinian cause feel like the point is disruption, that something so big, and immediate, and urgent is happening that they need to get in the faces of their professors, and their administrators, and their fellow students.

Right. And set up an encampment in the middle of campus, no matter what the rules say.

Right. And from the administration’s perspective, they say, well, yeah, you can say that and you can think that. And that’s an important process. But maybe there’s some bad apples in your ranks. Or though you may have good intentions, you’re saying things that you don’t realize the implications of. And they’re making this environment unsafe for others. Or they’re grinding our classes to a halt and we’re not able to function as a University.

So the only way we’re going to be able to move forward is if you will respect our rules and we’ll respect your point of view. The problem is that’s just not happening. Something is not connecting with those two points of view. And as if that’s not hard enough, you then have Congress and the political system with its own agenda coming in and putting its thumb on a scale of an already very difficult situation.

Right. And at this very moment, what we know is that the forces that you just outlined have created a dilemma, an uncertainty of how to proceed, not just for President Shafik and the students and faculty at Columbia, but for a growing number of colleges and universities across the country. And by that, I mean, this thing that seemed to start at Columbia is literally spreading.

Absolutely. We’re talking on a Wednesday afternoon. And these encampments have now started cropping up at universities from coast-to-coast, at Harvard and Yale, but also at University of California, at the University of Texas, at smaller campuses in between. And at each of these institutions, there’s presidents and deans, just like President Shafik at Columbia, who are facing a really difficult set of choices. Do they call in the police? The University of Texas in Austin this afternoon, we saw protesters physically clashing with police.

Do they hold back, like at Harvard, where there were dramatic videos of students literally running into Harvard yard with tents. They were popping up in real-time. And so Columbia, really, I think, at the end of the day, may have kicked off some of this. But they are now in league with a whole bunch of other universities that are struggling with the same set of questions. And it’s a set of questions that they’ve had since this war broke out.

And now these schools only have a week or two left of classes. But we don’t know when these standoffs are going to end. We don’t know if students are going to leave campus for the summer. We don’t know if they’re going to come back in the fall and start protesting right away, or if this year is going to turn out to have been an aberration that was a response to a really awful, bloody war, or if we’re at the beginning of a bigger shift on college campuses that will long outlast this war in the Middle East.

Well, Nick, thank you very much.

Thanks for having me, Michael.

We’ll be right back.

Here’s what else you need to know today. The United Nations is calling for an independent investigation into two mass graves found after Israeli forces withdrew from hospitals in Gaza. Officials in Gaza said that some of the bodies found in the graves were Palestinians who had been handcuffed or shot in the head and accused Israel of killing and burying them. In response, Israel said that its soldiers had exhumed bodies in one of the graves as part of an effort to locate Israeli hostages.

And on Wednesday, Hamas released a video of Hersh Goldberg-Polin, an Israeli-American dual citizen, whom Hamas has held hostage since October 7. It was the first time that he has been shown alive since his captivity began. His kidnapping was the subject of a “Daily” episode in October that featured his mother, Rachel. In response to Hamas’s video, Rachel issued a video of her own, in which she spoke directly to her son.

And, Hersh, if you can hear this, we heard your voice today for the first time in 201 days. And if you can hear us, I am telling you, we are telling you, we love you. Stay strong. Survive.

Today’s episode was produced by Sydney Harper, Asthaa Chaturvedi, Olivia Natt, Nina Feldman, and Summer Thomad, with help from Michael Simon Johnson. It was edited by Devon Taylor and Lisa Chow, contains research help by Susan Lee, original music by Marion Lozano and Dan Powell, and was engineered by Chris Wood. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly. That’s it for “The Daily.” I’m Michael Barbaro. See you tomorrow.


IMAGES

  1. (PDF) Will Artificial Intelligence Brighten or Threaten the Future

    artificial intelligence a threat to the future of humans essay

  2. SOLUTION: Essay About The Future Of Artificial Intelligence

    artificial intelligence a threat to the future of humans essay

  3. (PDF) Artificial Intelligence in the Computer-Age Threatens Human

    artificial intelligence a threat to the future of humans essay

  4. Essay on Artificial Intelligence

    artificial intelligence a threat to the future of humans essay

  5. (PDF) Is Artificial Intelligence A Threat?

    artificial intelligence a threat to the future of humans essay

  6. The future of artificial intelligence essay

    artificial intelligence a threat to the future of humans essay

VIDEO

  1. Deep Dive: Should we be wary of artificial intelligence?

  2. Artificial intelligence- the death of creativity. CSS 2024 essay paper

  3. WHY A.I IS A Silent Threat to Humanity's Future #ai, #technology #short

  4. Will ARTIFICIAL INTELLIGENCE Destroy Humanity? Why AI So Dangerous for us?

  5. FUTURE OF ARTIFICIAL INTELLIGENCE

  6. The Terrifying Truth About AI: Is Humanity Doomed?

COMMENTS

  1. Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not

    A 2023 survey of AI experts found that 36 percent fear that AI development may result in a "nuclear-level catastrophe.". Almost 28,000 people have signed on to an open letter written by the ...

  2. Artificial Intelligence and the Future of Humans

    Artificial Intelligence and the Future of Humans. 1. Concerns about human agency, evolution and survival. 2. Solutions to address AI's anticipated negative impacts. 3. Improvements ahead: How humans and AI might evolve together in the next decade. About this canvassing of experts. Acknowledgments.
