AI Will Transform Teaching and Learning. Let’s Get it Right.
At the recent AI+Education Summit, Stanford researchers, students, and industry leaders discussed both the potential of AI to transform education for the better and the risks at play.
When the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered AI began planning the inaugural AI+Education Summit last year, the public furor around AI had not reached its current level. This was the time before ChatGPT. Even so, intensive research was already underway across Stanford University to understand the vast potential of AI, including generative AI, to transform education as we know it.
By the time the summit was held on Feb. 15, ChatGPT had reached more than 100 million unique users, and 30% of all college students had used it for assignments, making it one of the fastest-adopted applications ever – certainly in education settings. Within the education world, teachers and school districts have been wrestling with how to respond to this emerging technology.
The AI+Education Summit explored a central question: How can AI like this and other applications be best used to advance human learning?
“Technology offers the prospect of universal access to increase fundamentally new ways of teaching,” said Graduate School of Education Dean Daniel Schwartz in his opening remarks. “I want to emphasize that a lot of AI is also going to automate really bad ways of teaching. So [we need to] think about it as a way of creating new types of teaching.”
Researchers across Stanford – from education, technology, psychology, business, law, and political science – joined industry leaders like Sal Khan, founder and CEO of Khan Academy, in sharing cutting-edge research and brainstorming ways to unlock the potential of AI in education in an ethical, equitable, and safe manner.
Participants also spent a major portion of the day engaged in small discussion groups in which faculty, students, researchers, staff, and other guests shared their ideas about AI in education. Discussion topics included natural language processing applied to education; developing students’ AI literacy; assisting students with learning differences; informal learning outside of school; fostering creativity; equity and closing achievement gaps; workforce development; and avoiding potential misuses of AI with students and teachers.
Several themes emerged over the course of the day on AI’s potential, as well as its significant risks.
First, a look at AI’s potential:
1. Enhancing personalized support for teachers at scale
Great teachers remain the cornerstone of effective learning. Yet teachers receive limited actionable feedback to improve their practice. AI presents an opportunity to support teachers as they refine their craft at scale through applications such as:
- Simulating students: AI language models can serve as practice students for new teachers. Percy Liang, director of the Stanford HAI Center for Research on Foundation Models, said that they are increasingly effective and are now capable of demonstrating confusion and asking adaptive follow-up questions.
- Real-time feedback and suggestions: Dora Demszky, assistant professor of education data science, highlighted AI's ability to provide real-time feedback and suggestions to teachers (e.g., questions to ask the class), creating a bank of live advice based on expert pedagogy.
- Post-teaching feedback: Demszky added that AI can produce post-lesson reports summarizing classroom dynamics, with potential metrics including student speaking time or the questions that triggered the most engagement. Research finds that when students talk more, learning improves.
- Refreshing expertise: Sal Khan, founder of the online learning environment Khan Academy, suggested that AI could help teachers stay up to date with the latest advancements in their field. A biology teacher, for example, could have AI brief them on the latest breakthroughs in cancer research or help update their curriculum.
2. Changing what is important for learners
Stanford political science Professor Rob Reich proposed a compelling question: Is generative AI comparable to the calculator in the classroom, or will it be a more detrimental tool? Today, the calculator is ubiquitous in middle and high schools, enabling students to quickly perform complex computations, graph equations, and solve problems. However, it has not resulted in the removal of basic mathematical computation from the curriculum: Students still know how to do long division and calculate exponents without technological assistance. On the other hand, Reich noted, writing is a way of learning how to think. Could outsourcing much of that work to AI harm students’ critical thinking development?
Liang suggested that students must still learn how the world works from first principles – whether basic addition or sentence structure. However, they no longer need full proficiency – in other words, they need not do all computation by hand or write every essay without AI support.
In fact, Demszky argued that by no longer requiring full manual proficiency, AI may actually raise the bar. The models won’t be doing the thinking for the students; rather, students will have to edit and curate, forcing them to engage more deeply than before. In Khan’s view, this allows learners to become architects who can pursue something more creative and ambitious.
And Noah Goodman, associate professor of psychology and of computer science, questioned the analogy, saying this tool may be more like the printing press, which democratized knowledge but did not eliminate the need for human writing skills.
3. Enabling learning without fear of judgment
Ran Liu, chief AI scientist at Amira Learning, said that AI has the potential to support learners’ self-confidence. Teachers commonly encourage class participation by insisting that there is no such thing as a stupid question. However, for most students, fear of judgment from their peers holds them back from fully engaging in many contexts. As Liu explained, children who believe themselves to be behind are the least likely to engage in these settings.
Interfaces that leverage AI can offer constructive feedback that does not carry the same stakes or cause the same self-consciousness as a human’s response. Learners are therefore more willing to engage, take risks, and be vulnerable.
One area in which this can be extremely valuable is soft skills. Emma Brunskill, associate professor of computer science, noted that an enormous number of soft skills are really hard to teach effectively, like communication, critical thinking, and problem-solving. With AI, a real-time agent can provide support and feedback, and learners can try different tactics as they seek to improve.
4. Improving learning and assessment quality
Bryan Brown, professor of education, said that “what we know about learning is not reflected in how we teach.” For example, teachers know that learning happens through powerful classroom discussions, yet only one student can speak up at a time. AI has the potential to support a single teacher trying to hold 35 unique conversations, one with each student.
This also applies to the workforce. During a roundtable discussion facilitated by Stanford Digital Economy Lab Director Erik Brynjolfsson and Candace Thille, associate professor of education and faculty lead on adult learning at the Stanford Accelerator for Learning, attendees noted that the inability to judge a learner’s skill profile is a leading industry challenge. AI has the potential to quickly determine a learner’s skills, recommend solutions to fill the gaps, and match them with roles that actually require those skills.
Of course, AI is no panacea. Now a look at AI’s significant risks:
1. Model output does not reflect true cultural diversity
At present, ChatGPT and AI more broadly generate text that fails to reflect the diversity of students served by the education system or capture the authentic voice of diverse populations. When the bot was asked to speak in the cadence of the author of The Hate U Give, which features an African American protagonist, ChatGPT simply added “yo” in front of random sentences. As Sarah Levine, assistant professor of education, explained, this gap fails to foster an equitable environment of connection and safety for some of America’s most underserved learners.
2. Models do not optimize for student learning
While ChatGPT spits out answers to queries, these responses are not designed to optimize for student learning. As Liang noted, the models are trained to deliver answers as fast as possible, but that is often in conflict with what would be pedagogically sound, whether that’s a more in-depth explanation of key concepts or a framing that is more likely to spark curiosity to learn more.
3. Incorrect responses come in pretty packages
Goodman demonstrated that AI can produce coherent text that is completely erroneous. His lab trained a virtual tutor that was tasked with solving and explaining algebra equations in a chatbot format. The chatbot would produce perfect sentences that exhibited top-quality teaching techniques, such as positive reinforcement, but fail to get to the right mathematical answer.
4. Advances exacerbate a motivation crisis
Chris Piech, assistant professor of computer science, told a story about a student who recently came into his office crying. The student was concerned that ChatGPT’s rapid progress would hurt their future job prospects after many years spent learning to code. Piech connected the incident to a broader existential motivation crisis, in which many students may no longer know what they should focus on or don’t see the value of their hard-earned skills.
The full impact of AI in education remains unclear at this juncture, but as all speakers agreed, things are changing, and now is the time to get it right.
Artificial Intelligence and Education: A Reading List
A bibliography to help educators prepare students and themselves for a future shaped by AI—with all its opportunities and drawbacks.
How should education change to address, incorporate, or challenge today’s AI systems, especially powerful large language models? What role should educators and scholars play in shaping the future of generative AI? The release of ChatGPT in November 2022 triggered an explosion of news, opinion pieces, and social media posts addressing these questions. Yet many are not aware of the current and historical body of academic work that offers clarity, substance, and nuance to enrich the discourse.
Linking the terms “AI” and “education” invites a constellation of discussions. This selection of articles is hardly comprehensive, but it includes explanations of AI concepts and provides historical context for today’s systems. It describes a range of possible educational applications as well as adverse impacts, such as learning loss and increased inequity. Some articles touch on philosophical questions about AI in relation to learning, thinking, and human communication. Others will help educators prepare students for civic participation around concerns including information integrity, impacts on jobs, and energy consumption. Yet others outline educator and student rights in relation to AI and exhort educators to share their expertise in societal and industry discussions on the future of AI.
Nabeel Gillani, Rebecca Eynon, Catherine Chiabaut, and Kelsey Finkel, “Unpacking the ‘Black Box’ of AI in Education,” Educational Technology & Society 26, no. 1 (2023): 99–111.
Whether we’re aware of it or not, AI was already widespread in education before ChatGPT. Nabeel Gillani et al. describe AI applications such as learning analytics and adaptive learning systems, automated communications with students, early warning systems, and automated writing assessment. They seek to help educators develop literacy around the capacities and risks of these systems by providing an accessible introduction to machine learning and deep learning as well as rule-based AI. They present a cautious view, calling for scrutiny of bias in such systems and inequitable distribution of risks and benefits. They hope that engineers will collaborate deeply with educators on the development of such systems.
Jürgen Rudolph, Samson Tan, and Shannon Tan, “ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education?” The Journal of Applied Learning and Teaching 6, no. 1 (January 24, 2023).
Jürgen Rudolph et al. give a practically oriented overview of ChatGPT’s implications for higher education. They explain the statistical nature of large language models as they tell the history of OpenAI and its attempts to mitigate bias and risk in the development of ChatGPT. They illustrate ways ChatGPT can be used with examples and screenshots. Their literature review shows the state of artificial intelligence in education (AIEd) as of January 2023. An extensive list of challenges and opportunities culminates in a set of recommendations that emphasizes explicit policy as well as expanding digital literacy education to include AI.
Emily M. Bender, Timnit Gebru, Angela McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021): 610–623.
Student and faculty understanding of the risks and impacts of large language models is central to AI literacy and civic participation around AI policy. This hugely influential paper details documented and likely adverse impacts of the current data-and-resource-intensive, non-transparent mode of development of these models. Bender et al. emphasize the ways in which these costs will likely be borne disproportionately by marginalized groups. They call for transparency around the energy use and cost of these models as well as transparency around the data used to train them. They warn that models perpetuate and even amplify human biases and that the seeming coherence of these systems’ outputs can be used for malicious purposes even though it doesn’t reflect real understanding.
The authors argue that inclusive participation in development can encourage alternate development paths that are less resource intensive. They further argue that beneficial applications for marginalized groups, such as improved automatic speech recognition systems, must be accompanied by plans to mitigate harm.
Erik Brynjolfsson, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” Daedalus 151, no. 2 (2022): 272–87.
Erik Brynjolfsson argues that when we think of artificial intelligence as aiming to substitute for human intelligence, we miss the opportunity to focus on how it can complement and extend human capabilities. Brynjolfsson calls for policy that shifts AI development incentives away from automation toward augmentation. Automation is more likely to result in the elimination of lower-level jobs and in growing inequality. He points educators toward augmentation as a framework for thinking about AI applications that assist learning and teaching. How can we create incentives for AI to support and extend what teachers do rather than substituting for teachers? And how can we encourage students to use AI to extend their thinking and learning rather than using AI to skip learning?
Kevin Scott, “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale,” Daedalus 151, no. 2 (2022): 75–84.
Brynjolfsson’s focus on AI as “augmentation” converges with Microsoft computer scientist Kevin Scott’s focus on “cognitive assistance.” Steering discussion of AI away from visions of autonomous systems with their own goals, Scott argues that near-term AI will serve to help humans with cognitive work. Scott situates this assistance in relation to evolving historical definitions of work and the way in which tools for work embody generalized knowledge about specific domains. He’s intrigued by the way deep neural networks can represent domain knowledge in new ways, as seen in the unexpected coding capabilities offered by OpenAI’s GPT-3 language model, which have enabled people with less technical knowledge to code. His article can help educators frame discussions of how students should build knowledge and what knowledge is still relevant in contexts where AI assistance is nearly ubiquitous.
Laura D. Tyson and John Zysman, “Automation, AI & Work,” Daedalus 151, no. 2 (2022): 256–71.
How can educators prepare students for future work environments integrated with AI and advise students on how majors and career paths may be affected by AI automation? And how can educators prepare students to participate in discussions of government policy around AI and work? Laura Tyson and John Zysman emphasize the importance of policy in determining how economic gains due to AI are distributed and how well workers weather disruptions due to AI. They observe that recent trends in automation and gig work have exacerbated inequality and reduced the supply of “good” jobs for low- and middle-income workers. They predict that AI will intensify these effects, but they point to the way collective bargaining, social insurance, and protections for gig workers have mitigated such impacts in countries like Germany. They argue that such interventions can serve as models to help frame discussions of intelligent labor policies for “an inclusive AI era.”
Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation: A Primer (RAND Corporation, 2022).
Educators’ considerations of academic integrity and AI text can draw on parallel discussions of authenticity and labeling of AI content in other societal contexts. Artificial intelligence has made deepfake audio, video, and images as well as generated text much more difficult to detect as such. Here, Todd Helmus considers the consequences to political systems and individuals as he offers a review of the ways in which these can and have been used to promote disinformation. He considers ways to identify deepfakes and ways to authenticate provenance of videos and images. Helmus advocates for regulatory action, tools for journalistic scrutiny, and widespread efforts to promote media literacy. As well as informing discussions of authenticity in educational contexts, this report might help us shape curricula to teach students about the risks of deepfakes and unlabeled AI.
William Hasselberger, “Can Machines Have Common Sense?” The New Atlantis 65 (2021): 94–109.
Students, by definition, are engaged in developing their cognitive capacities; their understanding of their own intelligence is in flux and may be influenced by their interactions with AI systems and by AI hype. In his review of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson, William Hasselberger warns that in overestimating AI’s ability to mimic human intelligence we devalue the human and overlook human capacities that are integral to everyday life decision making, understanding, and reasoning. Hasselberger provides examples of both academic and everyday common-sense reasoning that continue to be out of reach for AI. He provides a historical overview of debates around the limits of artificial intelligence and its implications for our understanding of human intelligence, citing the likes of Alan Turing and Marvin Minsky as well as contemporary discussions of data-driven language models.
Gwo-Jen Hwang and Nian-Shing Chen, “Exploring the Potential of Generative Artificial Intelligence in Education: Applications, Challenges, and Future Research Directions,” Educational Technology & Society 26, no. 2 (2023).
Gwo-Jen Hwang and Nian-Shing Chen are enthusiastic about the potential benefits of incorporating generative AI into education. They outline a variety of roles a large language model like ChatGPT might play, from student to tutor to peer to domain expert to administrator. For example, educators might assign students to “teach” ChatGPT on a subject. Hwang and Chen provide sample ChatGPT session transcripts to illustrate their suggestions. They share prompting techniques to help educators better design AI-based teaching strategies. At the same time, they are concerned about student overreliance on generative AI. They urge educators to guide students to use it critically and to reflect on their interactions with AI. Hwang and Chen don’t touch on concerns about bias, inaccuracy, or fabrication, but they call for further research into the impact of integrating generative AI on learning outcomes.
Lauren Goodlad and Samuel Baker, “Now the Humanities Can Disrupt ‘AI’,” Public Books (February 20, 2023).
Lauren Goodlad and Samuel Baker situate both academic integrity concerns and the pressures on educators to “embrace” AI in the context of market forces. They ground their discussion of AI risks in a deep technical understanding of the limits of predictive models at mimicking human intelligence. Goodlad and Baker urge educators to communicate the purpose and value of teaching with writing to help students engage with the plurality of the world and communicate with others. Beyond the classroom, they argue, educators should question tech industry narratives and participate in public discussion on regulation and the future of AI. They see higher education as resilient: academic skepticism about former waves of hype around MOOCs, for example, suggests that educators will not likely be dazzled or terrified into submission to AI. Goodlad and Baker hope we will instead take up our place as experts who should help shape the future of the role of machines in human thought and communication.
Kathryn Conrad, “Sneak Preview: A Blueprint for an AI Bill of Rights for Education,” Critical AI 2.1 (July 17, 2023).
How can the field of education put the needs of students and scholars first as we shape our response to AI, the way we teach about it, and the way we might incorporate it into pedagogy? Kathryn Conrad’s manifesto builds on and extends the Biden administration’s Office of Science and Technology Policy 2022 “Blueprint for an AI Bill of Rights.” Conrad argues that educators should have input into institutional policies on AI and access to professional development around AI. Instructors should be able to decide whether and how to incorporate AI into pedagogy, basing their decisions on expert recommendations and peer-reviewed research. Conrad outlines student rights around AI systems, including the right to know when AI is being used to evaluate them and the right to request alternate human evaluation. They deserve detailed instructor guidance on policies around AI use without fear of reprisals. Conrad maintains that students should be able to appeal any charges of academic misconduct involving AI, and they should be offered alternatives to any AI-based assignments that might put their creative work at risk of exposure or use without compensation. Both students’ and educators’ legal rights must be respected in any educational application of automated generative systems.
MIT Technology Review
Unleashing the power of AI for education
Provided by Microsoft Education
Artificial intelligence (AI) is a major influence on the state of education today, and the implications are huge. AI has the potential to transform how our education system operates, heighten the competitiveness of institutions, and empower teachers and learners of all abilities.
Dan Ayoub is general manager of education at Microsoft.
The opportunities for AI to support education are so broad that recently Microsoft commissioned research on this topic from IDC to understand where the company can help. The findings illustrate the strategic nature of AI in education and highlight the need for technologies and skills to make the promise of AI a reality.
The results showed almost universal acceptance among educators that AI is important for their future—99.4% said AI would be instrumental to their institution’s competitiveness within the next three years, with 15% calling it a “game-changer.” Nearly all are trying to work with it too—92% said they have started to experiment with the technology.
Yet most institutions still lack a formal data strategy or practical measures in place to advance AI capabilities, which remains a key inhibitor. The finding indicates that although the vast majority of leaders understand the need for an AI strategy, they may lack clarity on how to implement one – or simply don’t know where to start.
David Kellermann has become a pioneer in how to use AI in the classroom. At the University of New South Wales in Sydney, Australia, Kellermann has built a question bot capable of answering questions on its own or delivering video of past lectures. The bot can also flag student questions for teaching assistants (TAs) to follow up. What’s more, it keeps getting better at its job as it’s exposed to more and different questions over time.
Kellermann began his classroom’s transformation with a single Surface laptop. He’s also employed out-of-the-box systems like Microsoft Teams to foster collaboration among his students. Kellermann used the Microsoft Power Platform to build the question bot, and he’s also built a dashboard using Power BI that plots the class’s exam scores and builds personalized study packs based on students’ past performance.
Kellermann’s project illustrates a key principle for organizations in nearly every industry when it comes to working with AI and machine learning—knowing where to start, starting small, and adding to your capabilities over time. The potential applications of AI are so vast, even the most sophisticated organizations can become bogged down trying to do too much, too soon. Often, it comes down to simply having a small goal and building from there.
As an AI initiative gradually grows and becomes more sophisticated, it’s also important to have access to experts who can navigate technology and put the right systems in place. To gain a foothold with AI, institutions need tools, technologies, and skills.
This is a big focus for our work at Microsoft—to support educational institutions and classrooms. We’ve seen the strides some institutions have already taken to bring the potential of AI technologies into the classroom. But we also know there is much more work to do. Over the next few years, AI’s impact will be felt in several ways—managing operations and processes, data-driven programs to increase effectiveness, saving energy with smart buildings, creating a modern campus with a secure and safe learning environment.
But its most important and far-reaching impact may lie in AI’s potential to change the way teachers teach and students learn, helping maximize student success and prepare them for the future.
Collective intelligence tools will be available to save teachers time on tasks like grading papers, freeing teachers and TAs to spend more time with students. AI can also help identify struggling students through behavioral cues and give them a nudge in the right direction.
AI can also help educators foster greater inclusivity. AI-based language translation, for example, can enable students from more diverse backgrounds to participate in a class or listen to a lecture. Syracuse University’s School of Information Studies is working to drive experiential learning for students while helping solve real-world problems, such as through Our Ability, a website that helps people with disabilities find jobs.
Schools can even use AI to offer a truly personalized learning experience—overcoming one of the biggest limitations of our modern, one-to-many education model. Kellermann’s personalized learning system in Sydney shows that the technology is here today.
AI has the power to become a great equalizer in education and a key differentiator for institutions that embrace it. Schools that adopt AI in clever ways are going to show better student success and empower their learners to enter the work force of tomorrow.
Given its importance, institutions among that 92% should start thinking now about the impact they can achieve with AI technologies. Do you want to grade papers more quickly? Empower teachers to spend more time with students? Whatever it is, it’s important to have that goal in mind, and then maybe dream a little.
- Research article
- Open access
- Published: 24 April 2023
Artificial intelligence in higher education: the state of the field
- Helen Crompton, ORCID: orcid.org/0000-0002-1775-8219
- Diane Burke
International Journal of Educational Technology in Higher Education volume 20 , Article number: 22 ( 2023 ) Cite this article
This systematic review provides unique findings with an up-to-date examination of artificial intelligence (AI) in higher education (HE) from 2016 to 2022. Using PRISMA principles and protocol, 138 articles were identified for full examination. Using a priori and grounded coding, the data from the 138 articles were extracted, analyzed, and coded. The findings show that in 2021 and 2022, publications rose to nearly two to three times the number in previous years. With this rapid rise in AIEd HE publications, new trends have emerged. Research was conducted in six of the seven continents of the world, and the lead in number of publications has shifted from the US to China. Another new trend concerns researcher affiliation: prior studies showed a lack of researchers from departments of education, which has now become the most dominant department. Undergraduate students were the most studied population, at 72%. Similar to the findings of other studies, language learning was the most common subject domain, including writing, reading, and vocabulary acquisition. In examining who the AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers. In answering the overarching question of how AIEd was used in HE, grounded coding was used, and five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. This systematic review also revealed gaps in the literature to serve as a springboard for future researchers, including work on new tools such as ChatGPT.
- A systematic review examining AIEd in higher education (HE) up to the end of 2022.
- A unique finding: the country publishing the most studies has switched from the US to China.
- A two- to threefold increase in studies published in 2021 and 2022 compared to prior years.
- AIEd was used for: Assessment/Evaluation, Predicting, AI Assistant, Intelligent Tutoring System, and Managing Student Learning.
Introduction
The use of artificial intelligence (AI) in higher education (HE) has risen quickly in the last 5 years (Chu et al., 2022), with a concomitant proliferation of new AI tools available. Scholars (viz., Chen et al., 2020; Crompton et al., 2020, 2021) report on the affordances of AI to both instructors and students in HE. These benefits include the use of AI in HE to adapt instruction to the needs of different types of learners (Verdú et al., 2017), to provide customized prompt feedback (Dever et al., 2020), to develop assessments (Baykasoğlu et al., 2018), and to predict academic success (Çağataylı & Çelebi, 2022). These studies help to inform educators about how artificial intelligence in education (AIEd) can be used in higher education.
Nonetheless, a gap has been highlighted by scholars (viz., Hrastinski et al., 2019 ; Zawacki-Richter et al., 2019 ) regarding an understanding of the collective affordances provided through the use of AI in HE. Therefore, the purpose of this study is to examine extant research from 2016 to 2022 to provide an up-to-date systematic review of how AI is being used in the HE context.
Artificial intelligence has become pervasive in the lives of twenty-first century citizens and is being proclaimed as a tool that can be used to enhance and advance all sectors of our lives (Górriz et al., 2020). The application of AI has attracted great interest in HE, which is highly influenced by the development of information and communication technologies (Alajmi et al., 2020). AI is a tool used across subject disciplines, including language education (Liang et al., 2021), engineering education (Shukla et al., 2019), mathematics education (Hwang & Tu, 2021), and medical education (Winkler-Schwartz et al., 2019).
Artificial intelligence
The term artificial intelligence is not new. It was coined in 1956 by McCarthy (Cristianini, 2016), who followed up on the work of Turing (e.g., Turing, 1937, 1950). Turing described the existence of intelligent reasoning and thinking that could go into intelligent machines. The definition of AI has grown and changed since 1956, as there have been significant advancements in AI capabilities. A current definition of AI is “computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and the use of data for complex processing tasks” (Popenici et al., 2017, p. 2). The interdisciplinary interest of scholars from linguistics, psychology, education, and neuroscience, who connect AI to the nomenclature, perceptions, and knowledge of their own disciplines, creates a challenge in defining AI and, with it, the need for categories of AI within specific disciplinary areas. This paper focuses on the category of AI in Education (AIEd) and how AI is specifically used in higher educational contexts.
As the field of AIEd is growing and changing rapidly, there is a need to increase academic understanding of it. Scholars (viz., Hrastinski et al., 2019; Zawacki-Richter et al., 2019) have drawn attention to the need to better understand the power of AIEd in educational contexts. The following section provides a summary of previous research regarding AIEd.
Extant systematic reviews
This growing interest in AIEd has led scholars to investigate the research on the use of artificial intelligence in education. Some scholars have conducted systematic reviews focused on a specific subject domain. For example, Liang et al. (2021) conducted a systematic review and bibliographic analysis of the roles and research foci of AI in language education. Shukla et al. (2019) focused their longitudinal bibliometric analysis on 30 years of using AI in engineering. Hwang and Tu (2021) conducted a bibliometric mapping analysis of the roles and trends in the use of AI in mathematics education, and Winkler-Schwartz et al. (2019) specifically examined the use of AI in medical education, looking for best practices in the use of machine learning to assess surgical expertise. These studies provide a specific focus on the use of AIEd in HE but do not provide an understanding of AI across HE.
Taking a broader view of AIEd in HE, Ouyang et al. (2022) conducted a systematic review of AIEd in online higher education, investigating the literature on the use of AI from 2011 to 2020. The findings show that performance prediction, resource recommendation, automatic assessment, and improvement of learning experiences are the four main functions of AI applications in online higher education. Salas-Pilco and Yang (2022) focused on AI applications in Latin American higher education. The results revealed that the main AI applications in higher education in Latin America are: (1) predictive modeling, (2) intelligent analytics, (3) assistive technology, (4) automatic content analysis, and (5) image analytics. These studies provide valuable information for the online and Latin American contexts but not an overarching examination of AIEd in HE.
Other studies have examined AIEd across HE more broadly. Hinojo-Lucena et al. (2019) conducted a bibliometric study on the impact of AIEd in HE, analyzing the scientific production of AIEd HE publications indexed in the Web of Science and Scopus databases from 2007 to 2017. This study revealed that most of the published documents were proceedings papers, that the United States had the highest number of publications, and that the most cited articles were about implementing virtual tutoring to improve learning. Chu et al. (2022) reviewed the top 50 most cited articles on AI in HE from 1996 to 2020, revealing that predictions of students’ learning status were most frequently discussed, that AI technology was most frequently applied in engineering courses, and that AI technologies most often had a role in profiling and prediction. Finally, Zawacki-Richter et al. (2019) analyzed AIEd in HE from 2007 to 2018 to reveal four primary uses of AIEd: (1) profiling and prediction, (2) assessment and evaluation, (3) adaptive systems and personalization, and (4) intelligent tutoring systems. There do not appear to be any studies examining the last 2 years of AIEd in HE; these authors describe the rapid speed of both AI development and the use of AIEd in HE and call for further research in this area.
Purpose of the study
This study responds to the appeal from scholars (viz., Chu et al., 2022; Hinojo-Lucena et al., 2019; Zawacki-Richter et al., 2019) for research investigating the benefits and challenges of AIEd within HE settings. As prior reviews of AIEd in HE covered studies only up to 2020, this study provides the most up-to-date analysis, examining research through the end of 2022.
The overarching question for this study is: what are the trends in HE research regarding the use of AIEd? The first two questions provide contextual information, such as where the studies occurred and the disciplines AI was used in. These contextual details are important for presenting the main findings of the third question of how AI is being used in HE.
RQ1. In what geographical location was the AIEd research conducted, and how has the trend in the number of publications evolved across the years?
RQ2. What departments were the first authors affiliated with, and what were the academic levels and subject domains in which AIEd research was being conducted?
RQ3. Who are the intended users of the AI technologies and what are the applications of AI in higher education?
A PRISMA systematic review methodology was used to answer the three questions guiding this study. The PRISMA extension for protocols (PRISMA-P; Moher et al., 2015) was utilized to provide an a priori roadmap for conducting a rigorous systematic review. Furthermore, PRISMA principles (Page et al., 2021) guided the searching, identifying, and selecting of articles, and then the reading, extracting, and managing of the secondary data gathered from those studies (Moher et al., 2015; PRISMA Statement, 2021). This systematic review approach supports an unbiased, impartial synthesis of the data (Hemingway & Brereton, 2009). Within the systematic review methodology, extracted data were aggregated and presented as whole numbers and percentages. A qualitative deductive and inductive coding methodology was also used to analyze the extant data and generate new theories on the use of AI in HE (Gough et al., 2017).
The research begins with the search for articles to be included in the study. Based on the research questions, the study parameters are defined, including the search years and the quality and types of publications to be included. Next, databases and journals are selected. A Boolean search is created and used to search those databases and journals. Once a set of publications is located, they are examined against inclusion and exclusion criteria to determine which studies will be included in the final review. The relevant data matching the research questions are then extracted from the final set of studies and coded. This methods section describes each of these steps in full detail to ensure transparency.
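The screening step described above can be sketched as a simple filter. The record fields and criteria below are illustrative assumptions for demonstration only, not the authors' actual screening instrument:

```python
# Illustrative sketch of the screening step: each candidate article is
# checked against simplified inclusion/exclusion criteria. The record
# fields are hypothetical, not the study's actual data structure.

def passes_screening(record):
    """Return True if a candidate article meets the simplified criteria."""
    if not (2016 <= record["year"] <= 2022):
        return False                      # outside the study's timeframe
    if not record["peer_reviewed"]:
        return False                      # only peer-reviewed journal articles
    if not record["higher_education"]:
        return False                      # must involve formal university education
    if not record["uses_ai_for_learning"]:
        return False                      # exclude e.g. courses merely about AI
    return True

candidates = [
    {"year": 2021, "peer_reviewed": True,  "higher_education": True,  "uses_ai_for_learning": True},
    {"year": 2014, "peer_reviewed": True,  "higher_education": True,  "uses_ai_for_learning": True},
    {"year": 2020, "peer_reviewed": False, "higher_education": True,  "uses_ai_for_learning": True},
]

included = [r for r in candidates if passes_screening(r)]
print(len(included))  # → 1
```

In the actual study, this screening was of course performed manually by two researchers rather than by rule, but the logic of applying each criterion in turn is the same.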
Search strategy
Only peer-reviewed journal articles were selected for examination in this systematic review, ensuring a level of confidence in the quality of the studies selected (Gough et al., 2017). The search parameters narrowed the focus to studies published from 2016 to 2022. This timeframe was selected to ensure the research was up to date, which is especially important given the rapid pace of change in technology and AIEd.
The data retrieval protocol employed an electronic and a hand search. The electronic search included educational databases within EBSCOhost. Then an additional electronic search was conducted of Wiley Online Library, JSTOR, Science Direct, and Web of Science. Within each of these databases a full text search was conducted. Aligned to the research topic and questions, the Boolean search included terms related to AI, higher education, and learning. The Boolean search is listed in Table 1 . In the initial test search, the terms “machine learning” OR “intelligent support” OR “intelligent virtual reality” OR “chatbot” OR “automated tutor” OR “intelligent agent” OR “expert system” OR “neural network” OR “natural language processing” were used. These were removed as they were subcategories of terms found in Part 1 of the search. Furthermore, inclusion of these specific AI terms resulted in a large number of computer science courses that were focused on learning about AI and not the use of AI in learning.
Part 2 of the search ensured that articles involved formal university education. The terms higher education and tertiary were both used to recognize the different terms used in different countries. The final Boolean search was “Artificial intelligence” OR AI OR “smart technologies” OR “intelligent technologies” AND “higher education” OR tertiary OR graduate OR undergraduate. Scholars (viz., Ouyang et al., 2022) who conducted a systematic review on AIEd in HE up to 2020 noted that they had missed relevant articles and recommended that other relevant journals be intentionally examined. Therefore, a hand search was also conducted to examine journals relevant to AIEd that may not be included in the databases. This is important as the field of AIEd is still relatively new, and journals focused on this field may not yet be indexed in databases. The hand search included The International Journal of Learning Analytics and Artificial Intelligence in Education, the International Journal of Artificial Intelligence in Education, and Computers & Education: Artificial Intelligence.
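A minimal sketch of how the final Boolean string might be evaluated against a document's full text. The naive matching below is a simplification of what the databases actually perform:

```python
import re

# Naive full-text evaluation of the study's final Boolean search:
# ("Artificial intelligence" OR AI OR "smart technologies" OR "intelligent technologies")
# AND ("higher education" OR tertiary OR graduate OR undergraduate)

PART_1 = ["artificial intelligence", "ai", "smart technologies", "intelligent technologies"]
PART_2 = ["higher education", "tertiary", "graduate", "undergraduate"]

def matches_boolean_search(full_text):
    text = full_text.lower()
    words = set(re.findall(r"[a-z]+", text))      # whole-word matching for single terms
    def hit(term):
        # multi-word terms are matched as phrases, single words as whole words
        return term in text if " " in term else term in words
    return any(hit(t) for t in PART_1) and any(hit(t) for t in PART_2)

print(matches_boolean_search("Artificial intelligence tutors in higher education"))  # True
print(matches_boolean_search("Machine learning for protein folding"))                # False
```

Real database search engines handle stemming, field scoping, and operator precedence far more carefully; this sketch only illustrates the AND-of-ORs structure of the two search parts.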
Electronic and hand searches resulted in 371 articles for possible inclusion. The search parameters within the electronic database search narrowed the results to peer-reviewed journal articles published from 2016 to 2022 and removed duplicates. Further screening was conducted manually: each of the 138 articles was reviewed in full by two researchers against the inclusion and exclusion criteria found in Table 2.
Inter-rater reliability was calculated by percentage agreement (Belur et al., 2018). The researchers reached 95% agreement in the coding; further discussion of misaligned articles resulted in 100% agreement. The screening process against the inclusion and exclusion criteria resulted in the exclusion of 237 articles, including duplicates and those removed under the criteria (see Fig. 1), leaving 138 articles for inclusion in this systematic review.
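Percentage agreement of the kind reported above is simply the share of items on which both coders assigned the same code. A minimal sketch, with invented code sequences for illustration:

```python
# Inter-rater percentage agreement (as in Belur et al., 2018): the share
# of items on which both coders assigned the same code. The code labels
# below are hypothetical examples, not the study's data.

def percentage_agreement(coder_a, coder_b):
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

coder_a = ["include", "include", "exclude", "include", "exclude"]
coder_b = ["include", "include", "exclude", "exclude", "exclude"]
print(percentage_agreement(coder_a, coder_b))  # → 80.0
```

Note that percentage agreement does not correct for chance agreement (unlike, say, Cohen's kappa), which is why the researchers supplemented it with discussion of every misaligned article.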
(From: Page et al., 2021 )
PRISMA flow chart of article identification and screening
The 138 articles were then coded to answer each of the research questions using deductive and inductive coding methods. Deductive coding involves examining data using a priori codes, which are pre-determined criteria; this process was used to code the countries, years, author affiliations, academic levels, and domains into their respective groups. Author affiliations were coded using the academic department of the first author of the study. First authors were chosen as the primary researchers of the studies, following past research practice (e.g., Zawacki-Richter et al., 2019). Whom the AI was intended for was also coded using the a priori codes of Student, Instructor, Manager, or Other. The Manager code was used for those involved in organizational tasks, e.g., tracking enrollment; Other was used for those not fitting the other three categories.
Inductive coding was used for the overarching question of this study: how AI is being used in HE. Researchers of extant systematic reviews on AIEd in HE (viz., Chu et al., 2022; Zawacki-Richter et al., 2019) often matched the use of AI to pre-existing a priori frameworks. A grounded coding methodology (Strauss & Corbin, 1995) was selected for this study to allow findings on the trends of AIEd in HE to emerge from the data. This is important as it allows a direct understanding of how AI is actually being used, rather than fitting the data to pre-existing ideas of how researchers may think it is being used.
The grounded coding process involved extracting from the articles how AI was being used in HE. “In vivo” coding (Saldana, 2015) was also used alongside grounded coding: in vivo codes take language directly from the article to capture the primary authors’ language and ensure consistency with their findings. The grounded coding design used a constant comparative method. Researchers identified important text related to the use of AI and, through an iterative process, developed initial codes into axial codes, constantly comparing uses of AI with uses of AI, uses of AI with codes, and codes with codes. Codes were deemed theoretically saturated when the majority of the data fit one of the codes. For both the a priori and the grounded coding, two researchers coded and reached an inter-rater percentage agreement of 96%; after discussing misaligned articles, 100% agreement was achieved.
Findings and discussion
The findings and discussion section is organized by the three questions guiding this study. The first two questions provide contextual information on the AIEd research, and the final question provides a rigorous investigation into how AI is being used in HE.
RQ1. In what geographical location was the AIEd research conducted, and how has the trend in the number of publications evolved across the years?
The 138 studies took place across 31 countries in six of the seven continents of the world. Nonetheless, that distribution was not equal across continents. Asia had the largest number of AIEd studies in HE at 41%; of the seven countries represented in Asia, 42 of the 58 studies were conducted in Taiwan and China. Europe, at 30%, was the second largest, with 15 countries contributing from one to eight studies apiece. North America, at 21% of the studies, had the third largest number, with the USA producing 21 of the 29 studies on that continent. The 21 studies from the USA place it second behind China. Only 1% of studies were conducted in South America and 2% in Africa. See Fig. 2 for a visual representation of study distribution across countries. The continents with high numbers of studies are dominated by high-income countries, while low-income countries show a paucity of publications.
Geographical distribution of the AIEd HE studies
Data from Zawacki-Richter et al.’s (2019) systematic review, which examined countries from 2007 to 2018, found that the USA conducted the most studies across the globe at 43 out of 146, with China second at eleven of the 146 papers. Researchers have noted a rapid trend of Chinese researchers publishing more papers on AI and securing more patents than their US counterparts in a field that was originally led by the US (viz., Li et al., 2021). The data from this study corroborate this trend, with China now leading in the number of AIEd publications.
With the accelerated use of AI in society, gathering data on the use of AIEd in HE provides the scholarly community with specific information on that growth and on whether it is as prolific as anticipated by scholars (e.g., Chu et al., 2022). The analysis of the 138 studies shows that the trend towards the use of AIEd in HE has greatly increased. There is a drop in 2019, but then a steep rise in 2021 and 2022; see Fig. 3.
Chronological trend in AIEd in HE
The data on the rise of AIEd in HE are similar to the findings of Chu et al. (2022), who noted an increase from 1996–2010 to 2011–2020. Nonetheless, Chu et al.’s parameters span decades, and such a rise is to be anticipated for a relatively new technology across a longitudinal review. Data from this study show a dramatic rise since 2020, with a 150% increase over the prior two years. The rise in 2021 and 2022 in HE could have been caused by the vast increase in HE faculty having to teach with technology during the pandemic lockdown. Faculty worldwide were using technologies, including AI, to explore how they could continue teaching and learning that had often been face-to-face prior to lockdown. The disadvantage of this rapid adoption of technology is that there was little time to explore the possibilities of AI to transform learning, and AI may have been used to replicate past teaching practices without considering new strategies, previously inconceivable, that the affordances of AI make possible.
However, a further examination of the research from 2021 to 2022 suggests that new strategies are being considered. For example, Liu et al.’s (2022) study used AIEd to provide information on students’ interactions in an online environment and to examine their cognitive effort, and Yao (2022) examined the use of AI to determine student emotions while learning.
RQ2. What departments were the first authors affiliated with, and what were the academic levels and subject domains in which AIEd research was being conducted?
Department affiliations
Data from the AIEd HE studies show that first authors were most frequently from colleges of education (28%), followed by computer science (20%). Figure 4 presents the 15 academic affiliations of the authors found in the studies. The wide variety of affiliations demonstrates the many ways AI can be used across educational disciplines, and how faculty in diverse areas, including tourism, music, and public affairs, were interested in how AI can be used for educational purposes.
Research affiliations
In an extant AIEd HE systematic review, Zawacki-Richter et al. (2019) titled their study Systematic review of research on artificial intelligence applications in higher education—where are the educators? The authors were keen to highlight that of the AIEd studies in HE, only six percent were written by researchers directly connected to the field of education (i.e., from a college of education). They found a notable lack of attention to the pedagogical and ethical implications of implementing AI in HE and a need for more educational perspectives on AI developments from educators conducting this work. It appears from our data that educators are now showing greater interest in leading these research endeavors, with the largest affiliated group belonging to education. This may again be due to the pandemic, with those in the field of education needing to support faculty in other disciplines and/or to explore technologies for their own teaching during the lockdown. It may also be due to professors of education becoming more familiar with AI tools, driven in part by increased societal attention. As much research by education faculty focuses on teaching and learning, they are well positioned to share their research on the potential affordances of AIEd with faculty in other disciplines.
Academic levels
The a priori coding of academic levels shows that the majority of studies involved undergraduate students, with 99 of the 138 (72%) focused on these students, compared to 12 of 138 (9%) for graduate students. Some of the studies used AI for both academic levels; see Fig. 5.
Academic level distribution by number of articles
This high percentage of studies focused on the undergraduate population is congruent with an earlier AIEd HE systematic review (viz., Zawacki-Richter et al., 2019) that also reported student academic levels. The focus on undergraduate students may be due to the variety of affordances offered by AIEd, such as predictive analytics on dropouts and academic performance. These uses of AI may be less needed for graduate students, who already have a record of performance from their undergraduate years. Another reason for this demographic focus may be convenience sampling, as researchers in HE typically have a much larger and more accessible undergraduate population than graduate population. This disparity between undergraduate and graduate populations is a concern, as AIEd has the potential to be valuable in both settings.
Subject domains
The studies were coded into 14 areas in HE: 13 subject domains and one category of AIEd used in the management of students; see Fig. 6. There is not a wide difference in the percentages of the top subject domains, with language learning at 17%, computer science at 16%, and engineering at 12%. The management of students category appeared third on the list at 14%. Prior studies have also found AIEd often used for language learning (viz., Crompton et al., 2021; Zawacki-Richter et al., 2019). These results differ, however, from Chu et al.’s (2022) findings, in which engineering dramatically led with 20 of the 50 studies and other subjects, such as language learning, appeared only once or twice. That study appears to be an outlier: while the searches were conducted in similar databases, it included only 50 studies from 1996 to 2020.
Subject domains of AIEd in HE
Previous scholars focusing on language learning primarily used AI for writing, reading, and vocabulary acquisition through the affordances of natural language processing and intelligent tutoring systems (e.g., Liang et al., 2021). This is similar to the findings in studies that used AI for automated feedback on writing in a foreign language (Ayse et al., 2022) and for AI translation support (Al-Tuwayrish, 2016). The large use of AI for managerial activities in this systematic review focused on making predictions (12 studies) and on admissions (three studies). It is positive to see AI used to look across multiple databases for trends emerging from data that may not have been anticipated or cross-referenced before (Crompton et al., 2022). For example, to examine dropouts, researchers may consider examining class attendance and may not examine other factors that appear unrelated. AI analysis can examine all factors and may find that dropping out is due to factors beyond class attendance.
RQ3. Who are the intended users of the AI technologies and what are the applications of AI in higher education?
Intended user of AI
Of the 138 articles, the a priori coding shows that 72% of the studies focused on Students, followed by Instructors at 17% and Managers at 11%; see Fig. 7. The studies provided examples of AI being used to support students, such as providing access to learning materials for inclusive learning (Gupta & Chen, 2022), immediate answers to student questions and self-testing opportunities (Yao, 2022), and instant personalized feedback (Mousavi et al., 2020).
Intended user
The data revealed a large emphasis on students in the use of AIEd in HE. This user focus differs from a recent systematic review on AIEd in K-12, which found that AIEd studies in K-12 settings prioritized teachers (Crompton et al., 2022). This may suggest that HE uses AI to focus more on students than K-12 does. However, the large number of student studies in HE may be due to the student population being more easily accessible to HE researchers, who may study their own students; the ethical review process is also typically much shorter in HE than in K-12. Therefore, the data on the intended focus should be reviewed with these alternative explanations in mind. It was interesting that Managers were the lowest focus both in K-12 and in this HE study. AI has great potential to collect, cross-reference, and examine data across large datasets to produce actionable insight. More focus on the use of AI by managers would tap into this potential.
How is AI used in HE
Using grounded coding, the use of AIEd in each of the 138 articles was examined, and five major codes emerged from the data. These codes provide insight into how AI was used in HE. The five codes are: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. Each of these codes also has axial codes, which are secondary codes serving as subcategories of the main category. Each code is delineated below with a figure of the codes alongside further descriptive information and examples.
Assessment/evaluation
Assessment and Evaluation was the most common use of AIEd in HE. Within this code there were six axial codes broken down into further codes; see Fig. 8 . Automatic assessment was most common, seen in 26 of the studies. It was interesting to see that this involved assessment of academic achievement, but also other factors, such as affect.
Codes and axial codes for assessment and evaluation
Automatic assessment was used to support a variety of learners in HE. As well as reducing the time it takes for instructors to grade (Rutner & Scott, 2022 ), automatic grading showed positive use for a variety of students with diverse needs. For example, Zhang and Xu ( 2022 ) used automatic assessment to improve academic writing skills of Uyghur ethnic minority students living in China. Writing has a variety of cultural nuances and in this study the students were shown to engage with the automatic assessment system behaviorally, cognitively, and affectively. This allowed the students to engage in self-regulated learning while improving their writing.
Feedback was a term often used in the studies, as students were given text and/or images as formative evaluation. Mousavi et al. (2020) developed a system to provide first-year biology students with automated personalized feedback tailored to each student’s specific demographics, attributes, and academic status. With the unique ability of AIEd to analyze multiple data sets involving a variety of different students, AI was also used to assess and provide feedback on students’ group work (viz., Ouatik et al., 2021).
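The logic of such personalized feedback systems can be sketched as simple rules keyed to student attributes. The thresholds, field names, and messages below are invented for illustration and are only loosely in the spirit of systems like Mousavi et al.'s (2020), which use far richer student models:

```python
# Hypothetical sketch of rule-based personalized feedback. The thresholds
# and messages are invented examples, not any study's actual system.

def personalized_feedback(student):
    """Return a list of feedback messages tailored to one student's record."""
    messages = []
    if student["quiz_avg"] < 60:
        messages.append("Your quiz average is below 60%: review this week's core readings.")
    if student["last_login_days_ago"] > 7:
        messages.append("You have not logged in for over a week: try a short daily session.")
    if not messages:
        messages.append("You are on track. Keep up the consistent work!")
    return messages

print(personalized_feedback({"quiz_avg": 55, "last_login_days_ago": 10}))
```

Production systems typically learn such rules or models from data rather than hand-coding them, but the tailoring-to-attributes structure is the same.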
AI also supports instructors in generating questions and creating multiple-question tests (Yang et al., 2021). For example, Lu et al. (2021) used natural language processing to create a system that automatically generated tests. Following a Turing-type test, researchers found that AI technologies can generate highly realistic short-answer questions. The ability of AI to develop multiple questions is a highly valuable affordance, as tests can take a great deal of time to create. However, it is important for instructors to always confirm questions provided by the AI to ensure they are correct and that they match the learning objectives for the class, especially in high-stakes summative assessments.
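The simplest form of automatic question generation is a cloze (fill-in-the-blank) item built from a source sentence. This toy sketch is only a stand-in for NLP-based systems such as Lu et al.'s (2021), which are far more sophisticated; the sentence and helper name are invented:

```python
# Toy cloze-question generator: blanks out a chosen key term in a source
# sentence. Real NLP systems select terms and phrase questions automatically;
# here the key term is supplied by hand for illustration.

def make_cloze(sentence, key_term):
    if key_term not in sentence:
        raise ValueError("key term not found in sentence")
    question = sentence.replace(key_term, "_____", 1)
    return {"question": question, "answer": key_term}

q = make_cloze("Mitochondria are the powerhouse of the cell.", "Mitochondria")
print(q["question"])  # → _____ are the powerhouse of the cell.
print(q["answer"])    # → Mitochondria
```

Even with a generator this crude, the instructor-review step stressed above still applies: every generated item should be checked against the learning objectives before use.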
Another axial code within assessment and evaluation revealed that AI was used to review activities in the online space. This included evaluating students’ reflections, achievement goals, community identity, and higher order thinking (viz., Huang et al., 2021). Three studies used AIEd to evaluate educational materials, including general resources and textbooks (viz., Koć‑Januchta et al., 2022). It is interesting to see AI used for the assessment of educational products, rather than of educational artifacts developed by students. While the process may be very similar in nature, this shows researchers thinking beyond the traditional use of AI for assessment to provide other affordances.
Predicting
Predicting was a common use of AIEd in HE, with 21 studies focused specifically on the use of AI for forecasting trends in data. Ten axial codes emerged on the ways AI was used to predict different topics, with nine focused on predictions regarding students and one on predicting the future of higher education. See Fig. 9.
Predicting axial codes
Extant systematic reviews on HE highlighted the use of AIEd for prediction (viz., Chu et al., 2022; Hinojo-Lucena et al., 2019; Ouyang et al., 2022; Zawacki-Richter et al., 2019). Ten of the articles in this study used AI for predicting academic performance. Many of the axial codes overlapped, such as predicting at-risk students and predicting dropouts; however, each provided distinct affordances. An example is the study by Qian et al. (2021), in which the researchers examined students taking a MOOC course. MOOCs can be challenging environments in which to gather information on individual students, given the vast number of students taking a course (Krause & Lowe, 2014). However, Qian et al. used AIEd to predict students’ future grades by inputting 17 different learning features, including past grades, into an artificial neural network. The findings predicted students’ grades and highlighted students at risk of dropping out of the course.
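A minimal sketch of this kind of prediction from learning features. The data here are synthetic and a single logistic unit stands in for Qian et al.'s (2021) full artificial neural network; nothing below reproduces their actual model or features:

```python
import numpy as np

# Sketch: predict an "at risk" label from 17 learning features (e.g. past
# grades, activity counts). Synthetic, linearly separable data; a single
# logistic unit trained by gradient descent stands in for a full network.

rng = np.random.default_rng(0)
n_students, n_features = 200, 17

X = rng.normal(size=(n_students, n_features))     # synthetic learning features
true_w = rng.normal(size=n_features)
y = (X @ true_w > 0).astype(float)                # synthetic at-risk labels

w = np.zeros(n_features)
for _ in range(500):                              # batch gradient descent on log-loss
    p = 1 / (1 + np.exp(-(X @ w)))                # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n_students

pred = 1 / (1 + np.exp(-(X @ w))) > 0.5
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice such models are validated on held-out students, and (as the K-12 comparison below the original text notes) predictions about at-risk students raise sensitivity and privacy questions that a toy example sidesteps entirely.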
In a systematic review on AIEd within the K-12 context (viz., Crompton et al., 2022), prediction was less pronounced in the findings. In the K-12 setting, there was a brief mention of the use of AI in predicting student academic performance; one of the studies mentioned students at risk of dropping out, but this was immediately followed by questions about privacy concerns and a description of the data as “sensitive”. The uses of prediction in this HE systematic review cover a wide range of AI predictive affordances. Sensitivity is still important in an HE setting, but it is positive to see the valuable insight prediction provides, which can be used to keep students from failing in their goals.
AI assistant
The studies evaluated in this review indicated that the AI assistants used to support learners had a variety of different names. This code included nomenclature such as virtual assistant, virtual agent, intelligent agent, intelligent tutor, and intelligent helper. Crompton et al. (2022) described the difference in the terms as delineating the way the AI appeared to the user: for example, whether there was an anthropomorphic presence to the AI, such as an avatar, or whether the AI provided support via other means, such as text prompts. The findings of this systematic review align with Crompton et al.'s (2022) descriptive differences of the AI assistant. Furthermore, this code included studies that provided assistance to students but may not have specifically used the word assistance. These include the use of chatbots for student outreach, answering questions, and providing other assistance. See Fig. 10 for the axial codes for AI assistant.
AI assistant axial codes
Many of these assistants offered multiple supports to students, such as Alex, the AI described as a virtual change agent in Kim and Bennekin's (2016) study. Alex interacted with students in a college mathematics course by asking diagnostic questions and giving support depending on student needs. Alex's support was organized into four stages: (1) goal initiation ("Want it"), (2) goal formation ("Plan for it"), (3) action control ("Do it"), and (4) emotion control ("Finish it"). Alex provided responses depending on which of these four areas students needed help in. These messages supported students with the aim of encouraging persistence in their studies and degree programs and improving performance.
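The four-stage structure of this kind of support lends itself to a simple rule-based sketch. The stage names below come from Kim and Bennekin's study; the diagnostic rules, thresholds, student fields, and messages are invented here for illustration and are not Alex's actual logic.

```python
# Stage names follow Kim and Bennekin (2016); the diagnosis rules and
# messages are hypothetical stand-ins for the agent's actual behavior.
STAGE_MESSAGES = {
    "goal_initiation": "Want it: picture what passing this course makes possible.",
    "goal_formation":  "Plan for it: break the next unit into three short sessions.",
    "action_control":  "Do it: you have been away a while; try one problem today.",
    "emotion_control": "Finish it: setbacks are normal; review what went well so far.",
}

def diagnose(student):
    """Pick the stage of support a student needs (invented rules)."""
    if not student["has_goal"]:
        return "goal_initiation"
    if not student["has_plan"]:
        return "goal_formation"
    if student["days_inactive"] > 7:
        return "action_control"
    if student["frustration"] > 0.7:
        return "emotion_control"
    return None  # no intervention needed

def support_message(student):
    stage = diagnose(student)
    return STAGE_MESSAGES[stage] if stage else "You're on track."

struggling = {"has_goal": True, "has_plan": True,
              "days_inactive": 10, "frustration": 0.4}
print(support_message(struggling))  # an action-control ("Do it") nudge
```

The design point is that diagnosis precedes messaging: the agent first infers which volitional stage is failing, then delivers support matched to that stage rather than a generic encouragement.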
The role of AI in providing assistance connects back to the seminal work of Vygotsky (1978) and the Zone of Proximal Development (ZPD). ZPD highlights the degree to which students can rapidly develop when assisted. Vygotsky described this assistance as typically coming from a person. With technological advancements, however, the AI assistants in these studies are providing that support for students. The affordances of AI can also ensure that the support is timely, without waiting for a person to be available. Moreover, the assistance can take into account students' academic ability, preferences, and the strategies that best support them. These features were evident in Kim and Bennekin's (2016) study using Alex.
Intelligent tutoring system
The use of Intelligent Tutoring Systems (ITS) was revealed in the grounded coding. ITSs are adaptive instructional systems that combine AI techniques with educational methods. An ITS customizes educational activities and strategies based on students' characteristics and needs (Mousavinasab et al., 2021). While ITS may be an anticipated finding in AIEd HE systematic reviews, it is interesting that extant reviews similar to this study did not always describe their use in HE. For example, Ouyang et al. (2022) included "intelligent tutoring system" in their search terms, describing it as a common technique, yet ITS was not mentioned again in the paper. Zawacki-Richter et al. (2019), on the other hand, noted ITS as one of the four overarching findings on the use of AIEd in HE. Chu et al. (2022) then used Zawacki-Richter's four uses of AIEd for their recent systematic review.
In this systematic review, 18 studies specifically mentioned that they were using an ITS. The ITS code did not necessitate axial codes, as the systems all performed the same type of function in HE, namely providing adaptive instruction to students. For example, de Chiusole et al. (2020) developed Stat-Knowlab, an ITS that determines the level of competence and the best learning path for each student. Stat-Knowlab thus personalizes students' learning and provides only educational activities that the student is ready to learn. This ITS is able to monitor the evolution of the learning process as the student interacts with the system. In another study, Khalfallah and Slama (2018) built an ITS called LabTutor for engineering students. LabTutor served as an experienced instructor, enabling students to access and perform experiments on laboratory equipment while adapting to the profile of each student.
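The "ready to learn" idea behind systems like Stat-Knowlab can be sketched as prerequisite filtering: offer only activities that are not yet mastered but whose prerequisites all are. The topic graph below is invented, and the competence-based knowledge space theory underlying the actual system is considerably richer than this.

```python
# Hypothetical prerequisite graph: activity -> activities mastered first.
PREREQS = {
    "counting": set(),
    "addition": {"counting"},
    "multiplication": {"addition"},
    "fractions": {"multiplication"},
}

def ready_to_learn(mastered):
    """Activities not yet mastered whose prerequisites are all mastered."""
    return [a for a, pre in PREREQS.items()
            if a not in mastered and pre <= mastered]

# A student who has mastered counting is ready for addition,
# but not yet for multiplication or fractions.
print(ready_to_learn({"counting"}))  # prints ['addition']
```

As the student masters each activity, the set of offered activities advances along the graph, which is the sense in which such a system "monitors the evolution of the learning process".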
The student population in university classes can run into the hundreds and, with the advent of MOOCs, class sizes can even run into the thousands. Even in a small class of 20 students, the instructor cannot physically provide immediate, unique, personalized questions to each student. Instructors need time to read and check answers, and then further time to provide feedback before determining what the next question should be. Working with the instructor, AIEd can provide that immediate instruction, guidance, feedback, and follow-up questioning without delay or fatigue. This appears to be an effective use of AIEd, especially within the HE context.
Managing student learning
Another code that emerged in the grounded coding focused on the use of AI for managing student learning. Administrators and instructors access AI to manage student learning through the provision of information, organization, and data analysis. The axial codes reveal the trends in the use of AI in managing student learning; see Fig. 11.
Learning analytics was an a priori term often found in the studies, describing "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs" (Long & Siemens, 2011, p. 34). The studies investigated in this systematic review spanned grades and subject areas and provided administrators and instructors with different types of information to guide their work. One of those studies was conducted by Mavrikis et al. (2019), who described learning analytics as teacher-assistance tools. In their study, learning analytics were used in an exploratory learning environment, with targeted visualizations supporting classroom orchestration. These visualizations, displayed as screenshots in the study, provided information such as interactions between students and goal achievement. They appear similar to infographics: brightly colored, drawing the eye quickly to pertinent information. AI is also used for other tasks, such as organizing the sequence of the curriculum in pacing guides for future groups of students and designing instruction. Zhang (2022) described how an AI-driven teaching system for talent cultivation, using digital affordances to establish a quality assurance system for practical teaching, provides new mechanisms for the design of university education systems. In developing such a system, Zhang found that the stability of the instructional design overcame the subjectivity of traditional manual instructional design.
Another trend that emerged from the studies was the use of AI to manage student big data to support learning. Ullah and Hafiz (2022) lament that with traditional methods, including non-AI digital techniques, it is very difficult for an instructor to pay attention to every student's learning progress, and that big data analysis techniques are needed. The ability to look across and within large data sets to inform instruction is a valuable affordance of AIEd in HE. While the use of AIEd to manage student learning emerged from the data, this study uncovered only 19 studies in 7 years (2016–2022) that focused on the use of AIEd to manage student data. This lack of use was also noted in a recent study in the K-12 space (Crompton et al., 2022). In Chu et al.'s (2022) study examining the 50 most-cited AIEd articles, the use of AIEd for managing student data was not reported among the top uses of AIEd in HE. It would appear that more research should be conducted in this area to fully explore the possibilities of AI.
Gaps and future research
From this systematic review, six gaps emerged in the data, providing opportunities for future studies to investigate and provide a fuller understanding of how AIEd can be used in HE. (1) The majority of the research was conducted in high-income countries, revealing a paucity of research in developing countries. More research should be conducted in these countries to expand the level of understanding about how AI can enhance learning in under-resourced communities. (2) Almost 50% of the studies were conducted in the areas of language learning, computer science, and engineering. Research conducted by members of multiple, different academic departments would help to advance knowledge of the use of AI in more disciplines. (3) This study revealed that faculty affiliated with schools of education are taking an increasing role in researching the use of AIEd in HE. As this body of knowledge grows, faculty in schools of education should share their research regarding the pedagogical affordances of AI so that this knowledge can be applied by faculty across disciplines. (4) The vast majority of the research was conducted at the undergraduate level. More research needs to be done at the graduate level, as AI provides many opportunities in this environment. (5) Little research has examined how AIEd can assist instructors and managers in their roles in HE. The power of AI to assist both groups warrants further research. (6) Finally, much of the research investigated in this systematic review revealed the use of AIEd in traditional ways that enhance or make current practices more efficient. More research needs to focus on the unexplored affordances of AIEd. As AI becomes more advanced and sophisticated, new opportunities will arise for AIEd, and researchers need to be at the forefront of these possible innovations.
In addition, empirical exploration is needed for new tools such as ChatGPT, which became available for public use at the end of 2022. Given the time it takes for a peer-reviewed journal article to be published, ChatGPT did not appear in the articles in this study. What is interesting is that it could fit a variety of the use codes found in this study, with students getting support in writing papers and instructors using ChatGPT to assess student work and to help write emails or descriptions for students. It would be pertinent for researchers to explore ChatGPT.
Limitations
The findings of this study show a rapid increase in the number of AIEd studies published in HE. However, to ensure a level of credibility, this study only included peer-reviewed journal articles, which take months to publish. Therefore, conference proceedings and gray literature, such as blogs and summaries, may reveal further findings not explored in this study. In addition, the articles in this study were all published in English, which excluded findings from research published in other languages.
Conclusions
In response to the call by Hinojo-Lucena et al. (2019), Chu et al. (2022), and Zawacki-Richter et al. (2019), this study provides unique findings with an up-to-date examination of the use of AIEd in HE from 2016 to 2022. Past systematic reviews examined the research up to 2020. The findings of this study show that in 2021 and 2022, publications rose to nearly two to three times the number seen in previous years. With this rapid rise in the number of AIEd HE publications, new trends have emerged.
The findings show that of the 138 studies examined, research was conducted on six of the seven continents of the world. Extant systematic reviews showed that the US led by a large margin in the number of studies published; this trend has now shifted to China. Another shift in AIEd HE is that while extant studies lamented the lack of professors of education leading these studies, this systematic review found education to be the most common department affiliation at 28%, with computer science coming in second at 20%. Undergraduate students were the most studied group, at 72%. Similar to the findings of other studies, language learning was the most common subject domain, including writing, reading, and vocabulary acquisition. In examining whom the AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers.
Grounded coding was used to answer the overarching question of how AIEd was used in HE. Five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. Assessment and evaluation had a wide variety of purposes, including assessing academic progress and student emotions towards learning, individual and group evaluations, and class-based online community assessments. Predicting emerged as a code with ten axial codes, as AIEd predicted dropouts and at-risk students, innovative ability, and career decisions. AI assistants were specific to supporting students in HE. These assistants included those with an anthropomorphic presence, such as virtual agents, and persuasive intervention through digital programs. ITSs were not always noted in extant systematic reviews but were specifically mentioned in 18 of the studies in this review. The ITSs in this study provided strategies and approaches customized to students' characteristics and needs. The final code in this study highlighted the use of AI in managing student learning, including learning analytics, curriculum sequencing, instructional design, and clustering of students.
The findings of this study provide a springboard for future academics, practitioners, computer scientists, policymakers, and funders in understanding the state of the field in AIEd HE and how AI is used. The study also provides actionable items to ameliorate gaps in the current understanding. As the use of AIEd will only continue to grow, this study can serve as a baseline for further research on the use of AIEd in HE.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Alajmi, Q., Al-Sharafi, M. A., & Abuali, A. (2020). Smart learning gateways for Omani HEIs towards educational technology: Benefits, challenges and solutions. International Journal of Information Technology and Language Studies, 4 (1), 12–17.
Al-Tuwayrish, R. K. (2016). An evaluative study of machine translation in the EFL scenario of Saudi Arabia. Advances in Language and Literary Studies, 7 (1), 5–10. https://doi.org/10.7575/aiac.alls.v.7n.1p.5
Ayse, T., & Nil, G. (2022). Automated feedback and teacher feedback: Writing achievement in learning English as a foreign language at a distance. The Turkish Online Journal of Distance Education, 23 (2), 120–139.
Baykasoğlu, A., Özbel, B. K., Dudaklı, N., Subulan, K., & Şenol, M. E. (2018). Process mining based approach to performance evaluation in computer-aided examinations. Computer Applications in Engineering Education, 26 (5), 1841–1861. https://doi.org/10.1002/cae.21971
Belur, J., Tompson, L., Thornton, A., & Simon, M. (2018). Interrater reliability in systematic review methodology: Exploring variation in coder decision-making. Sociological Methods & Research, 13 (3), 004912411887999. https://doi.org/10.1177/0049124118799372
Çağataylı, M., & Çelebi, E. (2022). Estimating academic success in higher education using big five personality traits, a machine learning approach. Arabian Journal for Science and Engineering, 47, 1289–1298. https://doi.org/10.1007/s13369-021-05873-4
Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8 , 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510
Chu, H., Tu, Y., & Yang, K. (2022). Roles and research trends of artificial intelligence in higher education: A systematic review of the top 50 most-cited articles. Australasian Journal of Educational Technology, 38 (3), 22–42. https://doi.org/10.14742/ajet.7526
Cristianini, N. (2016). Intelligence reinvented. New Scientist, 232 (3097), 37–41. https://doi.org/10.1016/S0262-4079(16)31992-3
Crompton, H., Bernacki, M. L., & Greene, J. (2020). Psychological foundations of emerging technologies for teaching and learning in higher education. Current Opinion in Psychology, 36 , 101–105. https://doi.org/10.1016/j.copsyc.2020.04.011
Crompton, H., & Burke, D. (2022). Artificial intelligence in K-12 education. SN Social Sciences, 2 , 113. https://doi.org/10.1007/s43545-022-00425-5
Crompton, H., Jones, M., & Burke, D. (2022). Affordances and challenges of artificial intelligence in K-12 education: A systematic review. Journal of Research on Technology in Education . https://doi.org/10.1080/15391523.2022.2121344
Crompton, H., & Song, D. (2021). The potential of artificial intelligence in higher education. Revista Virtual Universidad Católica Del Norte, 62 , 1–4. https://doi.org/10.35575/rvuen.n62a1
de Chiusole, D., Stefanutti, L., Anselmi, P., & Robusto, E. (2020). Stat-Knowlab. Assessment and learning of statistics with competence-based knowledge space theory. International Journal of Artificial Intelligence in Education, 30 , 668–700. https://doi.org/10.1007/s40593-020-00223-1
Dever, D. A., Azevedo, R., Cloude, E. B., & Wiedbusch, M. (2020). The impact of autonomy and types of informational text presentations in game-based environments on learning: Converging multi-channel processes data and learning outcomes. International Journal of Artificial Intelligence in Education, 30 (4), 581–615. https://doi.org/10.1007/s40593-020-00215-1
Górriz, J. M., Ramírez, J., Ortíz, A., Martínez-Murcia, F. J., Segovia, F., Suckling, J., Leming, M., Zhang, Y. D., Álvarez-Sánchez, J. R., Bologna, G., Bonomini, P., Casado, F. E., Charte, D., Charte, F., Contreras, R., Cuesta-Infante, A., Duro, R. J., Fernández-Caballero, A., Fernández-Jover, E., … Ferrández, J. M. (2020). Artificial intelligence within the interplay between natural and artificial computation: Advances in data science, trends and applications. Neurocomputing, 410 , 237–270. https://doi.org/10.1016/j.neucom.2020.05.078
Gough, D., Oliver, S., & Thomas, J. (2017). An introduction to systematic reviews (2nd ed.). Sage.
Gupta, S., & Chen, Y. (2022). Supporting inclusive learning using chatbots? A chatbot-led interview study. Journal of Information Systems Education, 33 (1), 98–108.
Hemingway, P. & Brereton, N. (2009). In Hayward Medical Group (Ed.). What is a systematic review? Retrieved from http://www.medicine.ox.ac.uk/bandolier/painres/download/whatis/syst-review.pdf
Hinojo-Lucena, F., Aznar-Díaz, I., Cáceres-Reche, M., & Romero-Rodríguez, J. (2019). Artificial intelligence in higher education: A bibliometric study on its impact in the scientific literature. Education Sciences, 9 (1), 51. https://doi.org/10.3390/educsci9010051
Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., Jaldemark, J., Ryberg, T., Öberg, L.-M., Fuentes, A., Gustafsson, U., Humble, N., Mozelius, P., Sundgren, M., & Utterberg, M. (2019). Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigital Science and Education, 1 (2), 427–445. https://doi.org/10.1007/s42438-019-00046-x
Huang, C., Wu, X., Wang, X., He, T., Jiang, F., & Yu, J. (2021). Exploring the relationships between achievement goals, community identification and online collaborative reflection. Educational Technology & Society, 24 (3), 210–223.
Hwang, G. J., & Tu, Y. F. (2021). Roles and research trends of artificial intelligence in mathematics education: A bibliometric mapping analysis and systematic review. Mathematics, 9 (6), 584. https://doi.org/10.3390/math9060584
Khalfallah, J., & Slama, J. B. H. (2018). The effect of emotional analysis on the improvement of experimental e-learning systems. Computer Applications in Engineering Education, 27 (2), 303–318. https://doi.org/10.1002/cae.22075
Kim, C., & Bennekin, K. N. (2016). The effectiveness of volition support (VoS) in promoting students’ effort regulation and performance in an online mathematics course. Instructional Science, 44 , 359–377. https://doi.org/10.1007/s11251-015-9366-5
Koć-Januchta, M. M., Schönborn, K. J., Roehrig, C., Chaudhri, V. K., Tibell, L. A. E., & Heller, C. (2022). “Connecting concepts helps put main ideas together”: Cognitive load and usability in learning biology with an AI-enriched textbook. International Journal of Educational Technology in Higher Education, 19 (11), 11. https://doi.org/10.1186/s41239-021-00317-3
Krause, S. D., & Lowe, C. (2014). Invasion of the MOOCs: The promise and perils of massive open online courses . Parlor Press.
Li, D., Tong, T. W., & Xiao, Y. (2021). Is China emerging as the global leader in AI? Harvard Business Review. https://hbr.org/2021/02/is-china-emerging-as-the-global-leader-in-ai
Liang, J. C., Hwang, G. J., Chen, M. R. A., & Darmawansah, D. (2021). Roles and research foci of artificial intelligence in language education: An integrated bibliographic analysis and systematic review approach. Interactive Learning Environments . https://doi.org/10.1080/10494820.2021.1958348
Liu, S., Hu, T., Chai, H., Su, Z., & Peng, X. (2022). Learners’ interaction patterns in asynchronous online discussions: An integration of the social and cognitive interactions. British Journal of Educational Technology, 53 (1), 23–40. https://doi.org/10.1111/bjet.13147
Long, P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and education. Educause Review, 46 (5), 31–40.
Lu, O. H. T., Huang, A. Y. Q., Tsai, D. C. L., & Yang, S. J. H. (2021). Expert-authored and machine-generated short-answer questions for assessing students learning performance. Educational Technology & Society, 24 (3), 159–173.
Mavrikis, M., Geraniou, E., Santos, S. G., & Poulovassilis, A. (2019). Intelligent analysis and data visualization for teacher assistance tools: The case of exploratory learning. British Journal of Educational Technology, 50 (6), 2920–2942. https://doi.org/10.1111/bjet.12876
Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., & Stewart, L. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4 (1), 1–9. https://doi.org/10.1186/2046-4053-4-1
Mousavi, A., Schmidt, M., Squires, V., & Wilson, K. (2020). Assessing the effectiveness of student advice recommender agent (SARA): The case of automated personalized feedback. International Journal of Artificial Intelligence in Education, 31 (2), 603–621. https://doi.org/10.1007/s40593-020-00210-6
Mousavinasab, E., Zarifsanaiey, N., Kalhori, S. R. N., Rakhshan, M., Keikha, L., & Saeedi, M. G. (2021). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 29 (1), 142–163. https://doi.org/10.1080/10494820.2018.1558257
Ouatik, F., Ouatik, F., Fadli, H., Elgorari, A., El Mohadab, M., Raoufi, M., et al. (2021). E-Learning & decision making system for automate students assessment using remote laboratory and machine learning. Journal of E-Learning and Knowledge Society, 17 (1), 90–100. https://doi.org/10.20368/1971-8829/1135285
Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011–2020. Education and Information Technologies, 27 , 7893–7925. https://doi.org/10.1007/s10639-022-10925-9
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T., Mulrow, C., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. British Medical Journal . https://doi.org/10.1136/bmj.n71
Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12 (22), 1–13. https://doi.org/10.1186/s41039-017-0062-8
PRISMA Statement. (2021). PRISMA endorsers. PRISMA statement website. http://www.prisma-statement.org/Endorsement/PRISMAEndorsers
Qian, Y., Li, C.-X., Zou, X.-G., Feng, X.-B., Xiao, M.-H., & Ding, Y.-Q. (2022). Research on predicting learning achievement in a flipped classroom based on MOOCs by big data analysis. Computer Applications in Engineering Education, 30, 222–234. https://doi.org/10.1002/cae.22452
Rutner, S. M., & Scott, R. A. (2022). Use of artificial intelligence to grade student discussion boards: An exploratory study. Information Systems Education Journal, 20 (4), 4–18.
Salas-Pilco, S., & Yang, Y. (2022). Artificial Intelligence application in Latin America higher education: A systematic review. International Journal of Educational Technology in Higher Education, 19 (21), 1–20. https://doi.org/10.1186/S41239-022-00326-w
Saldana, J. (2015). The coding manual for qualitative researchers (3rd ed.). Sage.
Shukla, A. K., Janmaijaya, M., Abraham, A., & Muhuri, P. K. (2019). Engineering applications of artificial intelligence: A bibliometric analysis of 30 years (1988–2018). Engineering Applications of Artificial Intelligence, 85 , 517–532. https://doi.org/10.1016/j.engappai.2019.06.010
Strauss, A., & Corbin, J. (1995). Grounded theory methodology: An overview. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 273–285). Sage.
Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2 (1), 230–265.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59 , 443–460.
Ullah, H., & Hafiz, M. A. (2022). Exploring effective classroom management strategies in secondary schools of Punjab. Journal of the Research Society of Pakistan, 59 (1), 76.
Verdú, E., Regueras, L. M., Gal, E., et al. (2017). Integration of an intelligent tutoring system in a course of computer network design. Educational Technology Research and Development, 65 , 653–677. https://doi.org/10.1007/s11423-016-9503-0
Vygotsky, L. S. (1978). Mind and society: The development of higher psychological processes . Harvard University Press.
Winkler-Schwartz, A., Bissonnette, V., Mirchi, N., Ponnudurai, N., Yilmaz, R., Ledwos, N., Siyar, S., Azarnoush, H., Karlik, B., & Del Maestro, R. F. (2019). Artificial intelligence in medical education: Best practices using machine learning to assess surgical expertise in virtual reality simulation. Journal of Surgical Education, 76 (6), 1681–1690. https://doi.org/10.1016/j.jsurg.2019.05.015
Yang, A. C. M., Chen, I. Y. L., Flanagan, B., & Ogata, H. (2021). Automatic generation of cloze items for repeated testing to improve reading comprehension. Educational Technology & Society, 24 (3), 147–158.
Yao, X. (2022). Design and research of artificial intelligence in multimedia intelligent question answering system and self-test system. Advances in Multimedia . https://doi.org/10.1155/2022/2156111
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16 (1), 1–27. https://doi.org/10.1186/s41239-019-0171-0
Zhang, F. (2022). Design and application of artificial intelligence technology-driven education and teaching system in universities. Computational and Mathematical Methods in Medicine . https://doi.org/10.1155/2022/8503239
Zhang, Z., & Xu, L. (2022). Student engagement with automated feedback on academic writing: A study on Uyghur ethnic minority students in China. Journal of Multilingual and Multicultural Development . https://doi.org/10.1080/01434632.2022.2102175
Acknowledgements
The authors would like to thank Mildred Jones, Katherina Nako, Yaser Sendi, and Ricardo Randall for data gathering and organization.
Author information
Authors and affiliations
Department of Teaching and Learning, Old Dominion University, Norfolk, USA
Helen Crompton
ODUGlobal, Norfolk, USA
Diane Burke
RIDIL, ODUGlobal, Norfolk, USA
Contributions
HC: Conceptualization; Data curation; Project administration; Formal analysis; Methodology; Project administration; original draft; and review & editing. DB: Conceptualization; Data curation; Project administration; Formal analysis; Methodology; Project administration; original draft; and review & editing. Both authors read and approved this manuscript.
Corresponding author
Correspondence to Helen Crompton .
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article
Crompton, H., Burke, D. Artificial intelligence in higher education: the state of the field. Int J Educ Technol High Educ 20 , 22 (2023). https://doi.org/10.1186/s41239-023-00392-8
Received : 30 January 2023
Accepted : 23 March 2023
Published : 24 April 2023
DOI : https://doi.org/10.1186/s41239-023-00392-8
Keywords: Artificial Intelligence; Systematic review; Higher education
The challenges and opportunities of Artificial Intelligence in education
Artificial Intelligence (AI) is producing new teaching and learning solutions that are currently being tested globally. These solutions require advanced infrastructures and an ecosystem of thriving innovators. How does that affect countries around the world, and especially developing nations? Should AI be a priority to tackle in order to reduce the digital and social divide?
These are some of the questions explored in a Working Paper entitled 'Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development', presented by UNESCO and ProFuturo at Mobile Learning Week 2019. It features case studies on how AI technology is helping education systems use data to improve educational equity and quality.
Concrete examples from countries such as China, Brazil and South Africa illustrate AI's contribution to learning outcomes, access to education and teacher support. Case studies from countries including the United Arab Emirates, Bhutan and Chile show how AI is helping with data analytics in education management.
The Paper also explores the curriculum and standards dimension of AI, with examples from the European Union, Singapore and the Republic of Korea on how learners and teachers are preparing for an AI-saturated world.
Beyond the opportunities, the Paper also addresses the challenges and policy implications of introducing AI in education and preparing students for an AI-powered future. The challenges presented revolve around:
- Developing a comprehensive view of public policy on AI for sustainable development : The complexity of the technological conditions needed to advance in this field requires the alignment of multiple factors and institutions. Public policies have to work in partnership at international and national levels to create an ecosystem of AI that serves sustainable development.
- Ensuring inclusion and equity for AI in education : The least developed countries are at risk of suffering new technological, economic and social divides with the development of AI. Some main obstacles such as basic technological infrastructure must be faced to establish the basic conditions for implementing new strategies that take advantage of AI to improve learning.
- Preparing teachers for an AI-powered education : Teachers must learn new digital skills to use AI in a pedagogical and meaningful way, and AI developers must learn how teachers work and create solutions that are sustainable in real-life environments.
- Developing quality and inclusive data systems : If the world is headed towards the datafication of education, the quality of data should be the chief concern. It’s essential to develop state capabilities to improve data collection and systematization. AI developments should be an opportunity to increase the importance of data in educational system management.
- Enhancing research on AI in education : While it can be reasonably expected that research on AI in education will increase in the coming years, it is nevertheless worth recalling the difficulties that the education sector has had in taking stock of educational research in a significant way both for practice and policy-making.
- Dealing with ethics and transparency in data collection, use and dissemination : AI opens many ethical concerns regarding access to education systems, recommendations to individual students, personal data concentration, liability, impact on work, data privacy and ownership of the data feeding algorithms. AI regulation will require public discussion on ethics, accountability, transparency and security.
The key discussions taking place at Mobile Learning Week 2019 address these challenges, offering the international educational community, governments and other stakeholders a unique opportunity to explore together the opportunities and threats of AI in all areas of education.
Science News
How ChatGPT and similar AI will disrupt education
Teachers are concerned about cheating and inaccurate information
Students are turning to ChatGPT for homework help. Educators have mixed feelings about the tool and other generative AI.
By Kathryn Hulick
April 12, 2023 at 7:00 am
“We need to talk,” Brett Vogelsinger said. A student had just asked for feedback on an essay. One paragraph stood out. Vogelsinger, a ninth grade English teacher in Doylestown, Pa., realized that the student hadn’t written the piece himself. He had used ChatGPT.
The artificial intelligence tool, made available for free late last year by the company OpenAI, can reply to simple prompts and generate essays and stories. It can also write code.
Within a week, it had more than a million users. As of early 2023, Microsoft planned to invest $10 billion into OpenAI, and OpenAI’s value had been put at $29 billion, more than double what it was in 2021.
It’s no wonder other tech companies have been racing to put out competing tools. Anthropic, an AI company founded by former OpenAI employees, is testing a new chatbot called Claude. Google launched Bard in early February, and the Chinese search company Baidu released Ernie Bot in March.
A lot of people have been using ChatGPT out of curiosity or for entertainment. I asked it to invent a silly excuse for not doing homework in the style of a medieval proclamation. In less than a second, it offered me: “Hark! Thy servant was beset by a horde of mischievous leprechauns, who didst steal mine quill and parchment, rendering me unable to complete mine homework.”
But students can also use it to cheat. ChatGPT marks the beginning of a new wave of AI, a wave that’s poised to disrupt education.
In a poll by Stanford University’s student-run newspaper at the end of 2022, 17 percent of students said they had used ChatGPT on assignments or exams. Some admitted to submitting the chatbot’s writing as their own. For now, these students and others are probably getting away with it. That’s because ChatGPT often does an excellent job.
“It can outperform a lot of middle school kids,” Vogelsinger says. He might not have known his student had used it, except for one thing: “He copied and pasted the prompt.”
The essay was still a work in progress, so Vogelsinger didn’t see it as cheating. Instead, he saw an opportunity. Now, the student and AI are working together. ChatGPT is helping the student with his writing and research skills.
“[We’re] color-coding,” Vogelsinger says. The parts the student writes are in green. The parts from ChatGPT are in blue. Vogelsinger is helping the student pick and choose a few sentences from the AI to expand on — and allowing other students to collaborate with the tool as well. Most aren’t turning to it regularly, but a few kids really like it. Vogelsinger thinks the tool has helped them focus their ideas and get started.
This story had a happy ending. But at many schools and universities, educators are struggling with how to handle ChatGPT and other AI tools.
In early January, New York City public schools banned ChatGPT on their devices and networks. Educators were worried that students who turned to it wouldn’t learn critical-thinking and problem-solving skills. They also were concerned that the tool’s answers might not be accurate or safe. Many other school systems in the United States and around the world have imposed similar bans.
Keith Schwarz, who teaches computer science at Stanford, said he had “switched back to pencil-and-paper exams,” so students couldn’t use ChatGPT, according to the Stanford Daily.
Yet ChatGPT and its kin could also be a great service to learners everywhere. Like calculators for math or Google for facts, AI can make writing, which often takes time and effort, much faster. With these tools, anyone can generate well-formed sentences and paragraphs. How could this change the way we teach and learn?
Who said what?
When prompted, ChatGPT can craft answers that sound surprisingly like those from a student. We asked middle school and high school students from across the country, all participants in our Science News Learning education program, to answer some basic science questions in two sentences or less. The examples throughout the story compare how students responded with how ChatGPT responded when asked to answer the question at the same grade level.
What effect do greenhouse gases have on the Earth?
Agnes B. | Grade 11, Harbor City International School, Minn.
Greenhouse gases effectively trap heat from dissipating out of the atmosphere, increasing the amount of heat that remains near Earth in the troposphere.
ChatGPT: Greenhouse gases trap heat in the Earth’s atmosphere, causing the planet to warm up and leading to climate change and its associated impacts like sea level rise, more frequent extreme weather events and shifts in ecosystems.
The good, bad and weird of ChatGPT
ChatGPT has wowed its users. “It’s so much more realistic than I thought a robot could be,” says Avani Rao, a sophomore in high school in California. She hasn’t used the bot to do homework. But for fun, she’s prompted it to say creative or silly things. She asked it to explain addition, for instance, in the voice of an evil villain.
Given how well it performs, there are plenty of ways that ChatGPT could level the playing field for students and others working in a second language or struggling with composing sentences. Since ChatGPT generates new, original material, its text is not technically plagiarism.
Students could use ChatGPT like a coach to help improve their writing and grammar, or even to explain subjects they find challenging. “It really will tutor you,” says Vogelsinger, who had one student come to him excited that ChatGPT had clearly outlined a concept from science class.
Educators could use ChatGPT to help generate lesson plans, activities or assessments — perhaps even personalized to address the needs or goals of specific students.
Xiaoming Zhai, an expert in science education at the University of Georgia in Athens, tested ChatGPT to see if it could write an academic paper. He was impressed with how easy it was to summarize knowledge and generate good writing using the tool. “It’s really amazing,” he says.
All of this sounds wonderful, but really big problems exist. Most worrying, ChatGPT and other similar tools can often get things very wrong. They don’t pull facts from databases. Rather, they are trained to generate new text that sounds natural. They remix language without understanding it, which can lead to glaring mistakes.
The news website CNET came under fire earlier this year for using AI to churn out dozens of articles, many of them packed with errors. In an early advertisement for the Bard chatbot, it made a factual error about the James Webb Space Telescope, incorrectly claiming that it had taken the very first picture of an exoplanet. And ChatGPT said in a conversation posted on Twitter that the fastest marine mammal was the peregrine falcon. A falcon, of course, is a bird and doesn’t live in the ocean.
ChatGPT is “confidently wrong,” says Casey Fiesler, an expert in the ethics of technology at the University of Colorado Boulder. “There are mistakes and bad information.” She has made multiple TikTok videos about the pitfalls of ChatGPT.
Most of ChatGPT’s training data come from before September 2021, and it does not provide sources for its information. If asked for sources, it makes them up, Fiesler revealed in one video. Zhai, who sees the tool as an assistant, discovered the same thing. When he asked ChatGPT for citations, it gave him sources that looked correct. But they didn’t actually exist.
Define the term biodiversity.
Dante A. | Grade 10, Clark Magnet High School, Calif.
Biodiversity refers to the variety of living species and ecosystems that exist in a particular region or on the planet as a whole. It encompasses the different genes, species and ecosystems that make up the natural world and the relationships between them.
ChatGPT: Biodiversity refers to the variety of living organisms that inhabit the Earth and the ecosystems they form. It includes the diversity of species, genes and ecosystems, and is important for maintaining the balance of nature and sustaining life on our planet.
How ChatGPT works
ChatGPT’s mistakes make sense if you know how it works. “It doesn’t reason. It doesn’t have ideas. It doesn’t have thoughts,” explains Emily M. Bender, a computational linguist at the University of Washington in Seattle.
ChatGPT was developed using at least two types of machine learning. The primary type is a large language model based on an artificial neural network. Loosely inspired by how neurons in the brain interact, this computing architecture finds statistical patterns in vast amounts of data.
A language model learns to predict what words will come next in a sentence or phrase by churning through vast amounts of text. It places words and phrases into a multidimensional map that represents their relationships to one another. Words that tend to come together, like peanut butter and jelly, end up closer together in this map.
The size of an artificial neural network is measured in parameters. These internal values get tweaked as the model learns. In 2020, OpenAI released GPT-3. At the time, it was the biggest language model ever, containing 175 billion parameters. It had trained on text from the internet as well as digitized books and academic journals. Training text also included transcripts of dialog, essays, exams and more, says Sasha Luccioni, a Montreal-based researcher at Hugging Face, a company that builds AI tools.
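At toy scale, the next-word objective these models learn can be sketched with simple counting. The bigram model below (the corpus and the helper name `predict_next` are invented for illustration) just tallies which word follows which; GPT-3 replaces such raw counts with a 175-billion-parameter neural network, but the underlying prediction task is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text GPT-3 trained on.
corpus = "peanut butter and jelly . peanut butter and toast . bread and jelly"
tokens = corpus.split()

# Count which word follows which: the simplest possible "language model."
counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("butter"))  # prints "and"
print(predict_next("and"))     # prints "jelly" (seen twice, vs. "toast" once)
```

Words that tend to appear together, like "peanut" and "butter," dominate the counts, which is the crude version of those words landing close together in a real model's multidimensional map.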
OpenAI improved upon GPT-3 to create GPT-3.5. In early 2022, the company released a fine-tuned version of GPT-3.5 called InstructGPT. This time, OpenAI added a new type of machine learning. Called reinforcement learning from human feedback, it puts people into the training process. These workers check the AI’s output. Responses that people like get rewarded. Human feedback can also help reduce hurtful, biased or inappropriate responses. This fine-tuned language model powers the freely available ChatGPT. As of March, paying users receive answers powered by GPT-4, a bigger language model.
During ChatGPT’s development, OpenAI added extra safety rules to the model. It will refuse to answer certain sensitive prompts or provide harmful information. But this step raises another issue: Whose values are programmed into the bot, including what it is — or is not — allowed to talk about?
OpenAI is not offering exact details about how it developed and trained ChatGPT. The company has not released its code or training data. This disappoints Luccioni because it means the tool can’t benefit from the perspectives of the larger AI community. “I’d like to know how it works so I can understand how to make it better,” she says.
When asked to comment on this story, OpenAI provided a statement from an unnamed spokesperson. “We made ChatGPT available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems,” the statement said. “We are constantly incorporating feedback and lessons learned.” Indeed, some experimenters have gotten the bot to say biased or inappropriate things despite the safety rules. OpenAI has been patching the tool as these problems come up.
ChatGPT is not a finished product. OpenAI needs data from the real world. The people who are using it are the guinea pigs. Notes Bender: “You are working for OpenAI for free.”
What are black holes and where are they found?
Althea C. | Grade 11, Waimea High School, Hawaii.
A black hole is a place in space where gravity is so strong that nothing, not even light, may come out.
ChatGPT: Black holes are extremely dense regions in space where the gravity is so strong that not even light can escape, and they are found throughout the universe.
ChatGPT’s academic performance
How good is ChatGPT in an academic setting? Catherine Gao, a doctor and medical researcher at Northwestern University’s Feinberg School of Medicine in Chicago, is part of one team of researchers that is putting the tool to the test.
Gao and her colleagues gathered 50 real abstracts from research papers in medical journals and then, after providing the titles of the papers and the journal names, asked ChatGPT to generate 50 fake abstracts. The team asked people familiar with reading and writing these types of research papers to identify which were which.
“I was surprised by how realistic and convincing the generated abstracts were,” Gao says. The reviewers mistook roughly one-third of the AI-generated abstracts as human-generated.
In another study, Will Yeadon and colleagues tested whether AI tools could pass a college exam. Yeadon, a physics instructor at Durham University in England, picked an exam from a course that he teaches. The test asks students to write five short essays about physics and its history. Students have an average score of 71 percent, which he says is equivalent to an A in the United States.
Yeadon used the tool davinci-003, a close cousin of ChatGPT. It generated 10 sets of exam answers. Then Yeadon and four other teachers graded the answers using their typical standards. The AI also scored an average of 71 percent. Unlike the human students, though, it had no very low or very high marks. It consistently wrote well, but not excellently. For students who regularly get bad grades in writing, Yeadon says, it “will write a better essay than you.”
These graders knew they were looking at AI work. In a follow-up study, Yeadon plans to use work from the AI and students and not tell the graders whose is whose.
What is heat?
Precious A. | Grade 6, Canyon Day Junior High School, Ariz.
Heat is the transfer of kinetic energy from one medium or object to another, or from an energy source to a medium or object through radiation, conduction and convection.
ChatGPT: Heat is a type of energy that makes things warmer. It can be produced by burning something or through electricity.
Tools to check for cheating
When it’s unclear whether ChatGPT wrote something or not, other AI tools may help. These tools typically train on AI-generated text and sometimes human-generated text as well. They can tell you how likely it is that text was composed by an AI. Many of the existing tools were trained on older language models, but developers are working quickly to put out new, improved tools.
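Detection tools like these treat authorship as a text-classification problem. As a toy illustration only (a tiny Naive Bayes word-count model over a made-up two-snippet training set; real detectors train much larger neural classifiers on millions of samples), the idea looks like this:

```python
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs, label 'ai' or 'human'.
    Returns per-label word counts."""
    counts = {"ai": Counter(), "human": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the label whose training words best explain `text`.
    Add-one smoothing keeps unseen words from vetoing a label outright."""
    scores = {}
    for label, c in counts.items():
        total, vocab = sum(c.values()), len(c)
        scores[label] = sum(
            math.log((c[w] + 1) / (total + vocab)) for w in text.lower().split()
        )
    return max(scores, key=scores.get)

# Hypothetical labeled snippets; a real tool would use huge corpora.
model = train([
    ("the cat sat on the mat", "human"),
    ("as an ai language model i cannot", "ai"),
])
print(classify("i cannot answer", model))  # prints "ai"
print(classify("the cat sat", model))      # prints "human"
```

The log-probability scores also give the "how likely" number such tools report, rather than just a hard yes-or-no verdict.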
A company called Originality.ai sells access to a tool that trained on GPT-3. Founder Jon Gillham says that in a test of 10,000 samples of texts composed by models based on GPT-3, the tool tagged 94 percent of them correctly as AI-generated. When ChatGPT came out, his team tested a smaller set of 20 samples. Each only 500 words in length, these had been created by ChatGPT and other models based on GPT-3 and GPT-3.5. Here, Gillham says, the tool “tagged all of them as AI-generated. And it was 99 percent confident, on average.”
In late January 2023, OpenAI released its own free tool for spotting AI writing, cautioning that the tool was “not fully reliable.” The company is working to add watermarks to its AI text, which would tag the output as machine-generated, but doesn’t give details on how. Gillham describes one possible approach: Whenever it generates text, the AI ranks many different possible words for each position. If its developers told it to always choose the word ranked in third place rather than first place at specific points in its output, those words could act as a fingerprint, he says.
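Gillham’s ranked-word fingerprint can be sketched in a few lines of Python. Everything here is hypothetical (the candidate lists, the secret positions, the choice of third rank); a real watermark would derive positions and rankings from the model itself:

```python
def generate(ranked_candidates, secret_positions, rank=2):
    """Emit the top-ranked word normally, but at the agreed secret
    positions emit the third-ranked word (index 2) as a fingerprint."""
    return [
        cands[rank] if i in secret_positions else cands[0]
        for i, cands in enumerate(ranked_candidates)
    ]

def is_watermarked(words, ranked_candidates, secret_positions, rank=2):
    """A detector that knows the secret checks each marked position."""
    return all(words[i] == ranked_candidates[i][rank] for i in secret_positions)

# Hypothetical ranked word candidates the model might produce per position.
ranked = [
    ["the", "a", "one"],
    ["cat", "dog", "fox"],
    ["sat", "ran", "leapt"],
    ["down", "here", "there"],
]
secret = {1, 3}  # shared in advance between generator and detector

text = generate(ranked, secret)
print(text)                                   # ['the', 'fox', 'sat', 'there']
print(is_watermarked(text, ranked, secret))   # True
```

To a reader the output still looks like ordinary text; only someone who knows the secret positions can check for the fingerprint.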
As AI writing tools improve, the tools to sniff them out will need to improve as well. Eventually, some sort of watermark might be the only way to sort out true authorship.
What is DNA and how is it organized?
Luke M. | Grade 8, Eastern York Middle School, Pa.
DNA, or deoxyribonucleic acid, is kept inside the cells of living things, where it holds instructions for the genetics of the organism it is inhabiting.
ChatGPT: DNA is like a set of instructions that tells our cells what to do. It’s organized into structures called chromosomes, which contain all of the DNA in a cell.
ChatGPT and the future of writing
There’s no doubt we will soon have to adjust to a world in which computers can write for us. But educators have made these sorts of adjustments before. As high school student Rao points out, Google was once seen as a threat to education because it made it possible to look up facts instantly. Teachers adapted by coming up with teaching and testing materials that don’t depend as heavily on memorization.
Now that AI can generate essays and stories, teachers may once again have to rethink how they teach and test. Rao says: “We might have to shift our point of view about what’s cheating and what isn’t.”
Some teachers will prevent students from using AI by limiting access to technology. Right now, Vogelsinger says, teachers regularly ask students to write out answers or essays at home. “I think those assignments will have to change,” he says. But he hopes that doesn’t mean kids do less writing.
Teaching students to write without AI’s help will remain essential, agrees Zhai. That’s because “we really care about a student’s thinking,” he stresses. And writing is a great way to demonstrate thinking. Though ChatGPT can help a student organize their thoughts, it can’t think for them, he says.
Kids still learn to do basic math even though they have calculators (which are often on the phones they never leave home without), Zhai acknowledges. Once students have learned basic math, they can lean on a calculator for help with more complex problems.
In the same way, once students have learned to compose their thoughts, they could turn to a tool like ChatGPT for assistance with crafting an essay or story. Vogelsinger doesn’t expect writing classes to become editing classes, where students brush up AI content. He instead imagines students doing prewriting or brainstorming, then using AI to generate parts of a draft, and working back and forth to revise and refine from there.
Though he’s overwhelmed by the prospect of having to adapt his teaching to another new technology, he says he is “having fun” figuring out how to navigate the new tech with his students.
Rao doesn’t see AI ever replacing stories and other texts generated by humans. Why? “The reason those things exist is not only because we want to read it but because we want to write it,” she says. People will always want to make their voices heard.
MIT News | Massachusetts Institute of Technology
Helping students of all ages flourish in the era of artificial intelligence
A new cross-disciplinary research initiative at MIT aims to promote the understanding and use of AI across all segments of society. The effort, called Responsible AI for Social Empowerment and Education ( RAISE ), will develop new teaching approaches and tools to engage learners in settings from preK-12 to the workforce.
“People are using AI every day in our workplaces and our private lives. It’s in our apps, devices, social media, and more. It’s shaping the global economy, our institutions, and ourselves. Being digitally literate is no longer enough. People need to be AI-literate to understand the responsible use of AI and create things with it at individual, community, and societal levels,” says RAISE Director Cynthia Breazeal, a professor of media arts and sciences at MIT.
“But right now, if you want to learn about AI to make AI-powered applications, you pretty much need to have a college degree in computer science or related topic,” Breazeal adds. “The educational barrier is still pretty high. The vision of this initiative is: AI for everyone else — with an emphasis on equity, access, and responsible empowerment.”
Headquartered in the MIT Media Lab, RAISE is a collaboration with the MIT Schwarzman College of Computing and MIT Open Learning. The initiative will engage in research coupled with education and outreach efforts to advance new knowledge and innovative technologies to support how diverse people learn about AI as well as how AI can help to better support human learning. Through Open Learning and the Abdul Latif Jameel World Education Lab (J-WEL), RAISE will also extend its reach into a global network where equity and justice are key.
The initiative draws on MIT’s history as both a birthplace of AI technology and a leader in AI pedagogy. “MIT already excels at undergraduate and graduate AI education,” says Breazeal, who heads the Media Lab’s Personal Robots group and is an associate director of the Media Lab. “Now we’re building on those successes. We’re saying we can take a leadership role in educational research, the science of learning, and technological innovation to broaden AI education and empower society writ large to shape our future with AI.”
In addition to Breazeal, RAISE co-directors are Hal Abelson, professor of computer science and education; Eric Klopfer, professor and director of the Scheller Teacher Education Program; and Hae Won Park, a research scientist at the Media Lab. Other principal leaders include Professor Sanjay Sarma, vice president for open learning. RAISE draws additional participation from dozens of faculty, staff, and students across the Institute.
“In today’s rapidly changing economic and technological landscape, a core challenge nationally and globally is to improve the effectiveness, availability, and equity of preK-12 education, community college, and workforce development. AI offers tremendous promise for new pedagogies and platforms, as well as for new content. Developing and deploying advances in computing for the public good is core to the mission of the Schwarzman College of Computing, and I’m delighted to have the College playing a role in this initiative,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing.
The new initiative will engage in research, education, and outreach activities to advance four strategic impact areas: diversity and inclusion in AI, AI literacy in preK-12 education, AI workforce training, and AI-supported learning. Success entails that new knowledge, materials, technological innovations, and programs developed by RAISE are leveraged by other stakeholder AI education programs across MIT and beyond to add value to their efficacy, experience, equity, and impact.
RAISE will develop AI-augmented tools to support human learning across a variety of topics. “We’ve done a lot of work in the Media Lab around companion AI,” says Park. “Personalized learning companion AI agents such as social robots support individual students’ learning and motivation to learn. This work provides an effective and safe space for students to practice and explore topics such as early childhood literacy and language development.”
Diversity and inclusion will be embedded throughout RAISE’s work, to help correct historic inequities in the field of AI. “We're seeing story after story of unintended bias and inequities that are arising because of these AI systems,” says Breazeal. “So, a mission of our initiative is to educate a far more diverse and inclusive group of people in the responsible design and use of AI technologies, who will ultimately be more representative of the communities they will be developing these products and services for.”
This spring, RAISE is piloting a K-12 outreach program called Future Makers. The program brings engaging, hands-on learning experiences about AI fundamentals and critical thinking about societal implications to teachers and students, primarily from underserved or under-resourced communities, such as schools receiving Title I services.
To bring AI to young people within and beyond the classroom, RAISE is developing and distributing curricula, teacher guides, and student-friendly AI tools that enable anyone, even those with no programming background, to create original applications for desktop and mobile computing. “ Scratch and App Inventor are already in the hands of millions of learners worldwide,” explains Abelson. “RAISE is enhancing these platforms and making powerful AI accessible to all people for increased creativity and personal expression.”
Ethics and AI will be a central component to the initiative’s curricula and teaching tools. “Our philosophy is, have kids learn about the technical concepts right alongside the ethical design practices,” says Breazeal. “Thinking through the societal implications can’t be an afterthought.”
“AI is changing the way we interact with computers as consumers as well as designers and developers of technology,” Klopfer says. “It is creating a new paradigm for innovation and change. We want to make sure that all people are empowered to use this technology in constructive, creative, and beneficial ways.”
“Connecting this initiative not only to [MIT’s schools of] engineering and computing, but also to the School of Humanities, Arts and Social Sciences recognizes the multidimensional nature of this effort,” Klopfer adds.
Sarma says RAISE also aims to boost AI literacy in the workforce, in part by adapting some of their K-12 techniques. “Many of these tools — when made somewhat more sophisticated and more germane to the adult learner — will make a tremendous difference,” says Sarma. For example, he envisions a program to train radiology technicians in how AI programs interpret diagnostic imagery and, vitally, how they can err.
“AI is having a truly transformative effect across broad swaths of society,” says Breazeal. “Children today are not only digital natives, they’re AI natives. And adults need to understand AI to be able to engage in a democratic dialogue around how we want these systems deployed.”
Press mentions
Prof. Eric Klopfer, co-director of the RAISE initiative (Responsible AI for Social Empowerment and Education), speaks with GBH reporter Diane Adame about the importance of providing students guidance on navigating artificial intelligence systems. “I think it's really important for kids to be aware that these things exist now, because whether it's in school or out of school, they are part of systems where AI is present,” says Klopfer. “Many humans are biased. And so the [AI] systems express those same biases that they've seen online and the data that they've collected from humans.”
The New York Times
New York Times reporter Natasha Singer spotlights the Day of AI, an MIT RAISE program aimed at teaching K-12 students about AI. “Because AI is such a powerful new technology, in order for it to work well in society, it really needs some rules,” said MIT President Sally Kornbluth. Prof. Cynthia Breazeal, MIT’s dean of digital learning, added: “We want students to be informed, responsible users and informed, responsible designers of these technologies.”
Artificial Intelligence And Higher Education
The discussion of AI and its place in Higher Ed is on the table nationwide. Higher education institutions are joining companies and organizations of all kinds in embracing artificial intelligence for educators and students alike.
First things first, what is AI? Thanks to media influences, consumers have varying ideas of what AI is and what it does. AI refers to intelligent machines and systems that perform tasks or solve problems that would otherwise require human intelligence, such as playing games or understanding natural language.
So, how will AI benefit Higher Education?
Streamlining Administrative Processes
Colleges and universities use artificial intelligence tools to power student record systems, transportation, information technology (IT), maintenance, scheduling, budgeting, and more.
These tools are also used to interpret data on recruitment, admission, and retention efforts to predict whether students are in danger of failing or dropping a course. As a result, faculty and staff are alerted to potential problems and can support students before those problems escalate.
Furthermore, many schools are implementing artificial intelligence chatbots to address student questions about financial aid, advising, and career opportunities. This around-the-clock service provides additional answers and support to students in need outside of regular classroom or business hours.
Personalized Teaching and Learning
Artificial intelligence tools for teaching and learning, such as virtual tours and virtual teaching assistants, are used to further personalize a student's education experience.
These virtual learning experiences offered by colleges and universities are readily available and allow students with different needs to receive information at their own personalized pace.
On the teaching side, professors and educators are beginning to use artificial intelligence tools to create all kinds of content while also detecting plagiarism.
Research & Data
Higher education institutions use artificial intelligence in research, applying tools that sort through large datasets to identify patterns, build models, recommend relevant articles, and prepare manuscripts for publication. In doing so, faculty and staff are better equipped to make informed and effective decisions.
How is Artificial Intelligence Creating a More Positive College Experience?
Artificial intelligence opens the door for more inclusion, access, and support for students, professors, and leadership in Higher Ed institutions.
There are varying opinions about AI and its place in higher education. Rightfully so, as there are valid concerns. Concerns such as overall ethics, copyright issues, intellectual property, and biases within the training data are continuously being addressed.
Artificial intelligence isn’t going away. Like any other technology, tool, or resource, Higher Ed can implement AI in a way that best fits each institution and its needs, utilizing it as a resource to improve student experiences and the overall efficiency of the school’s operations.
Integrating or implementing AI into higher education institutions is not a futuristic vision but a present certainty. Many colleges and universities are preparing students, faculty, and staff for some type of AI-infused future.
The Best Feature
Artificial intelligence can help students choose the right college, learn more efficiently, graduate on time, and enter the job market feeling confident and prepared. Who doesn’t want that?!
Written By: Meredith Biesinger, Professional Writer/Education Specialist
Meredith Biesinger is a licensed dyslexia therapist in Mississippi, in addition to being an experienced classroom teacher and K-12 administrator. Meredith also works as a consultant, bridging the gap between K-12 school districts and ed-tech organizations. With a passion for literacy, she is also a professional writer and syndicated author. With an M.Ed in Educational Leadership and a B.S. in English Education and Creative Writing, she has had rich and diverse opportunities to teach students and education professionals in different parts of the country as well as overseas.
Back-to-school for higher education sees students and professors grappling with AI in academia
America’s colleges have an AI dilemma, but students are embracing it.
As millions of students return to school this fall, ABC News spoke with students and professors learning to navigate the influence of generative artificial intelligence.
ChatGPT, which launched in November 2022, is described on its website as an AI-powered language model, "capable of generating human-like text based on context and past conversations."
At the University of California, Davis, senior Andrew Yu found himself using AI to help outline an eight-page paper for his poetry class. It needed to be in the style of an academic memorandum, which Yu had never written before -- so he turned to ChatGPT to help him visualize the project.
"I think it's kind of ironic or it's a really funny thing because I'm an English major," Yu told ABC News.
MORE: Amid spread of AI tools, new digital standard would help users tell fact from fiction
Yu says he is careful to use ChatGPT technology to template and structure his assignments but not go beyond that. "I feel like it's not authentic to me in the way that I write, so I just use it as a skeleton, like an outline," he said.
"Sometimes we get a little stuck and need extra help," said Eneesa Abdullah-Hudson, a senior at Morgan State University, a historically Black college in Baltimore. "So just having this tool here to help guide us, so we can add our feedback with it, it's definitely helpful."
Among U.S. adults who have heard of ChatGPT, only 41% between ages 18 and 29 have used it, according to Pew Research data. Now, professors across the country are learning alongside their students about the risks and rewards of generative AI in education.
University of Florida Warrington College of Business Professor Dr. Joel Davis said AI innovations like ChatGPT could be used as a tool for core language arts subjects.
"There's a natural progression," Davis, who researches the integration of generative AI solutions, told ABC News. "New tools like the calculator, like Grammarly and editing tools that came out a number of years ago that made all of our writing better, including mine, right? Those are things that are just going to keep on coming. And, we can't stop them from coming, but it's up to us to decide how to integrate them appropriately."
Despite using AI himself, Yu says he is cautious and concerned by the quality of AI responses.
MORE: As acceptance rates decline, students bill themselves as admissions experts
"It's definitely like a risk," Yu said, adding "I feel like it can be beneficial. You just have to use it responsibly and make sure that the majority of your wording is you, and none of it is by the AI."
Does AI promote or detect cheating?
Most schools are embracing the novelty of generative AI use by teachers and students, but some faculty are already facing the issue of it interfering with assignments in their classrooms.
Davis demoed AI's abilities with his own tests.
"It does fairly well on my exams," he said. "It can recite those answers pretty well, but I don't view that as a ChatGPT problem; that's my issue."
Furman University Professor Darren Hick had a hunch that one of his students used AI to write a final paper last December.
Hick said the student sat in his Greenville, South Carolina, office, hyperventilating, before confessing to using ChatGPT.
"We're not ready for it [AI chatbot cheating]," Hick, who gave this student a failing grade, told ABC News. "We're not prepared to deal with this. It's just harder to catch. That's always been my concern."
Furman's current definition of plagiarism is "copying word for word from another source without proper attribution." These instances, when AI use is detected, could be considered "inappropriate collaboration or cheating," a school spokesperson said.
An OpenAI spokesperson also said the company that powers ChatGPT has always called for "transparency" around generative AI use. Davis, at the University of Florida, told ABC News that assessments need to change if educators are worried about students using chatbots to cheat.
But Jessica Zimny, a junior at Midwestern State University in Wichita Falls, Texas, told ABC News she earned perfect scores on her political science discussion posts until her account was flagged for AI-assistance detection.
"I noticed that when I logged in, it said that I had a zero for that assignment, and to the right of it was a note stating that Turnitin detected AI use on my assignment," Zimny said.
Turnitin, a popular resource used by schools to check for plagiarism within student assignments, searches text for signs it was generated by AI. When AI detection is indicated, the company recommends that teachers have conversations with their students and this will usually "resolve" the issue one way or another.
Even though Zimny claimed she didn't use AI, she said her professor failed her because the program flagged her assignment.
"It's just really frustrating," Zimny said. "I just hate the fact that there are actually people out there that do use AI and do cheat to where you have to get to the point where there have to be detectors that are made that can falsely accuse people who aren't in the wrong."
ABC News reached out to Midwestern State for a comment on Zimny's failing grade. Interim Provost Dr. Marcy Brown Marsden said she's unable to speak directly to a student's academic record or appeals due to FERPA regulations. On the school's public directory, it says if Turnitin.com detects that an assignment was completed using AI, the student will be given a grade of zero for that assignment.
However, Turnitin leaves the final decision-making to the instructor.
"Teachers should be using similarity reports and AI reports as resources, not deciders," Annie Chechitelli, chief product officer at Turnitin, told ABC News in a statement.
Aside from plagiarism concerns, there is also worry about the security of data entered by students and teachers into generative AI tools.
"It is indeed entirely possible, and indeed likely, that bad actors will use open source large language models - which students may well use – to obtain sensitive personal data, for the purposes of targeting advertising, blackmail, and so forth," "Rebooting AI" author Gary Marcus told ABC News.
New York City Public Schools placed ChatGPT on its list of restricted websites, though it is still accessible if schools request it. However, in a statement on Chalkbeat, Chancellor of New York City Public Schools David C. Banks said, "The knee-jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial."
Is AI avoidable?
Generative AI has become the new frontier for educators and students to work around. Some universities said there's no one-size-fits-all approach while others have strict guidelines to combat unethical usage.
Carnegie Mellon University's (CMU) Academic Integrity Policy prohibits "unauthorized assistance," which would include generative AI tools unless explicitly permitted by the instructor, according to a recent letter published by the school's leaders. Most colleges -- like CMU -- still embrace the novelty of generative AI use by teachers and students.
"I think for probably the first time, nine months ago when OpenAI put out the ChatGPT, it [AI] sort of allowed an average person to kind of touch and feel it, to explore it, to try it," AI Scholar and CMU Professor Rayid Ghani told ABC News. "It wasn't something we could touch and feel and do. We weren't using it ourselves."
Abdullah-Hudson said she uses ChatGPT to check her work. "It's just like an extra helping tool."
And OpenAI invites teachers to use its technology: the company's Teaching with AI page suggests prompts to help teachers come up with lesson plans. Abdullah-Hudson, for her part, believes AI is here to stay.
"Living in the world today, there's no way around it," Abdullah-Hudson said. "So there's no way to avoid the AI, might as well learn to use it instead of being afraid of it."
How K–12 Schools Can Use Artificial Intelligence in Education
Alexandra Frost is a Cincinnati-based freelance journalist, content marketing writer, copywriter and editor published in The Washington Post, Glamour, Shape, Today’s Parent, Reader’s Digest, Parents, Women’s Health and Insider. Alex has a Master of Arts in Teaching degree and a bachelor’s degree in mass communications/journalism, both from Miami University near Cincinnati. She also taught high school for 10 years, specializing in media education.
What role will artificial intelligence play in the future of education? For educators, AI can feel like an exciting development — or a terrifying unknown.
AI technology is advancing quickly and creating solutions once thought impossible. It’s widely available in various technologies and, in many places, already being integrated into the classroom. The pandemic spurred the development of educational technology, including AI, out of necessity. Suddenly, educators needed ways to obtain more information virtually.
“We were starting to work on AI during the pandemic, but it sped up because there was a huge demand for it,” says Mike Tholfsen, principal group product manager at Microsoft Education. “All these things were happening online, and teachers were saying, ‘I don’t know what’s happening in my classroom anymore.’”
With educators busier than ever, Tholfsen says, the greatest benefit AI can offer them is time. AI programs can collect data that teachers would traditionally have to gather manually.
What Is Artificial Intelligence?
Trying to define artificial intelligence is a bit like asking about the meaning of life: You will get a slightly different answer from everyone. At its core, AI is an area of computer science addressing the simulation of intelligent behavior in computers.
Michelle Zimmerman, a classroom teacher, researcher and school leader at Renton Prep Christian School in Washington state and author of the book Teaching AI: Exploring New Frontiers for Learning, notes that psychologists and neurologists in the field don’t even agree on what counts as human intelligence.
The definition also changes over time. Not too long ago, simple calculators were considered AI, while the term now is associated with a variety of innovative technologies, such as those that power content filtering and endpoint security.
Artificial Intelligence vs. Machine Learning: What’s the Difference?
Though not all AI involves machine learning, it is a popular subcategory of the technology. Machine learning refers to machines that process vast amounts of data and also have the capacity to get better at it the more they “learn,” Zimmerman says.
“You can train models with machine learning to improve things. An example is speech-to-text technology,” Tholfsen says.
“Machine learning needs a lot of data to train it to look for patterns and understand what it is looking for. The more data, the more refined or accurate the results. The results, though, are only as good as the data included,” Zimmerman says.
How Can AI Be Used in K–12 Education?
AI is already playing a role in many classrooms and has promising benefits that can be integrated now and in the future.
Intelligent tutors: What if an AI program could play the role of a teacher or coach, leading students through lessons and even motivating them? Nancye Black, founder of the Block Uncarved and project lead for ISTE’s AI Explorations program, says AI can support learners in a variety of ways. As a Columbia University researcher, she’s exploring how avatar interactions impact students. “There is some really promising research around the use of AI agents supporting girls and students of color, who are able to — in a lower-risk situation — ask for help and have social learning, even when they are learning independently,” Black says.
Reading workshops: If educators could host reading workshops around the classroom with each individual student, they would. Instead, AI-powered products such as Microsoft’s Immersive Reader can help educators focus on improving education for the 1 in 7 learners who have a disability, Tholfsen says. The product uses text decoding solutions to individualize instruction.
Translation capabilities: Translation technology is improving quickly, and these tools include more dialects and language nuances every day. A teacher in New York, for example, used AI technology to host a virtual parent night for families who speak multiple different languages, Tholfsen says. Microsoft Translate allows the teacher to generate a code, which broadcasts to everyone connecting to the stream. It translates the speaker’s language into listeners’ languages without the necessity of a human interpreter. “Listeners can type or speak back in their languages, and it cross-translates, so when you type back in Spanish, it goes to me in English, translates to Mike in Italian, and to the person speaking Arabic or Chinese,” Tholfsen says. “It’s like the Star Trek universal translator.”
Low-vision accessibility: Accessibility checkers are helping educators increase access for low-vision students . “We use AI and computer vision to identify what is in an image and generate a caption,” Tholfsen says. “It’s a massive timesaver to do auto-captioning on images, so people are much more likely to make their content accessible.”
The implementation of AI tools won’t replace educators but will instead help them save time. The tech can be customized to fit any classroom, putting educators in control of the AI tools — not the other way around.
A.I. Here, There, Everywhere
Many of us already live with artificial intelligence now, but researchers say interactions with the technology will become increasingly personalized.
By Craig S. Smith
This article is part of our new series, Currents, which examines how rapid advances in technology are transforming our lives.
I wake up in the middle of the night. It’s cold.
“Hey, Google, what’s the temperature in Zone 2,” I say into the darkness. A disembodied voice responds: “The temperature in Zone 2 is 52 degrees.” “Set the heat to 68,” I say, and then I ask the gods of artificial intelligence to turn on the light.
Many of us already live with A.I., an array of unseen algorithms that control our Internet-connected devices, from smartphones to security cameras and cars that heat the seats before you’ve even stepped out of the house on a frigid morning.
But, while we’ve seen the A.I. sun, we have yet to see it truly shine.
Researchers liken the current state of the technology to cellphones of the 1990s: useful, but crude and cumbersome. They are working on distilling the largest, most powerful machine-learning models into lightweight software that can run on “the edge,” meaning small devices such as kitchen appliances or wearables. Our lives will gradually be interwoven with brilliant threads of A.I.
Our interactions with the technology will become increasingly personalized. Chatbots, for example, can be clumsy and frustrating today, but they will eventually become truly conversational, learning our habits and personalities and even developing personalities of their own. But don’t worry: the fever dreams of superintelligent machines taking over, like HAL in “2001: A Space Odyssey,” will remain science fiction for a long time to come; consciousness, self-awareness and free will in machines are far beyond the capabilities of science today.
Privacy remains an issue, because artificial intelligence requires data to learn patterns and make decisions. But researchers are developing methods to use our data without actually seeing it — so-called federated learning, for example — or to encrypt it in ways that currently can’t be hacked.
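The idea behind federated learning can be sketched in a few lines. The toy example below (an invented one-weight model and made-up client data, not any production system) shows the key property: each simulated "client" trains on its own private data, and the server only ever sees and averages the returned model weights.

```python
def local_train(weight, data, lr=0.02, steps=20):
    """One client's local update: gradient descent on squared error
    for the toy model y = weight * x, using only this client's data."""
    for _ in range(steps):
        grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
        weight -= lr * grad
    return weight

# Each client's private data roughly follows y = 3x and never leaves
# the "device" -- only trained weights are shared with the server.
clients = [
    [(1.0, 3.1), (2.0, 5.9)],
    [(1.0, 2.8), (3.0, 9.2)],
    [(2.0, 6.1), (4.0, 11.8)],
]

global_weight = 0.0
for _ in range(5):  # five rounds of federated averaging
    local_weights = [local_train(global_weight, d) for d in clients]
    # The server averages the weights; it never sees any raw data.
    global_weight = sum(local_weights) / len(local_weights)

print(round(global_weight, 2))  # converges near 3
```

The global model ends up close to the pattern shared across clients even though no client's data ever left its own list.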
August 13, 2024
Study explores the transformation of educational system with the advent of AI
by University of Kansas
The advent of artificial intelligence (AI) presents several new and exciting opportunities for improving the quality of education. While several ways of integrating AI into schooling have been explored, only a few of them consider changing the traditional school operations and educational practices.
In a recent article published on 23 July 2024 in the ECNU Review of Education, Professor Yong Zhao from the University of Kansas explored more radical changes that may be applied to traditional schooling to fully utilize the potential of AI technology for benefiting students.
"Numerous publications have appeared, all trying to suggest, recommend, and predict the future of AI uses in education. However, most of the discussions, regardless of their scholarly quality, are primarily focused on using AI in the traditional arrangement of schools," explains Professor Zhao.
"The assumption is that everything the traditional school has operated with shall remain the same. AI tools, according to most of the advice, are to be incorporated into teaching by teachers just like previous technologies."
By taking a broader view of the changes that could be applied to the current schooling system, the article not only suggested better ways of utilizing the potential of AI technology but also explored how AI can be leveraged to support personalized learning. The article notes that although personalized learning has clearly demonstrated benefits, it has not been widely implemented in schools in its true sense. AI tools present a unique opportunity to implement personalized learning customized to individual student needs, harnessing each student's unique talents and potential.
Traditional schooling systems aim to create members of the workforce. However, AI has disrupted the job market, eliminating traditional career roles and creating new ones. In the article, Professor Zhao noted that focusing and building on each child's innate talents and unique strengths is essential for them to be successful in any career of their choice. Since any talent, when sufficiently honed, is valuable in the age of AI, he argued that educational systems must focus on students' strengths rather than their weaknesses.
Furthermore, the article suggested that traditional curricula might need to change to make way for personalized education. Students could use AI and other resources to follow their interests and passions. This might also eliminate the requirement for age-based classes and promote learning with tools, resources, experts, and peers who share the same interests, rather than grouping students by age.
Besides personalized education, AI can also be effective in facilitating project-based learning. The article noted that AI tools can help schools inculcate skills such as problem solving and independent thinking in students, transforming them into individuals with critical thinking and analytical skills.
Integrating AI into the education system will also transform the role of teachers. They would become coaches and mentors who work with students to help them identify their strengths and potential and guide them to become the best versions of themselves. They would also need to stay up to date on AI tools and help students utilize AI as a learning partner.
Traditional educational systems resist change, and now, with the advent of AI, there are several more incentives for changing how schools operate. In the article, Professor Zhao examines the question of whether and how educational systems can change in the age of AI. By considering broad-based changes to the schooling system, he suggests that the true potential of AI in learning can be unlocked.
"AI is no doubt a powerful technology, but it is easy to underestimate its power. Uses in the traditional classroom to assist students and teachers in learning and teaching helps, but they also minimize the transformative power of AI," Professor Zhao observes.
“Schools could be transformed with the advancement of technology, especially generative AI. The changes should start with student-driven personalized learning and problem-oriented pedagogy,” he concludes.
Provided by University of Kansas
Benefits of Artificial Intelligence in Education
AI stands as a catalyst, reshaping the landscape of K-12 education for a more inclusive, engaging, and effective learning journey.
Key points:
- AI in education makes for a more dynamic and effective educational experience for students and educators alike
- The impact of artificial intelligence in education and on students is profound
- Discover more about why AI in education is essential for learning
Unlocking a new era of learning, AI offers education a myriad of benefits. From personalized learning to innovative content creation, AI transforms the traditional classroom. Explore how these advancements promise to enhance the educational experience, preparing students for a future driven by technology and adaptability.
How is artificial intelligence used in teaching?
Artificial intelligence is increasingly integrated into K-12 teaching, revolutionizing traditional approaches to instruction via AI tools for education. Adaptive learning platforms leverage AI algorithms to tailor educational content, accommodating diverse learning styles and pacing. These platforms analyze student data to provide personalized feedback, fostering a more engaging and effective learning experience.
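As a purely illustrative sketch (not drawn from any specific platform named here), the core loop of such an adaptive system can be thought of as a per-skill mastery estimate that updates after each answer and determines what the student sees next. The threshold and update rule below are hypothetical placeholders:

```python
# Hypothetical sketch of an adaptive learning loop: track a per-skill
# mastery estimate for one student and serve the weakest unmastered skill.

MASTERY_THRESHOLD = 0.8  # assumed cutoff for treating a skill as "mastered"

def update_mastery(mastery, correct, rate=0.3):
    """Nudge the estimate toward 1 after a correct answer, toward 0 otherwise."""
    target = 1.0 if correct else 0.0
    return mastery + rate * (target - mastery)

def next_skill(mastery_by_skill):
    """Pick the lowest-mastery unmastered skill; None when all are mastered."""
    unmastered = {s: m for s, m in mastery_by_skill.items()
                  if m < MASTERY_THRESHOLD}
    if not unmastered:
        return None
    return min(unmastered, key=unmastered.get)

# Example session: one correct answer on fractions, then choose what to serve next.
skills = {"fractions": 0.5, "decimals": 0.9}
skills["fractions"] = update_mastery(skills["fractions"], correct=True)
print(next_skill(skills))  # fractions is still below the threshold, so it repeats
```

Real platforms replace the toy update rule with statistical models fit to large amounts of response data, but the select-observe-update cycle is the same shape.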
AI-driven virtual tutors complement classroom instruction by offering immediate support and clarification on various topics. These systems can adapt to individual students’ needs, providing additional resources and challenges based on their progress.
Content creation is enriched by AI, with tools generating interactive learning materials. Virtual labs, simulations, and educational games developed with AI captivate students’ interest, making learning more enjoyable and impactful.
AI is also employed in automating administrative tasks, streamlining processes such as grading and attendance tracking. This efficiency allows educators to allocate more time to direct student interaction and instructional creativity.
Furthermore, AI contributes to data-driven decision-making. Educators can analyze patterns in student performance, identify areas for improvement, and refine teaching strategies accordingly.
Collaborative learning environments benefit from AI-driven tools that facilitate communication and teamwork. Virtual assistants powered by AI can answer student queries, offering support beyond traditional classroom hours.
AI in K-12 teaching enhances personalized learning, provides virtual tutoring support, enriches content creation, streamlines administrative tasks, facilitates data-driven decisions, and fosters collaborative learning environments, collectively contributing to a more dynamic and effective educational experience for students and educators alike.
What are the benefits of artificial intelligence in the education sector?
Artificial intelligence brings a plethora of benefits to education, transforming traditional teaching methods and enriching the learning experience. Personalized learning is a key advantage, with AI algorithms analyzing individual student data to tailor educational content. This adaptability ensures that students can learn at their own pace, enhancing understanding and engagement.
Virtual tutors powered by AI offer real-time feedback and support, supplementing traditional classroom instruction. These systems foster independent learning, critical thinking, and problem-solving skills. Content creation is revolutionized by AI, enabling the development of interactive learning materials such as simulations and virtual labs, making education more engaging and effective.
Administratively, AI automates routine tasks, such as grading and attendance tracking, freeing up educators to focus on more interactive and creative teaching strategies. Data-driven decision-making is enhanced through AI analytics, providing valuable insights into student performance and allowing for more informed adjustments to teaching methodologies.
Additionally, AI promotes inclusivity by catering to diverse learning styles and addressing individual needs. It also facilitates early intervention strategies, identifying learning gaps and providing timely support to prevent students from falling behind.
Furthermore, AI contributes to the development of adaptive assessments that provide a nuanced understanding of students’ capabilities. Collaborative learning environments benefit from AI-driven tools that facilitate communication and teamwork.
The benefits of AI in education include personalized learning, virtual tutoring, content innovation, administrative efficiency, data-driven decision-making, inclusivity, early intervention, and enhanced collaborative learning, collectively reshaping the educational landscape for a more adaptive and effective future.
What is the impact of artificial intelligence on students?
The impact of artificial intelligence in education and on students is profound, influencing various aspects of their learning experiences and personal development. One significant impact is personalized learning. AI algorithms analyze individual student data to tailor educational content, accommodating diverse learning styles and fostering a more customized and engaging educational journey.
AI facilitates a more inclusive learning environment. By addressing individual needs and providing adaptive learning experiences, AI contributes to leveling the educational playing field, ensuring that students with diverse abilities and backgrounds receive appropriate support.
AI’s impact extends beyond academics. It nurtures skills crucial for the future workforce, such as problem-solving, adaptability, and technological literacy. Exposure to AI-driven tools and technologies equips students with valuable insights into the digital landscape, preparing them for the challenges and opportunities of an increasingly technology-driven world.
However, challenges exist, including concerns about data privacy, ethical considerations, and the potential for overreliance on technology. Striking a balance between leveraging the benefits of AI and addressing these challenges is essential to ensure a positive and transformative impact on students’ educational experiences and overall development.
What is the role of artificial intelligence in teaching and learning?
The role of AI in teaching and learning extends beyond tutoring and administrative tasks, encompassing a range of innovative applications that enhance the overall educational experience. Among examples of artificial intelligence in teaching and learning is that of content creation. AI-driven tools facilitate the development of interactive and adaptive learning materials, such as virtual labs, simulations, and educational games. These resources make learning more engaging, fostering a deeper understanding of complex subjects.
Additionally, AI contributes to the creation of adaptive assessments. By analyzing students’ responses and performance patterns, AI can tailor assessments to individual learning levels, providing a more nuanced evaluation of their capabilities.
Moreover, AI enhances the development of collaborative learning environments. AI-driven tools facilitate communication and teamwork among students, promoting interactive discussions and group projects. This collaborative aspect helps students develop crucial interpersonal and communication skills.
Furthermore, AI enables the analysis of vast amounts of educational data to identify learning trends. Educators can use these insights to refine curriculum design, adapt teaching strategies, and address areas where students may be struggling.
AI in teaching and learning is instrumental in content creation, adaptive assessments, collaborative learning environments, and data-driven decision-making. These applications collectively contribute to a more dynamic, engaging, and effective educational landscape, preparing students for success in a rapidly evolving world.
What are the potential benefits of AI in education?
The importance of artificial intelligence in education cannot be overstated. One key advantage is the facilitation of adaptive learning environments. AI can analyze students’ progress and dynamically adjust content to their individual needs, promoting a personalized and effective learning experience. This adaptability ensures that students can explore subjects at their own pace, reinforcing understanding and engagement.
Furthermore, AI contributes to content enrichment by generating interactive learning materials. Virtual labs, simulations, and educational games created with AI make education more immersive and enjoyable, fostering a deeper comprehension of complex topics.
AI-driven data analytics provide valuable insights for educators. By analyzing trends in student performance, educators can make data-informed decisions, refining teaching strategies and adapting curricula to better meet the needs of diverse learners.
In terms of assessment, AI can support the development of dynamic and adaptive evaluation methods. These assessments go beyond traditional testing, offering a more comprehensive understanding of students’ cognitive abilities, critical thinking skills, and problem-solving capabilities.
Moreover, AI facilitates the cultivation of 21st-century skills, such as creativity and collaboration. Virtual assistants and chatbots powered by AI can enhance communication and teamwork, preparing students for the demands of the modern workplace.
In essence, the potential benefits of AI in education encompass personalized learning, content enrichment, data-driven decision-making, adaptive assessments, and the cultivation of essential skills, collectively contributing to a more effective, engaging, and student-centric educational experience.
How will AI improve the role of education?
The future of AI in education will bring about significant improvements across various dimensions. Firstly, AI contributes to personalized learning by analyzing vast amounts of student data to tailor educational content based on individual learning styles, preferences, and capabilities. This adaptability ensures that each student receives a customized learning experience, promoting deeper understanding and engagement.
Secondly, AI enhances the efficiency of administrative tasks in education. Routine responsibilities like grading, attendance tracking, and resource management can be automated, allowing educators to dedicate more time to interactive teaching and fostering meaningful student interactions.
Furthermore, AI stimulates innovation in content creation. Virtual tutors, educational games, and interactive simulations powered by AI create immersive and engaging learning experiences, making education more appealing and accessible to students.
Additionally, AI aids in early intervention and support by continuously monitoring student performance. Educators can identify learning gaps or areas of difficulty, allowing for timely interventions and personalized support to prevent students from falling behind.
Moreover, AI provides valuable insights through data analytics. By analyzing patterns in student performance, AI assists in making informed decisions, refining teaching methods, and identifying areas for improvement in the education system.
In summary, AI improves education by offering personalized learning experiences, automating administrative tasks, fostering content innovation, enabling early intervention, and providing data-driven insights. These enhancements collectively create a more adaptive, efficient, and engaging educational environment, preparing students for success in a rapidly evolving world.
How does AI help in education?
AI education tools play a pivotal role in transforming education by offering a wide range of benefits. One of AI’s primary contributions is personalized learning. AI leverages advanced algorithms to analyze individual student data, such as learning styles and preferences, enabling the creation of tailored educational content. This personalized approach ensures that students receive instruction and materials that align with their unique needs, enhancing comprehension and engagement.
AI also facilitates the automation of administrative tasks, streamlining the workload for educators. Tasks like grading, attendance tracking, and resource management can be efficiently handled by AI systems, allowing teachers to allocate more time to direct student interaction, feedback, and instructional creativity.
Content creation is another area where AI brings innovation. Intelligent tutoring systems and virtual assistants powered by AI generate interactive and adaptive learning materials. These may include virtual labs, educational games, and simulations, offering students dynamic and immersive educational experiences that go beyond traditional methods.
Moreover, AI supports early intervention strategies by continuously monitoring student performance. It can identify learning gaps or areas of difficulty, enabling educators to provide timely support and personalized interventions to ensure that students stay on track.
Furthermore, AI enhances data-driven decision-making for educators. By analyzing patterns in student performance, educators gain valuable insights that inform instructional strategies, curriculum development, and overall improvements to the educational system.
In summary, AI in education helps in personalizing learning experiences, automating administrative tasks, fostering content innovation, enabling early intervention, and providing valuable data-driven insights. These contributions collectively enhance the educational landscape, creating a more adaptive, efficient, and student-centric environment.
How does AI help teachers?
AI serves as a valuable ally for teachers, offering support and enhancing their roles in various ways beyond mere automation. Looking at AI in education examples, one significant contribution is personalized assistance. AI-driven tools can analyze individual student data, helping teachers understand students’ learning styles, strengths, and weaknesses. This enables educators to tailor their teaching methods and materials to better meet the diverse needs of their students.
Moreover, AI assists in content creation. Virtual tutors and intelligent systems powered by AI generate interactive and adaptive learning materials. These resources, such as educational games, simulations, and virtual labs, not only engage students but also provide teachers with innovative tools to make lessons more dynamic and captivating.
AI also aids in data analysis. By processing vast amounts of student performance data, AI offers insights into learning trends and areas that may require additional attention. This data-driven approach enables teachers to make informed decisions about their instructional strategies, allowing for continuous improvement.
Furthermore, AI can be a valuable partner in professional development. It can recommend relevant resources, suggest effective teaching strategies based on data analysis, and provide insights into emerging educational trends. This empowers teachers to stay updated with the latest advancements in education and refine their practices.
Additionally, AI enhances communication and collaboration. Virtual assistants and chatbots powered by AI can handle routine queries, allowing teachers more time for meaningful interactions with students. AI-driven collaborative tools also facilitate communication among educators, fostering the exchange of ideas and best practices.
In essence, AI is not just about automating tasks but actively supporting and empowering teachers. By offering personalized insights, aiding in content creation, assisting in data analysis, facilitating professional development, and improving communication, AI becomes a valuable tool that enhances the overall teaching experience.
The benefits of AI in K-12 education are transformative, fostering personalized learning, adaptive content creation, and efficient administrative processes. AI’s capacity to provide data-driven insights supports educators in refining teaching strategies and addressing individual student needs. Moreover, innovative tools driven by AI enhance collaboration, communication, and professional development. This amalgamation of personalized, adaptive, and efficient elements not only enhances the overall educational experience but also equips students and educators alike with the tools necessary for success in a dynamic and technology-driven future. AI stands as a catalyst, reshaping the landscape of K-12 education for a more inclusive, engaging, and effective learning journey.
Students Worry Overemphasis on AI Could Devalue Education
Report stresses that AI is “new standard” and universities need to better communicate policies to learners.
By Juliette Rowsell for Times Higher Education
Rising use of AI in higher education could cause students to question the quality and value of education they receive, a report warns.
This year’s Digital Education Council Global AI Student Survey, of more than 3,800 students from 16 countries, found that more than half (55 percent) believed overuse of AI within teaching devalued education, and 52 percent said it negatively impacted their academic performance.
Courses primarily created and delivered by AI were less favorably perceived by students, with only 18 percent saying they are more valuable than traditional courses.
“Students do not want to become over-reliant on AI, and they do not want their professors to do so either. Most students want to incorporate AI into their education, yet also perceive the dangers of becoming over-reliant on AI,” the report says.
Despite this, significant numbers of students admitted to using such technology. Some 86 percent said they “regularly” used programs such as ChatGPT in their studies, 54 percent said they used it on a weekly basis, and 24 percent said they used it to write a first draft of a submission.
However, Danny Bielik, president of the Digital Education Council, said he expected this figure to be higher, and he believes many students are too nervous to admit using AI in their essays because of stigmatization and uncertainty over whether AI is permitted at their university.
The survey found 86 percent of students said they were not fully aware of the AI guidelines at their university, and Bielik said students often receive “conflicting information,” highlighting the need for greater communication between students, staff and wider university ecosystems.
“Often communication happens at the faculty level. So, students might be in one class being told one thing about what they are and aren’t allowed to use generative AI for, and then they go into another class and they’re told something completely different,” Bielik said.
The report comes as universities step up their efforts to detect AI in student essays amid increasingly prevalent AI drafting. Turnitin claims that 11 percent of the 200 million submissions it has received featured "at least" 20 percent AI-drafted content, and a recent study found that undetected ChatGPT-generated exam answers scored more highly than those of real students.
Many respondents said they felt they lacked sufficient AI knowledge and skills (reported by 58 percent of students), and the report adds that they increasingly expect training to be incorporated into their studies; it calls training staff members on AI a "baseline expectation" before integrating it into courses.
“The application and use of AI in education is becoming the new standard, and universities will need to take steps to ensure that they have appropriate measures and preparations in place to guide AI integration in their organizations,” the report says.
“Students are as conflicted as the rest of us,” Bielik said. “Communication is more than just writing policies and publishing them on your website.”
Classes across the country help seniors interact with a world altered by AI
NORTHFIELD, Ill. (AP) — The students — most with gray hair, some with canes, all at least in their 60s — couldn’t believe what they were hearing.
“Oh, my God,” whispered a retired college professor.
“Does it come with viruses?” wondered a bewildered woman scribbling notes in the second row.
A 79-year-old in a black-and-white floral shirt then asked the question on many minds: “How do you know if it is fake or not?”
This is how older adults — many of whom lived through the advent of refrigeration, the transition from radio to television and the invention of the internet — are grappling with artificial intelligence: taking a class. Sitting in a classroom in an airy senior center in a Chicago suburb, the dozen students were learning about the latest — and possibly greatest — technological leap in their lives.
And they are not alone. Across the country, scores of such classes have sprung up to teach seniors about AI’s ability to transform their lives and the threats the technology poses.
“I saw ice boxes turn into refrigerators, that is how long I have been around,” said Barbara Winston, 89, who paid to attend the class put on at the North Shore Senior Center in Northfield. “And I think this is probably the greatest technical revolution that I will see in my lifetime.”
Older adults find themselves in a unique moment with technology. Artificial intelligence offers significant benefits for seniors, from the ability to curb loneliness to making it easier for them to get to medical appointments.
But it also has drawbacks that are uniquely threatening to this older group of Americans: A series of studies have found that senior citizens are more susceptible to both scams perpetrated using artificial intelligence and believing the types of misinformation that are being supercharged by the technology. Experts are particularly concerned about the role deepfakes and other AI-produced misinformation could play in politics.
Winston left the class to start her own AI journey, even if others remained skeptical. When she got home, the retired professor downloaded books on the technology, researched the platforms she wanted to use from her kitchen table and eventually queried ChatGPT about how to treat a personal medical ailment.
“This is the beginning of my education,” she said, her floral cup of coffee nearby. “I’m not worried about protecting myself. I’m too old to worry about that.”
Classes like these aim to familiarize aging early adopters with the myriad ways the technology could better their lives but also encourage skepticism about how artificial intelligence can distort the truth.
Balanced skepticism, say experts on the technology, is critical for seniors who plan to interact with AI.
“It’s tricky,” said Michael Gershbein, the instructor of the class in Northfield. “Overall, the suspicion that is there on the part of seniors is good but I don’t want them to become paralyzed from their fears and not be willing to do anything online.”
The questions in his class outside Chicago ranged from the absurd to the practical to the academic. Why are so many new shoes no longer including shoelaces? Can AI create a multiday itinerary for a visit to Charleston, South Carolina? What are the geopolitical implications of artificial intelligence?
Gershbein, who teaches classes on a range of technological topics, said interest in AI has ballooned in the last nine months. The 52-year-old teaches an AI course once or twice a week, he said, and aims to create a “safe space where (seniors) can come in and we can discuss all the issues they may be hearing bits and pieces of but we can put it all together and they can ask questions.”
During a 90-minute-long session on a June Thursday, Gershbein discussed deepfakes — videos that use generative AI to make it appear someone said something they did not. When he played a few deepfakes, the seniors sat agog. They could not believe how real the fakes seemed. There are widespread concerns that such videos could be used to trick voters, especially seniors.
The threats to seniors go beyond politics, however, and range from basic misinformation on social media sites to scams that use voice-cloning technology to trick them. An AARP report published last year said that Americans over 60 lose $28.3 billion annually to financial extortion schemes, some assisted by AI.
Experts from the National Council on Aging, an organization established in 1950 to advocate for seniors, said classes on AI at senior centers have increased in recent years and are at the forefront of digital literacy efforts.
“There’s a myth out there that older adults don’t use technology. We know that that’s not true,” said Dianne Stone, associate director at the National Council on Aging who ran a senior center in Connecticut for over two decades. Such courses, she said, are meant to foster a “healthy skepticism” in what the technology can do, arming older Americans with the knowledge “that not everything you hear is true, it’s good to get the information, but you have to kind of sort it out for yourself.”
Striking that balance, said Siwei Lyu, a University at Buffalo professor, can be difficult, and classes tend to either promote AI’s benefits or focus on its dangers.
“We need this kind of education for seniors, but the approach we take has to be very balanced and well-designed,” said Lyu, who has lectured to seniors and other groups.
Seniors who have taken such AI classes said they came away with a clear understanding of AI’s benefits and pitfalls.
“It’s only as good as the people who program it, and the users need to understand that. You really have to question it,” said Linda Chipko, a 70-year-old who attended an AI class in June in suburban Atlanta.
Chipko said she took the class because she wanted to “understand” AI, but on her way out said, “It’s not for me.”
Others have even embraced it. Ruth Schneiderman, 77, used AI to help illustrate a children’s book she was writing, and that experience sparked her interest in taking the Northfield class to learn more about the technology.
“My mother lived until she was 90,” Schneiderman said, “and I learned from her if you want to survive in this world, you have to adjust to the change. Otherwise you are left behind.”
The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.