How technology is reinventing education

Stanford Graduate School of Education Dean Dan Schwartz and other education scholars weigh in on what's next for some of the technology trends taking center stage in the classroom.

Image credit: Claire Scully

New advances in technology are upending education, from the recent debut of new artificial intelligence (AI) chatbots like ChatGPT to the growing accessibility of virtual-reality tools that expand the boundaries of the classroom. For educators, at the heart of it all is the hope that every learner gets an equal chance to develop the skills they need to succeed. But that promise is not without its pitfalls.

“Technology is a game-changer for education – it offers the prospect of universal access to high-quality learning experiences, and it creates fundamentally new ways of teaching,” said Dan Schwartz, dean of Stanford Graduate School of Education (GSE), who is also a professor of educational technology at the GSE and faculty director of the Stanford Accelerator for Learning. “But there are a lot of ways we teach that aren’t great, and a big fear with AI in particular is that we just get more efficient at teaching badly. This is a moment to pay attention, to do things differently.”

For K-12 schools, this year also marks the end of the Elementary and Secondary School Emergency Relief (ESSER) funding program, which has provided pandemic recovery funds that many districts used to invest in educational software and systems. With these funds running out in September 2024, schools are trying to determine their best use of technology as they face the prospect of diminishing resources.

Here, Schwartz and other Stanford education scholars weigh in on some of the technology trends taking center stage in the classroom this year.

AI in the classroom

In 2023, the big story in technology and education was generative AI, following the introduction of ChatGPT and other chatbots that produce text seemingly written by a human in response to a question or prompt. Educators immediately worried that students would use the chatbot to cheat by trying to pass its writing off as their own. As schools move to adopt policies around students’ use of the tool, many are also beginning to explore potential opportunities – for example, to generate reading assignments or coach students during the writing process.

AI can also help automate tasks like grading and lesson planning, freeing teachers to do the human work that drew them into the profession in the first place, said Victor Lee, an associate professor at the GSE and faculty lead for the AI + Education initiative at the Stanford Accelerator for Learning. “I’m heartened to see some movement toward creating AI tools that make teachers’ lives better – not to replace them, but to give them the time to do the work that only teachers are able to do,” he said. “I hope to see more on that front.”

He also emphasized the need to teach students now to begin questioning and critiquing the development and use of AI. “AI is not going away,” said Lee, who is also director of CRAFT (Classroom-Ready Resources about AI for Teaching), which provides free resources to help teach AI literacy to high school students across subject areas. “We need to teach students how to understand and think critically about this technology.”

Immersive environments

The use of immersive technologies like augmented reality, virtual reality, and mixed reality is also expected to surge in the classroom, especially as new high-profile devices integrating these realities hit the marketplace in 2024.

The educational possibilities now go beyond putting on a headset and experiencing life in a distant location. With new technologies, students can create their own local interactive 360-degree scenarios, using just a cell phone or inexpensive camera and simple online tools.

“This is an area that’s really going to explode over the next couple of years,” said Kristen Pilner Blair, director of research for the Digital Learning initiative at the Stanford Accelerator for Learning, which runs a program exploring the use of virtual field trips to promote learning. “Students can learn about the effects of climate change, say, by virtually experiencing the impact on a particular environment. But they can also become creators, documenting and sharing immersive media that shows the effects where they live.”

Integrating AI into virtual simulations could also soon take the experience to another level, Schwartz said. “If your VR experience brings me to a redwood tree, you could have a window pop up that allows me to ask questions about the tree, and AI can deliver the answers.”

Gamification

Another trend expected to intensify this year is the gamification of learning activities, often featuring dynamic videos with interactive elements to engage and hold students’ attention.

“Gamification is a good motivator, because one key aspect is reward, which is very powerful,” said Schwartz. The downside? Rewards are specific to the activity at hand, which may not extend to learning more generally. “If I get rewarded for doing math in a space-age video game, it doesn’t mean I’m going to be motivated to do math anywhere else.”

Gamification sometimes tries to make “chocolate-covered broccoli,” Schwartz said, by adding art and rewards to make speeded response tasks involving single-answer, factual questions more fun. He hopes to see more creative play patterns that give students points for rethinking an approach or adapting their strategy, rather than only rewarding them for quickly producing a correct response.

Data-gathering and analysis

The growing use of technology in schools is producing massive amounts of data on students’ activities in the classroom and online. “We’re now able to capture moment-to-moment data, every keystroke a kid makes,” said Schwartz – data that can reveal areas of struggle and different learning opportunities, from solving a math problem to approaching a writing assignment.

But outside of research settings, he said, that type of granular data – now owned by tech companies – is more likely used to refine the design of the software than to provide teachers with actionable information.

The promise of personalized learning is being able to generate content aligned with students’ interests and skill levels, and making lessons more accessible for multilingual learners and students with disabilities. Realizing that promise requires that educators can make sense of the data that’s being collected, said Schwartz – and while advances in AI are making it easier to identify patterns and findings, the data also needs to be in a system and form educators can access and analyze for decision-making. Developing a usable infrastructure for that data, Schwartz said, is an important next step.

With the accumulation of student data comes privacy concerns: How is the data being collected? Are there regulations or guidelines around its use in decision-making? What steps are being taken to prevent unauthorized access? In 2023 K-12 schools experienced a rise in cyberattacks, underscoring the need to implement strong systems to safeguard student data.

Technology is “requiring people to check their assumptions about education,” said Schwartz, noting that AI in particular is very efficient at replicating biases and automating the way things have been done in the past, including poor models of instruction. “But it’s also opening up new possibilities for students producing material, and for being able to identify children who are not average so we can customize toward them. It’s an opportunity to think of entirely new ways of teaching – this is the path I hope to see.”

Illustration: Deena So Oteh

Yes, AI could profoundly disrupt education. But maybe that’s not a bad thing

Humans need to excel at things AI can’t do – and that means more creativity and critical thinking and less memorisation

Education strikes at the heart of what makes us human. It drives the intellectual capacity and prosperity of nations. It has developed the minds that took us to the moon and eradicated previously incurable diseases. And the special status of education is why generative AI tools such as ChatGPT are likely to profoundly disrupt this sector. This isn’t a reflection of their intelligence, but of our failure to build education systems that nurture and value our unique human intelligence.

We are being duped into believing these AI tools are far more intelligent than they really are. A tool like ChatGPT has no understanding or knowledge. It merely collates bits of words together based on statistical probabilities to produce useful texts. It is an incredibly helpful assistant.

But it is not knowledgeable, or wise. It has no concept of how any of the words it produces relate to the real world. The fact that it can pass so many forms of assessment merely reflects that those assessments were not designed to test knowledge and understanding but rather to test whether people had collected and memorised information.
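To make the "statistical probabilities" point concrete, here is a deliberately toy sketch in Python. It is not how ChatGPT works internally – real systems use large neural networks trained over sub-word tokens, and the probability table below is invented purely for illustration – but it shows the basic idea of producing text by repeatedly sampling a likely next word.

```python
import random

# Invented, toy next-word probabilities -- purely illustrative, not learned from data.
NEXT_WORD_PROBS = {
    "education": {"drives": 0.4, "is": 0.4, "matters": 0.2},
    "drives": {"prosperity": 0.7, "learning": 0.3},
    "is": {"essential": 0.6, "changing": 0.4},
}

def continue_text(word, steps=2):
    """Extend a phrase by sampling each next word from the probability table."""
    words = [word]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("education"))  # e.g. "education drives prosperity"
```

The sketch produces fluent-looking fragments without any model of what the words mean, which is the point the paragraph above is making.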

AI could be a force for tremendous good within education. It could release teachers from administrative tasks, giving them more opportunities to spend time with students. However, we are woefully equipped to benefit from the AI that is flooding the market. It does not have to be like this. There is still time to prepare, but we must act quickly and wisely.

AI has been used in education for more than a decade. AI-powered systems, such as Carnegie Learning or Aleks, can analyse student responses to questions and adapt learning materials to meet their individual needs. AI tools such as TeachFX and Edthena can also enhance teacher training and support. To reap the benefits of these technologies, we must design effective ways to roll out AI across the education system, and regulate this properly.

Staying ahead of AI will mean radically rethinking what education is for, and what success means. Human intelligence is far more impressive than any AI system we see today. We possess a rich and diverse intelligence, much of which is unrecognised by our current education system.

We are capable of sophisticated, high-level thinking, yet the school curriculum, particularly in England, takes a rigid approach to learning, prioritising the memorising of facts, rather than creative thinking. Students are rewarded for rote learning rather than critical thought. Take the English syllabus, for instance, which requires students to learn quotations and the rules of grammar. This time-consuming work encourages students to marshal facts, rather than interpret texts or think critically about language.

Our education system should recognise the unique aspects of human intelligence. At school, this would mean a focus on teaching high-level thinking capabilities and designing a system to supercharge our intelligence. Literacy and numeracy remain fundamental, but now we must add AI literacy. Traditional subject areas, such as history, science and geography, should become the context through which critical thinking, increased creativity and knowledge mastery are taught. Rather than teaching students only how to collate and memorise information, we should prize their ability to interpret facts and weigh up the evidence to make an original argument.

Failure to change isn’t an option. Now these technologies are here, we need humans to excel at what AI cannot do, so any workplace automation complements and enriches our lives and our intelligence.

This should be an amazing opportunity to use AI to become much smarter, but we must ensure that AI serves us, not the other way round. This will mean confronting the profit-driven imperatives of big tech companies and the illusionist tricks played by Silicon Valley. It will also mean carefully considering what types of tasks we’re willing to offload to AI.

Some aspects of our intellectual activity may be dispensable, but many are not. While Silicon Valley conjures up its next magic trick, we must prepare ourselves to protect what we hold dear – for ourselves and for future generations.

Rose Luckin is professor of learner-centred design at the UCL Knowledge Lab in London

How AI can accelerate students’ holistic development and make teaching more fulfilling

AI can free up time in the classroom. Image: Kenny Eliason on Unsplash

Wendy Kopp

Bo Stjerne Thomsen

  • Advances in artificial intelligence (AI) could transform education systems and make them more equitable.
  • It can accelerate the long overdue transformation of education systems towards inclusive learning that will prepare young people to thrive and shape a better future.
  • At the same time, teachers can use these technologies to enhance their teaching practice and professional experience.

With the rapidly accelerating integration of artificial intelligence (AI) in our work, life, and classrooms, educators all over the world are re-evaluating the purpose of education in light of these outsized implications. At Teach For All and the LEGO Foundation, we see the potential of AI to accelerate the long overdue transformation of education systems towards inclusive learning that will prepare young people to thrive and shape a better future.

At the same time, we see huge opportunities for teachers to use these technologies to enhance their own teaching practice and professional experience.

Dialogue on the future of work and education has long emphasized the importance of developing skills and values that are uniquely human and less likely to be replaced by technology. The rise of ChatGPT is yet another proof point. Most students and teachers agree that ChatGPT is “just another example of why we can’t keep doing things the old way for schools in the modern world”. (Although please note that this blog was written in the "old way" as an interactive collaboration between humans, and not generated by AI.)

How AI tools can help educators and students

Our hope is that the advent of AI will spur educators, students, parents, and policy-makers to come together to consider what skills our students really need to navigate uncertainty, solve complex challenges and shape meaningful futures in a changing economy. This means embracing the challenge to provide learning that fosters agency, awareness, critical thinking and problem-solving skills, connectedness, and well-being. We already see that AI tools deployed by well-trained and well-supported teachers can be invaluable for accelerating progress towards this vision.

AI can help foster the skills students will need to navigate and shape the future. Tools like ChatGPT, as Dr. Kathy Hirsh-Pasek and Elias Blinkoff argue, can help promote students’ critical thinking when used in sophisticated ways for deeper, more engaged learning.

Vriti Saraf, CEO and founder of K20 Educators, in New York, agrees: “The less students need educators to be the main source of knowledge, the more educators can focus on developing the ability to curate, guide, critically assess learning, and help students gain skills that are so much more important than memorizing information.”

Teachers work about 50 hours a week, spending less than half of the time in direct interaction with students.

Noting the increased importance of social-emotional learning, Henry May, CEO of Co-School in Bogota, Colombia, notes that teachers have an essential role in shaping childhood experiences away from screens. So, "teachers must be trained on how to educate students on ethical principles; how to use AI tools appropriately; and how to mitigate the potential risk of AI to reduce human connection and belonging and increase loneliness."

Another potential benefit is that AI can free up time in the classroom. Teachers often cite unmanageable administrative tasks as their greatest source of exhaustion and burnout. By automating routine administrative tasks, AI could help streamline teacher workflows, giving them more time to build relationships with students and foster their learning and development.

Technology can help teachers reallocate 20-30% of their time toward activities that support student learning. Source: McKinsey 2020.

A classroom view of AI tools

Quim Sabría, a former teacher in Barcelona, Spain and co-founder of Edpuzzle, says AI could improve teacher productivity across areas like lesson planning and differentiation, grading and providing quality feedback, teacher-parent communication, and professional development.

In Lagos, Nigeria, teachers are beginning to see the efficiency and ease that AI brings to their work. Oluwaseun Kayode, who taught in Lagos and founded Schoolinka, is currently seeing an increasing number of teachers from across West Africa using AI to identify children’s literacy levels, uncover where students are struggling, and deepen personalized learning experiences.

In the US state of Illinois, a similar pattern is emerging: Diego Marin, an 8th-grade math teacher, likens ChatGPT to “a personalized 1:1 tutor that is super valuable for students.”

Equitable use of AI in education

We see tremendous promise for AI to spur educators around the world to reorient their energy away from routine administrative tasks towards accelerating students’ growth and learning, thus making teaching more fulfilling.

But we need to stay vigilant to ensure that AI is a force for equity and quality. We’ve all heard stories of technology being used to create shortcuts for lesson planning, and we must keep fine-tuning AI so that it does not replicate existing biases.

AI tools can be a catalyst for the transformation of our education systems but only with a commitment to a shared vision for equitable holistic education that gives all children the opportunity to thrive. To ensure AI benefits all students, including the most marginalized, we recommend being mindful of the following principles:

  • Co-creation: Bring together ed-tech executives and equity-conscious educators from diverse communities to collaborate on applications of AI that reflect strong pedagogy, address local needs and cultural contexts, and overcome existing biases and inequities.
  • Easy entry points: Support teachers to access and apply technologies to reduce administrative burdens and provide more personalized learning by providing open access resources and collaborative spaces to help them integrate AI into their work.
  • Digital literacy: Invest in IT fundamentals and AI literacy to mitigate the growing digital divide, ensuring access to devices, bandwidth, and digital literacy development for teachers and students to overcome barriers.
  • Best practice: Collect and share inspiring examples of teachers using technologies to support student voice, curiosity, and agency for more active forms of learning, to encourage other teachers to leverage AI in these ways.
  • Innovation and adaptation: Work with school leaders to support teacher professional development and foster a culture of innovation and adaptability. Recognize and reward teachers for new applications of AI and encourage peer-to-peer learning and specialized training.

Artificial intelligence in education

Artificial Intelligence (AI) has the potential to address some of the biggest challenges in education today, innovate teaching and learning practices, and accelerate progress towards SDG 4. However, rapid technological developments inevitably bring multiple risks and challenges, which have so far outpaced policy debates and regulatory frameworks. UNESCO is committed to supporting Member States to harness the potential of AI technologies for achieving the Education 2030 Agenda, while ensuring that its application in educational contexts is guided by the core principles of inclusion and equity.

UNESCO’s mandate calls inherently for a human-centred approach to AI. It aims to shift the conversation to include AI’s role in addressing current inequalities regarding access to knowledge, research and the diversity of cultural expressions, and to ensure AI does not widen the technological divides within and between countries. The promise of “AI for all” must be that everyone can take advantage of the technological revolution under way and access its fruits, notably in terms of innovation and knowledge.

Furthermore, within the framework of the Beijing Consensus, UNESCO has developed a publication aimed at fostering the readiness of education policy-makers in artificial intelligence. This publication, Artificial Intelligence and Education: Guidance for Policy-makers, will be of interest to practitioners and professionals in the policy-making and education communities. It aims to generate a shared understanding of the opportunities and challenges that AI offers for education, as well as its implications for the core competencies needed in the AI era.

Through its projects, UNESCO affirms that the deployment of AI technologies in education should enhance human capacities and protect human rights, enabling effective human-machine collaboration in life, learning and work, and supporting sustainable development. Working with partners and international organizations, and guided by the key values that underpin its mandate, UNESCO aims to strengthen its leading role in AI in education as a global laboratory of ideas, standard setter, policy advisor and capacity builder.

If you are interested in leveraging emerging technologies like AI to bolster the education sector, we look forward to partnering with you through financial, in-kind or technical advice contributions.

'We need to renew this commitment as we move towards an era in which artificial intelligence – a convergence of emerging technologies – is transforming every aspect of our lives (…),' said Ms Stefania Giannini, UNESCO Assistant Director-General for Education, at the International Conference on Artificial Intelligence and Education held in Beijing in May 2019. 'We need to steer this revolution in the right direction, to improve livelihoods, to reduce inequalities and promote a fair and inclusive globalization.'

Artificial Intelligence and Education: A Reading List

A bibliography to help educators prepare students and themselves for a future shaped by AI—with all its opportunities and drawbacks.

How should education change to address, incorporate, or challenge today’s AI systems, especially powerful large language models? What role should educators and scholars play in shaping the future of generative AI? The release of ChatGPT in November 2022 triggered an explosion of news, opinion pieces, and social media posts addressing these questions. Yet many are not aware of the current and historical body of academic work that offers clarity, substance, and nuance to enrich the discourse.

Linking the terms “AI” and “education” invites a constellation of discussions. This selection of articles is hardly comprehensive, but it includes explanations of AI concepts and provides historical context for today’s systems. It describes a range of possible educational applications as well as adverse impacts, such as learning loss and increased inequity. Some articles touch on philosophical questions about AI in relation to learning, thinking, and human communication. Others will help educators prepare students for civic participation around concerns including information integrity, impacts on jobs, and energy consumption. Yet others outline educator and student rights in relation to AI and exhort educators to share their expertise in societal and industry discussions on the future of AI.

Nabeel Gillani, Rebecca Eynon, Catherine Chiabaut, and Kelsey Finkel, “Unpacking the ‘Black Box’ of AI in Education,” Educational Technology & Society 26, no. 1 (2023): 99–111.

Whether we’re aware of it or not, AI was already widespread in education before ChatGPT. Nabeel Gillani et al. describe AI applications such as learning analytics and adaptive learning systems, automated communications with students, early warning systems, and automated writing assessment. They seek to help educators develop literacy around the capacities and risks of these systems by providing an accessible introduction to machine learning and deep learning as well as rule-based AI. They present a cautious view, calling for scrutiny of bias in such systems and inequitable distribution of risks and benefits. They hope that engineers will collaborate deeply with educators on the development of such systems.

Jürgen Rudolph, Samson Tan, and Shannon Tan, “ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education?” The Journal of Applied Learning and Teaching 6, no. 1 (January 24, 2023).

Jürgen Rudolph et al. give a practically oriented overview of ChatGPT’s implications for higher education. They explain the statistical nature of large language models as they tell the history of OpenAI and its attempts to mitigate bias and risk in the development of ChatGPT. They illustrate ways ChatGPT can be used with examples and screenshots. Their literature review shows the state of artificial intelligence in education (AIEd) as of January 2023. An extensive list of challenges and opportunities culminates in a set of recommendations that emphasizes explicit policy as well as expanding digital literacy education to include AI.

Emily M. Bender, Timnit Gebru, Angela McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021): 610–623.

Student and faculty understanding of the risks and impacts of large language models is central to AI literacy and civic participation around AI policy. This hugely influential paper details documented and likely adverse impacts of the current data-and-resource-intensive, non-transparent mode of development of these models. Bender et al. emphasize the ways in which these costs will likely be borne disproportionately by marginalized groups. They call for transparency around the energy use and cost of these models as well as transparency around the data used to train them. They warn that models perpetuate and even amplify human biases and that the seeming coherence of these systems’ outputs can be used for malicious purposes even though it doesn’t reflect real understanding.

The authors argue that inclusive participation in development can encourage alternate development paths that are less resource intensive. They further argue that beneficial applications for marginalized groups, such as improved automatic speech recognition systems, must be accompanied by plans to mitigate harm.

Erik Brynjolfsson, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” Daedalus 151, no. 2 (2022): 272–87.

Erik Brynjolfsson argues that when we think of artificial intelligence as aiming to substitute for human intelligence, we miss the opportunity to focus on how it can complement and extend human capabilities. Brynjolfsson calls for policy that shifts AI development incentives away from automation toward augmentation. Automation is more likely to result in the elimination of lower-level jobs and in growing inequality. He points educators toward augmentation as a framework for thinking about AI applications that assist learning and teaching. How can we create incentives for AI to support and extend what teachers do rather than substituting for teachers? And how can we encourage students to use AI to extend their thinking and learning rather than using AI to skip learning?

Kevin Scott, “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale,” Daedalus 151, no. 2 (2022): 75–84.

Brynjolfsson’s focus on AI as “augmentation” converges with Microsoft computer scientist Kevin Scott’s focus on “cognitive assistance.” Steering discussion of AI away from visions of autonomous systems with their own goals, Scott argues that near-term AI will serve to help humans with cognitive work. Scott situates this assistance in relation to evolving historical definitions of work and the way in which tools for work embody generalized knowledge about specific domains. He’s intrigued by the way deep neural networks can represent domain knowledge in new ways, as seen in the unexpected coding capabilities offered by OpenAI’s GPT-3 language model, which have enabled people with less technical knowledge to code. His article can help educators frame discussions of how students should build knowledge and what knowledge is still relevant in contexts where AI assistance is nearly ubiquitous.

Laura D. Tyson and John Zysman, “Automation, AI & Work,” Daedalus 151, no. 2 (2022): 256–71.

How can educators prepare students for future work environments integrated with AI and advise students on how majors and career paths may be affected by AI automation? And how can educators prepare students to participate in discussions of government policy around AI and work? Laura Tyson and John Zysman emphasize the importance of policy in determining how economic gains due to AI are distributed and how well workers weather disruptions due to AI. They observe that recent trends in automation and gig work have exacerbated inequality and reduced the supply of “good” jobs for low- and middle-income workers. They predict that AI will intensify these effects, but they point to the way collective bargaining, social insurance, and protections for gig workers have mitigated such impacts in countries like Germany. They argue that such interventions can serve as models to help frame discussions of intelligent labor policies for “an inclusive AI era.”

Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation: A Primer (RAND Corporation, 2022).

Educators’ considerations of academic integrity and AI text can draw on parallel discussions of authenticity and labeling of AI content in other societal contexts. Artificial intelligence has made deepfake audio, video, and images as well as generated text much more difficult to detect as such. Here, Todd Helmus considers the consequences to political systems and individuals as he offers a review of the ways in which these can and have been used to promote disinformation. He considers ways to identify deepfakes and ways to authenticate provenance of videos and images. Helmus advocates for regulatory action, tools for journalistic scrutiny, and widespread efforts to promote media literacy. As well as informing discussions of authenticity in educational contexts, this report might help us shape curricula to teach students about the risks of deepfakes and unlabeled AI.

William Hasselberger, “Can Machines Have Common Sense?” The New Atlantis 65 (2021): 94–109.

Students, by definition, are engaged in developing their cognitive capacities; their understanding of their own intelligence is in flux and may be influenced by their interactions with AI systems and by AI hype. In his review of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson, William Hasselberger warns that in overestimating AI’s ability to mimic human intelligence we devalue the human and overlook human capacities that are integral to everyday life decision making, understanding, and reasoning. Hasselberger provides examples of both academic and everyday common-sense reasoning that continue to be out of reach for AI. He provides a historical overview of debates around the limits of artificial intelligence and its implications for our understanding of human intelligence, citing the likes of Alan Turing and Marvin Minsky as well as contemporary discussions of data-driven language models.

Gwo-Jen Hwang and Nian-Shing Chen, “Exploring the Potential of Generative Artificial Intelligence in Education: Applications, Challenges, and Future Research Directions,” Educational Technology & Society 26, no. 2 (2023).

Gwo-Jen Hwang and Nian-Shing Chen are enthusiastic about the potential benefits of incorporating generative AI into education. They outline a variety of roles a large language model like ChatGPT might play, from student to tutor to peer to domain expert to administrator. For example, educators might assign students to “teach” ChatGPT on a subject. Hwang and Chen provide sample ChatGPT session transcripts to illustrate their suggestions. They share prompting techniques to help educators better design AI-based teaching strategies. At the same time, they are concerned about student overreliance on generative AI. They urge educators to guide students to use it critically and to reflect on their interactions with AI. Hwang and Chen don’t touch on concerns about bias, inaccuracy, or fabrication, but they call for further research into the impact of integrating generative AI on learning outcomes.

Lauren Goodlad and Samuel Baker, “Now the Humanities Can Disrupt ‘AI’,” Public Books (February 20, 2023).

Lauren Goodlad and Samuel Baker situate both academic integrity concerns and the pressures on educators to “embrace” AI in the context of market forces. They ground their discussion of AI risks in a deep technical understanding of the limits of predictive models at mimicking human intelligence. Goodlad and Baker urge educators to communicate the purpose and value of teaching with writing to help students engage with the plurality of the world and communicate with others. Beyond the classroom, they argue, educators should question tech industry narratives and participate in public discussion on regulation and the future of AI. They see higher education as resilient: academic skepticism about former waves of hype around MOOCs, for example, suggests that educators will not likely be dazzled or terrified into submission to AI. Goodlad and Baker hope we will instead take up our place as experts who should help shape the future of the role of machines in human thought and communication.

Kathryn Conrad, “Sneak Preview: A Blueprint for an AI Bill of Rights for Education,” Critical AI 2.1 (July 17, 2023).

How can the field of education put the needs of students and scholars first as we shape our response to AI, the way we teach about it, and the way we might incorporate it into pedagogy? Kathryn Conrad’s manifesto builds on and extends the Biden administration’s Office of Science and Technology Policy 2022 “Blueprint for an AI Bill of Rights.” Conrad argues that educators should have input into institutional policies on AI and access to professional development around AI. Instructors should be able to decide whether and how to incorporate AI into pedagogy, basing their decisions on expert recommendations and peer-reviewed research. Conrad outlines student rights around AI systems, including the right to know when AI is being used to evaluate them and the right to request alternate human evaluation. They deserve detailed instructor guidance on policies around AI use without fear of reprisals. Conrad maintains that students should be able to appeal any charges of academic misconduct involving AI, and they should be offered alternatives to any AI-based assignments that might put their creative work at risk of exposure or use without compensation. Both students’ and educators’ legal rights must be respected in any educational application of automated generative systems.

AI in the classroom: Why some teachers are embracing it

  • Deep Read (4 min.)
  • By Jackie Valley, Staff writer

January 23, 2024

“What we’re really trying to do is teach our students how to be critical thinkers,” says Sallie Holloway, director of artificial intelligence and computer science for Gwinnett County Public Schools. 

Why We Wrote This

Educators are trying to balance concerns about artificial intelligence with how to prepare students for using it in the future. What does teaching look like when AI is part of the curriculum?

What if instead of being wary about artificial intelligence, schools were embracing it?

In Georgia, educators in a district near Atlanta are thinking more about what students should be learning about AI. Their vision has started to become a reality. The youngest learners are having conversations about household appliances, such as Amazon’s Alexa. High schoolers can now take classes that culminate in using AI to solve real-world problems.

School-based computer labs once seemed futuristic. Now the internet is everywhere, and the next frontier – AI – is seeping into learning environments. ChatGPT’s debut more than a year ago brought ongoing concerns about cheating and the tool’s ability to mislead students. But some educators also see it as an opportunity for innovation.

David Touretzky chairs the Artificial Intelligence for K-12 Initiative, which began in 2018 after he realized computer science standards had barely mentioned the burgeoning AI field. The group has been developing instructional guidelines that establish what students should know about AI and how they should be able to use it.

Everything from smartphone apps to collision-avoidance systems on cars uses AI. And with the emergence of large language models like ChatGPT, which rely on vast amounts of data, AI will become even more embedded in daily life, Mr. Touretzky says, explaining the need for related education.

The Artificial Intelligence for K-12 Initiative has been piloting an AI elective course in some Georgia middle schools that includes topics such as autonomous robots and vehicles and how computers understand language and make decisions.

“The way kids relate to AI is going to depend a lot on how large language models find their way into our society,” says Mr. Touretzky, who is a research professor in computer science at Carnegie Mellon University. “We don’t understand yet what that’s going to look like.”

Preparing for the future

In October, the Center on Reinventing Public Education found that only two U.S. states – California and Oregon – had issued specific guidance surrounding AI education. 

Officials from Gwinnett County Public Schools liken the academic imperative to driving a car. Before people get behind the wheel, they need to know the rules of the road and the basics of vehicle maintenance. Artificial intelligence necessitates a similar education, they say.

An AI-ready student, as defined by the district, would be exposed to, if not well versed in, the areas of data science, mathematical reasoning, creative problem-solving, ethics, applied experiences, and programming. At a pilot group of schools, the district has embedded this learning framework in kindergarten through 12th grade classrooms.

“Students are hearing every day about how pervasive AI is and how it is in every industry,” says Ms. Holloway of the work she coordinates. “So I think they are interested to know more and really prepare themselves for their futures.”

Carter Watkins, a senior at Seckinger High School in Buford, part of the Gwinnett district, says the inclusion of AI throughout his core subjects has made him more eager to attend school. He particularly enjoyed a lab activity in a physics class, in which students leaned on AI to help determine which remote control car moved faster with different batteries.

“It’s not just like textbook work,” he says. “You’re putting your thinking cap on.”

As an aspiring business major who wants to go into sports marketing, Carter says the AI skills are helping him become a “more well-rounded individual.”

“AI is not all bad”

With or without official guidance, some educators are already demonstrating how AI can be used for good.

At Highlands Ranch High School outside of Denver, Principal Chris Page Jr. used artificial intelligence to develop clues for a homecoming-related scavenger hunt. He drew inspiration from AI and tinkered with the suggestions.

After students solved the mystery, he let them in on his secret. Dr. Page says he wants his staff and students to know there are benefits to AI if it is used correctly – just like prior technology that seemed unnerving at first.

“AI is not all bad,” he says. “What we know to be true is there are always educational shifts, and a lot of those are technological shifts in education.”

Experts say weaving AI into lesson plans will be a gradual process, partly because teachers need to learn the technology, too. An Education Week survey  conducted in the fall found that two-thirds of teachers had not used AI tools in the classroom. Of those, more than a third had no plans to use it, while about another third envision using it soon.

When educators at Dr. Page’s school use AI in the classroom, the lessons mostly familiarize students with its capabilities and drawbacks, he says. Teachers, for instance, may have AI generate a historical narrative and then have students dissect it to determine what’s true and false.

The “research-minded” project, he says, allows students to “truly become historians because that’s what historians do.”

In the Los Angeles area, a charter network called Da Vinci Schools has deployed AI in the form of Project Leo. The platform melds instructional standards with student interests, producing project ideas that personalize learning, says Steve Eno, who teaches mechanical engineering and is the director of Project Leo.

He says one student is devising a “trick analyzer” using AI technology that will offer recommendations about how skateboarders can improve their form. 

“I don’t think every kid is going to be interested in using AI to build,” Mr. Eno says. “I do think students need to know how it works and what’s possible with it.”

Sometimes Project Leo spits out ideas that are bad or that students don’t like. That’s OK, he says, because it’s showing them the imperfections surrounding AI. But when an AI-generated project idea sparks a student’s imagination, it leads to more engagement and, ultimately, a portfolio of work that can lead to future careers, Mr. Eno says.

“They need to be excited,” he says. “They need to be joyful, and then they’ll actually learn the stuff.” 

Khan Academy Blog

Top AI-in-Education Moments of 2023: The Year Artificial Intelligence Dominated Education News

posted on January 20, 2024

By Stephanie Yamkovenko, Head of Khan Academy’s Digital Marketing Team

What was the timeline of AI in education in 2023? The year 2023 started with school districts scrambling to respond to a newly launched free app called ChatGPT and ended with more than 40 school districts and 28,000 students and teachers piloting Khan Academy’s AI-powered tutor and teaching assistant, Khanmigo, in the classroom. AI and large language models dominated headlines throughout the year, and knowledge of generative AI became even more mainstream. Take a look at a timeline of AI in education and explore the top moments in 2023, including the number of times Khan Academy made headlines with our AI-powered tutor and teaching assistant, Khanmigo. 

2023 Timeline of AI in Education

January 2023

NYC education department banned ChatGPT on education department devices and internet networks. Districts and schools across the country grappled with how to respond to generative AI.

The Khan Academy team was working furiously to make Khanmigo the best possible AI-powered tutor and teaching assistant. Our Chief Learning Officer, Kristen DiCerbo, described our process in this Linkedin post .

“In September 2022, I didn’t know the term ‘prompt engineering’ but I wrote to our contacts at OpenAI that it needed ‘the right instructions’ to act like a tutor. We quickly learned how to write (‘engineer’) the instructions (‘prompts’) to the model to get it to ask questions. Even with those instructions, it really defaulted to answering questions…But we got better at prompt engineering. And we got new versions of the model, and they got better at following our instructions…I went back to two of the real experts in the field and read papers by Micki Chi and Art Graesser. Their work analyzing the moves of human tutors and this paper that summarizes the AutoTutor work over 17 years were foundational. We thought a lot about the back-and-forth between the tutor and the student, creating over 100 examples of the kinds of interactions we wanted.”
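As a minimal sketch of what "engineering the instructions" can look like in practice, the snippet below sends a tutor-style system prompt to a chat model through the OpenAI Python SDK. The prompt wording and model name are illustrative assumptions only; the actual prompts behind Khanmigo are far more elaborate and are not public.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tutor-style instructions: ask guiding questions, don't give answers.
TUTOR_PROMPT = (
    "You are a patient math tutor. Never state the final answer. "
    "Ask one short guiding question at a time and wait for the student's reply."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any chat-capable model works
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "What is x if 3x + 5 = 20?"},
    ],
)

print(response.choices[0].message.content)  # expect a guiding question, not "x = 5"
```

The design idea described in the quote above is that the system message, not the model itself, is what pushes the interaction toward back-and-forth tutoring rather than direct answers.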

February 2023

ChatGPT was estimated to have reached 100 million monthly active users in late January, making it the fastest-growing consumer application in history. In February, OpenAI announced a monthly premium subscription.

Google launched its AI chatbot Bard, Meta unveiled LLaMA, Microsoft integrated GPT into its Bing search engine, and Snapchat launched My AI.

March 2023

OpenAI released a new model, GPT-4, and as a launch partner, Khan Academy had early access to the model.

On March 14, Khan Academy released our new AI-powered tutor and teaching assistant, Khanmigo, in a limited pilot period. Founder and CEO Sal Khan spoke to NBC News about our pilot.

Khan Academy released a free course entitled AI for education, which featured practical tips and strategies for using generative AI to create engaging learning experiences and improve learning outcomes.

Educators embraced AI in the classroom. Instead of running away from AI and banning it in schools, the administrators and teachers who tested Khanmigo were excited about the possibilities and understood its benefits. “Hopefully we are showing how positive this can be,” said Sal Khan in a Washington Post interview. “Most people see the power here; they just want reasonable guardrails.”

U.S. Department of Commerce Secretary Raimondo convened a task force focused on keeping the U.S. competitive in the technologies expected to become key to economic and national security over the coming decades. Sal Khan and six other individuals would serve on the U.S. Section of the U.S.-EU Trade and Technology Council (TTC) Talent for Growth Task Force.

Sal Khan delivered a TED Talk at TED 2023 in Vancouver on how AI can save (not destroy) education. “We’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen,” he said in his talk. TED announced that Sal Khan’s talk was one of the top 10 most popular in 2023.

In May, the U.S. Department of Education released a report on artificial intelligence highlighting opportunities and possible risks of using AI to improve teaching and learning.

UNESCO brought together 40 education ministers from around the world to create a coordinated response to generative AI tools and to share policy approaches and plans on how best to integrate these tools into education.

Several school districts piloted Khanmigo with students and educators during the last few months of the 2022-23 school year. “It has been a game changer,” said a high school teacher at a partner district in Hobart, Indiana. “I’ve been able to differentiate in ways I haven’t been able to before.” A reporter from the New York Times witnessed Khan Academy’s AI-powered tutor in action at Khan Lab School and wrote that “automated study aides could usher in a profound shift in classroom teaching and learning.”

At one of the year’s biggest education technology conferences, 40 sessions, posters and presentations focused on AI in the classroom. Education leaders, teachers, coaches, and more shared everything from tips on using AI creatively in the classroom to how to make art, poetry, and music with AI. The Khan Academy team attended the conference and had a booth where educators were able to interact with live demos of Khanmigo.

Sal Khan met with President Biden and a group of AI experts on June 20 to discuss artificial intelligence’s benefits and risks. Sal shared how AI can benefit education by providing a tutor for every student and an assistant for every teacher.

Khan Academy reduced the price of Khanmigo and removed the waitlist. Thousands of parents, teachers, and adult learners signed up for access to the AI-powered tutor and teaching assistant.

President Biden announced that seven leading AI companies had voluntarily committed to managing the risks posed by AI. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI committed to working with the White House to move toward the safe, secure, and transparent development of AI technology.

Indiana Secretary of Education Katie Jenner announced Indiana’s AI-Powered Platform Pilot Grant to help schools and districts explore opportunities for AI to help support students and teachers. Indiana school districts were invited to apply for grants, and more than $2 million was awarded to 36 recipients (a total of 112 schools). These grants made it possible for more than 45,000 students statewide to interact with an AI-powered platform during the remainder of the 2023-2024 school year.

Thousands of educators attended Khan Academy’s webinar about how to use AI in the classroom. Alongside a group of other educators, Kristen DiCerbo, Khan Academy’s Chief Learning Officer, presented a webinar to teachers that answered questions about using AI to design lesson plans, integrate AI into the classroom routine, and save time.

August 2023

Khan Academy collaborated with Code.org, ETS, and ISTE to bring an AI 101 course to teachers, a free professional learning series that instructs every teacher on how to use AI safely and ethically in their classrooms.

Bill Gates interviewed Sal Khan on the Unconfuse Me with Bill Gates podcast to discuss how AI can help close the education gap. “I’ve been a fan and supporter of Sal Khan’s work for a long time and recently had him on my podcast to talk about education and AI,” Gates said on LinkedIn. “For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. These changes are finally starting to happen in a dramatic way. The current state of the art is Khanmigo, a text-based bot created by Khan Academy.”

IBM released a survey about the massive need to reskill the workforce. More than half of the 3,000 executives surveyed estimated that 40% of their workforce would need to be reskilled as their businesses implemented AI and automation over the next three years. Millions of workers raced to acquire new AI-related skills.

September 2023

Gallup released a survey citing fears among 22% of American workers that technology would make their jobs obsolete in the near future. The fear of tech grew more in the past two years than at any time since Gallup started measuring the trend in 2017. Other job concerns remained stable.

October 2023

Business Insider released its AI 100 list, which included people who were not only pushing the boundaries of the field but who were also trying to ensure that AI developed steadily and responsibly. Sal Khan was included on the list for launching Khanmigo and “embracing new technology again in the form of generative AI.”

Khan Academy announced free new courses in digital literacy . The courses were designed to help learners stay informed and learn digital literacy skills as they navigated the digital world. The courses included those on AI, internet safety, and social media literacy.

TeachAI released the AI Guidance for Schools Toolkit geared toward school district administrators, state education officials, and global education leaders. The toolkit provided useful guidance for setting policies and spotlighted examples of exemplary practices.

November 2023

Khan Academy further reduced the price of Khanmigo thanks to significant engineering strides that reduced the high computational costs of AI. As a nonprofit, Khan Academy’s goal has always been to ensure that cutting-edge AI educational tools like Khanmigo were not a luxury but instead a resource available to all. For $4/month or $44/year, parents, educators, and adult learners can enjoy unlimited access to Khanmigo. Khan Academy also reduced the district price from $60 per student/per year to $35 per student/per year. During this school year, more than 40 school districts and 28,000 students and teachers are piloting Khanmigo in the classroom. Learn more about our district Khanmigo offerings.

Common Sense Media launched inaugural AI product ratings so parents could better understand how AI works and find out which tools were the most trustworthy. Khanmigo received an overall 4-star rating from Common Sense Media, making it one of their top-rated AI-for-education tools, above others like ChatGPT and Bard.

Good Morning America showcased how Khanmigo is transforming the classroom experience and bridging the tutoring gap. The segment took viewers inside First Avenue Elementary School in Newark, New Jersey, to show Khanmigo in action.

Only two states—California and Oregon—have issued policy guidance for schools on AI platforms, according to a report by the Center on Reinventing Public Education (CRPE). Eleven other states are developing guidance: Arizona, Connecticut, Maine, Mississippi, Nebraska, New York, Ohio, Pennsylvania, Virginia, Vermont, and Washington.

December 2023

A research study by Stanford University found that AI tools have not boosted overall cheating rates in U.S. high schools, despite widespread fears at the beginning of the year as school administrators and educators grappled with the popularization of AI-powered chatbots.

Fortune hosted an AI conference to examine new business cases for AI with C-suite executives, leading academics, and prominent policymakers. Sal Khan spoke to the audience about why Khan Academy decided not to run away from AI but, instead, vowed to be the trusted brand that puts the right guardrails in place.

“In hindsight, it was a blessing that ChatGPT launched when it did,” Sal Khan said at the Fortune conference. “Because it allowed everyone to process a very imperfect tool and, frankly, by about March 2023, they started saying, ‘If only someone we could trust would put the right guardrails around this, so it avoids cheating, makes it transparent, helps solve the bias.’ Then we’re like, ‘Here you go.’”

In his annual end-of-year letter, Bill Gates praised Khanmigo and other advances in AI-powered education, calling Khanmigo and other AI learning tools “mind-blowing because they are tailored to each individual learner.” Gates described Khanmigo as “already remarkable” and expressed confidence that it and similar platforms will only get better in the years to come.

What will the timeline of AI in 2024 look like?

We hope you enjoyed this look back at the top moments from 2023. The year was a pivotal chapter, showcasing how AI can enhance learning and teaching, and the timeline of AI in education has only just begun, holding immense promise for the future.


Science News

How ChatGPT and similar AI will disrupt education

Teachers are concerned about cheating and inaccurate information

Students are turning to ChatGPT for homework help. Educators have mixed feelings about the tool and other generative AI.


By Kathryn Hulick

April 12, 2023 at 7:00 am

“We need to talk,” Brett Vogelsinger said. A student had just asked for feedback on an essay. One paragraph stood out. Vogelsinger, a ninth grade English teacher in Doylestown, Pa., realized that the student hadn’t written the piece himself. He had used ChatGPT.

The artificial intelligence tool, made available for free late last year by the company OpenAI, can reply to simple prompts and generate essays and stories. It can also write code.

Within a week, it had more than a million users. As of early 2023, Microsoft planned to invest $10 billion into OpenAI , and OpenAI’s value had been put at $29 billion, more than double what it was in 2021.

It’s no wonder other tech companies have been racing to put out competing tools. Anthropic, an AI company founded by former OpenAI employees, is testing a new chatbot called Claude. Google launched Bard in early February, and the Chinese search company Baidu released Ernie Bot in March.

A lot of people have been using ChatGPT out of curiosity or for entertainment. I asked it to invent a silly excuse for not doing homework in the style of a medieval proclamation. In less than a second, it offered me: “Hark! Thy servant was beset by a horde of mischievous leprechauns, who didst steal mine quill and parchment, rendering me unable to complete mine homework.”

But students can also use it to cheat. ChatGPT marks the beginning of a new wave of AI, a wave that’s poised to disrupt education.

When Stanford University’s student-run newspaper polled students at the university, 17 percent said they had used ChatGPT on assignments or exams at the end of 2022. Some admitted to submitting the chatbot’s writing as their own. For now, these students and others are probably getting away with it. That’s because ChatGPT often does an excellent job.

“It can outperform a lot of middle school kids,” Vogelsinger says. He might not have known his student had used it, except for one thing: “He copied and pasted the prompt.”

The essay was still a work in progress, so Vogelsinger didn’t see it as cheating. Instead, he saw an opportunity. Now, the student and AI are working together. ChatGPT is helping the student with his writing and research skills.

“[We’re] color-coding,” Vogelsinger says. The parts the student writes are in green. The parts from ChatGPT are in blue. Vogelsinger is helping the student pick and choose a few sentences from the AI to expand on — and allowing other students to collaborate with the tool as well. Most aren’t turning to it regularly, but a few kids really like it. Vogelsinger thinks the tool has helped them focus their ideas and get started.

This story had a happy ending. But at many schools and universities, educators are struggling with how to handle ChatGPT and other AI tools.

In early January, New York City public schools banned ChatGPT on their devices and networks. Educators were worried that students who turned to it wouldn’t learn critical-thinking and problem-solving skills. They also were concerned that the tool’s answers might not be accurate or safe. Many other school systems in the United States and around the world have imposed similar bans.

Keith Schwarz, who teaches computer science at Stanford, said he had “switched back to pencil-and-paper exams,” so students couldn’t use ChatGPT, according to the Stanford Daily .

Yet ChatGPT and its kin could also be a great service to learners everywhere. Like calculators for math or Google for facts, AI can make writing that often takes time and effort much faster. With these tools, anyone can generate well-formed sentences and paragraphs. How could this change the way we teach and learn?

Who said what?

When prompted, ChatGPT can craft answers that sound surprisingly like those from a student. We asked middle school and high school students from across the country, all participants in our Science News Learning education program , to answer some basic science questions in two sentences or less. The examples throughout the story compare how students responded with how ChatGPT responded when asked to answer the question at the same grade level.


What effect do greenhouse gases have on the Earth?

Agnes B. | Grade 11, Harbor City International School, Minn.

Greenhouse gases effectively trap heat from dissipating out of the atmosphere, increasing the amount of heat that remains near Earth in the troposphere.

ChatGPT: Greenhouse gases trap heat in the Earth’s atmosphere, causing the planet to warm up and leading to climate change and its associated impacts like sea level rise, more frequent extreme weather events and shifts in ecosystems.


The good, bad and weird of ChatGPT

ChatGPT has wowed its users. “It’s so much more realistic than I thought a robot could be,” says Avani Rao, a sophomore in high school in California. She hasn’t used the bot to do homework. But for fun, she’s prompted it to say creative or silly things. She asked it to explain addition, for instance, in the voice of an evil villain.

Given how well it performs, there are plenty of ways that ChatGPT could level the playing field for students and others working in a second language or struggling with composing sentences. Since ChatGPT generates new, original material, its text is not technically plagiarism.

Students could use ChatGPT like a coach to help improve their writing and grammar, or even to explain subjects they find challenging. “It really will tutor you,” says Vogelsinger, who had one student come to him excited that ChatGPT had clearly outlined a concept from science class.

Educators could use ChatGPT to help generate lesson plans, activities or assessments — perhaps even personalized to address the needs or goals of specific students.

Xiaoming Zhai, an expert in science education at the University of Georgia in Athens, tested ChatGPT to see if it could write an academic paper. He was impressed with how easy it was to summarize knowledge and generate good writing using the tool. “It’s really amazing,” he says.

All of this sounds wonderful, but really big problems exist. Most worrying, ChatGPT and other similar tools can often get things very wrong. They don’t pull facts from databases. Rather, they are trained to generate new text that sounds natural. They remix language without understanding it, which can lead to glaring mistakes.

The news website CNET came under fire earlier this year for using AI to churn out dozens of articles, many of them packed with errors. In an early advertisement, Google’s Bard chatbot made a factual error about the James Webb Space Telescope, incorrectly claiming the telescope had taken the very first picture of an exoplanet. And ChatGPT said in a conversation posted on Twitter that the fastest marine mammal was the peregrine falcon. A falcon, of course, is a bird and doesn’t live in the ocean.

ChatGPT is “confidently wrong,” says Casey Fiesler, an expert in the ethics of technology at the University of Colorado Boulder. “There are mistakes and bad information.” She has made multiple TikTok videos about the pitfalls of ChatGPT.

Most of ChatGPT’s training data come from before September 2021, and it does not provide sources for its information. If asked for sources, it makes them up, Fiesler revealed in one video. Zhai, who sees the tool as an assistant, discovered the exact same thing. When he asked ChatGPT for citations, it gave him sources that looked correct. But they didn’t actually exist.

Define the term biodiversity.

Dante A. | Grade 10, Clark Magnet High School, Calif.

Biodiversity refers to the variety of living species and ecosystems that exist in a particular region or on the planet as a whole. It encompasses the different genes, species and ecosystems that make up the natural world and the relationships between them.

ChatGPT: Biodiversity refers to the variety of living organisms that inhabit the Earth and the ecosystems they form. It includes the diversity of species, genes and ecosystems, and is important for maintaining the balance of nature and sustaining life on our planet.

How ChatGPT works

ChatGPT’s mistakes make sense if you know how it works. “It doesn’t reason. It doesn’t have ideas. It doesn’t have thoughts,” explains Emily M. Bender, a computational linguist at the University of Washington in Seattle.

ChatGPT was developed using at least two types of machine learning. The primary type is a large language model based on an artificial neural network. Loosely inspired by how neurons in the brain interact, this computing architecture finds statistical patterns in vast amounts of data.

A language model learns to predict what words will come next in a sentence or phrase by churning through vast amounts of text. It places words and phrases into a multidimensional map that represents their relationships to one another. Words that tend to come together, like peanut butter and jelly, end up closer together in this map.
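
To make that prediction objective concrete, here is a deliberately tiny sketch: a bigram counter over a made-up corpus that predicts the most likely next word. Real language models learn the same kind of statistics with neural networks over billions of words, so this illustrates the objective, not the architecture.

```python
# Toy next-word prediction: count which word follows which in a tiny corpus,
# then predict the most frequent continuation. The corpus below is made up.
from collections import Counter, defaultdict

corpus = "peanut butter and jelly . peanut butter and honey . bread and butter".split()

# For each word, count how often each possible next word appears after it.
next_word_counts: defaultdict[str, Counter] = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("peanut"))  # -> "butter" (always followed by "butter" here)
print(predict_next("and"))     # -> "jelly", "honey", or "butter" (each seen once)
```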

The size of an artificial neural network is measured in parameters. These internal values get tweaked as the model learns. In 2020, OpenAI released GPT-3. At the time, it was the biggest language model ever, containing 175 billion parameters. It had trained on text from the internet as well as digitized books and academic journals. Training text also included transcripts of dialog, essays, exams and more, says Sasha Luccioni, a Montreal-based researcher at Hugging Face, a company that builds AI tools.

OpenAI improved upon GPT-3 to create GPT-3.5. In early 2022, the company released a fine-tuned version of GPT-3.5 called InstructGPT. This time, OpenAI added a new type of machine learning. Called reinforcement learning with human feedback, it puts people into the training process. These workers check the AI’s output. Responses that people like get rewarded. Human feedback can also help reduce hurtful, biased or inappropriate responses. This fine-tuned language model powers freely available ChatGPT. As of March, paying users receive answers powered by GPT-4, a bigger language model.
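
The human-feedback step can be pictured as collecting preferences over candidate replies. The sketch below uses stand-in functions (a fake generator and a random “rater”) purely to show the shape of that data collection; the reward model and reinforcement-learning update that follow in a real pipeline are omitted.

```python
# Conceptual sketch of gathering human feedback: produce several candidate
# replies, let a rater pick the preferred one, and record the preference.
# A real pipeline fits a reward model to these preferences and then fine-tunes
# the language model with reinforcement learning; that part is not shown.
import random

def generate_candidates(prompt: str, n: int = 3) -> list[str]:
    # Stand-in for sampling several replies from a language model.
    return [f"{prompt} -> candidate reply #{i}" for i in range(n)]

def rater_score(reply: str) -> float:
    # Stand-in for a human rater; real workers pick which reply they prefer.
    return random.random()

preference_data = []
for prompt in ["Explain photosynthesis", "What is a fraction?"]:
    candidates = generate_candidates(prompt)
    ranked = sorted(candidates, key=rater_score, reverse=True)
    # The top-ranked reply is the one that "gets rewarded" downstream.
    preference_data.append({"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]})

print(preference_data[0])
```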

During ChatGPT’s development, OpenAI added extra safety rules to the model. It will refuse to answer certain sensitive prompts or provide harmful information. But this step raises another issue: Whose values are programmed into the bot, including what it is — or is not — allowed to talk about?

OpenAI is not offering exact details about how it developed and trained ChatGPT. The company has not released its code or training data. This disappoints Luccioni because it means the tool can’t benefit from the perspectives of the larger AI community. “I’d like to know how it works so I can understand how to make it better,” she says.

When asked to comment on this story, OpenAI provided a statement from an unnamed spokesperson. “We made ChatGPT available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems,” the statement said. “We are constantly incorporating feedback and lessons learned.” Indeed, some experimenters have gotten the bot to say biased or inappropriate things despite the safety rules. OpenAI has been patching the tool as these problems come up.

ChatGPT is not a finished product. OpenAI needs data from the real world. The people who are using it are the guinea pigs. Notes Bender: “You are working for OpenAI for free.”

What are black holes and where are they found?

Althea C. | Grade 11, Waimea High School, Hawaii

A black hole is a place in space where gravity is so strong that nothing, not even light, may come out.

ChatGPT: Black holes are extremely dense regions in space where the gravity is so strong that not even light can escape, and they are found throughout the universe.

ChatGPT’s academic performance

How good is ChatGPT in an academic setting? Catherine Gao, a doctor and medical researcher at Northwestern University’s Feinberg School of Medicine in Chicago, is part of one team of researchers that is putting the tool to the test.

Gao and her colleagues gathered 50 real abstracts from research papers in medical journals and then, after providing the titles of the papers and the journal names, asked ChatGPT to generate 50 fake abstracts. The team asked people familiar with reading and writing these types of research papers to identify which were which.

“I was surprised by how realistic and convincing the generated abstracts were,” Gao says. The reviewers mistook roughly one-third of the AI-generated abstracts as human-generated.

In another study, Will Yeadon and colleagues tested whether AI tools could pass a college exam. Yeadon, a physics instructor at Durham University in England, picked an exam from a course that he teaches. The test asks students to write five short essays about physics and its history. Students have an average score of 71 percent, which he says is equivalent to an A in the United States.

Yeadon used the tool davinci-003, a close cousin of ChatGPT. It generated 10 sets of exam answers. Then Yeadon and four other teachers graded the answers using their typical standards. The AI also scored an average of 71 percent. Unlike the human students, though, it had no very low or very high marks. It consistently wrote well, but not excellently. For students who regularly get bad grades in writing, Yeadon says, it “will write a better essay than you.”

These graders knew they were looking at AI work. In a follow-up study, Yeadon plans to use work from the AI and students and not tell the graders whose is whose.

What is heat?

Precious A. | Grade 6, Canyon Day Junior High School, Ariz.

Heat is the transfer of kinetic energy from one medium or object to another, or from an energy source to a medium or object through radiation, conduction and convection.

ChatGPT: Heat is a type of energy that makes things warmer. It can be produced by burning something or through electricity.

Tools to check for cheating

When it’s unclear whether ChatGPT wrote something or not, other AI tools may help. These tools typically train on AI-generated text and sometimes human-generated text as well. They can tell you how likely it is that text was composed by an AI. Many of the existing tools were trained on older language models, but developers are working quickly to put out new, improved tools.
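
A rough way to picture how such detectors are built: train an ordinary text classifier on labeled human-written and AI-generated samples, then read off a probability for new text. The toy below uses scikit-learn on four invented snippets, so its output means nothing in practice; it only shows the overall shape of the approach.

```python
# Toy AI-text detector: fit a classifier on labeled human and AI-generated
# examples, then report a probability for new text. Real detectors train on
# far larger corpora; the snippets and labels below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "ugh this homework took forever, i barely finished before practice",        # human
    "my favorite part of the trip was when our canoe almost tipped over",       # human
    "In conclusion, the topic encompasses several important considerations.",   # AI-like
    "Overall, these factors contribute to a comprehensive understanding of the subject.",  # AI-like
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

new_text = ["Furthermore, it is important to note the significance of these findings."]
print(detector.predict_proba(new_text)[0][1])  # estimated probability the text is AI-generated
```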

A company called Originality.ai sells access to a tool that trained on GPT-3. Founder Jon Gillham says that in a test of 10,000 samples of texts composed by models based on GPT-3, the tool tagged 94 percent of them correctly as AI-generated. When ChatGPT came out, his team tested a smaller set of 20 samples. Each only 500 words in length, these had been created by ChatGPT and other models based on GPT-3 and GPT-3.5. Here, Gillham says, the tool “tagged all of them as AI-generated. And it was 99 percent confident, on average.”

In late January 2023, OpenAI released its own free tool for spotting AI writing, cautioning that the tool was “not fully reliable.” The company is working to add watermarks to its AI text, which would tag the output as machine-generated, but doesn’t give details on how. Gillham describes one possible approach: Whenever it generates text, the AI ranks many different possible words for each position. If its developers told it to always choose the word ranked in third place rather than first place at specific points in its output, those words could act as a fingerprint, he says.
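
Gillham’s ranked-word idea can be sketched in a few lines: at agreed-upon positions the generator takes the third-ranked word instead of the first, and a checker later tests how often those positions match. The ranked word lists below are invented stand-ins for a model’s real predictions.

```python
# Toy word-choice watermark: at designated positions, emit the third-ranked
# candidate word instead of the top one, then look for that fingerprint later.
# The ranked candidate lists are invented; a real model would supply them.
WATERMARK_POSITIONS = {2, 5}  # positions where the third-ranked word is forced

FAKE_RANKINGS = [
    ["The", "A", "One"], ["cat", "dog", "bird"], ["sat", "slept", "perched"],
    ["on", "near", "under"], ["the", "a", "that"], ["mat", "rug", "sofa"],
]

def ranked_candidates(position: int) -> list[str]:
    # Stand-in for the model's ranked word predictions at each position.
    return FAKE_RANKINGS[position]

def generate(length: int = 6) -> list[str]:
    words = []
    for i in range(length):
        ranks = ranked_candidates(i)
        # Normally take the top-ranked word; at watermark positions take index 2.
        words.append(ranks[2] if i in WATERMARK_POSITIONS else ranks[0])
    return words

def watermark_score(words: list[str]) -> float:
    """Fraction of watermark positions whose word matches the third-ranked candidate."""
    hits = sum(words[i] == ranked_candidates(i)[2] for i in WATERMARK_POSITIONS)
    return hits / len(WATERMARK_POSITIONS)

text = generate()
print(" ".join(text))          # -> "The cat perched on the sofa"
print(watermark_score(text))   # -> 1.0, consistent with watermarked output
```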

As AI writing tools improve, the tools to sniff them out will need to improve as well. Eventually, some sort of watermark might be the only way to sort out true authorship.

What is DNA and how is it organized?

Luke M. | Grade 8, Eastern York Middle School, Pa.

DNA, or deoxyribonucleic acid, is kept inside the cells of living things, where it holds instructions for the genetics of the organism it is inhabiting.

ChatGPT: DNA is like a set of instructions that tells our cells what to do. It’s organized into structures called chromosomes, which contain all of the DNA in a cell.

ChatGPT and the future of writing

There’s no doubt we will soon have to adjust to a world in which computers can write for us. But educators have made these sorts of adjustments before. As high school student Rao points out, Google was once seen as a threat to education because it made it possible to look up facts instantly. Teachers adapted by coming up with teaching and testing materials that don’t depend as heavily on memorization.

Now that AI can generate essays and stories, teachers may once again have to rethink how they teach and test. Rao says: “We might have to shift our point of view about what’s cheating and what isn’t.”

Some teachers will prevent students from using AI by limiting access to technology. Right now, Vogelsinger says, teachers regularly ask students to write out answers or essays at home. “I think those assignments will have to change,” he says. But he hopes that doesn’t mean kids do less writing.

Teaching students to write without AI’s help will remain essential, agrees Zhai. That’s because “we really care about a student’s thinking,” he stresses. And writing is a great way to demonstrate thinking. Though ChatGPT can help a student organize their thoughts, it can’t think for them, he says.

Kids still learn to do basic math even though they have calculators (which are often on the phones they never leave home without), Zhai acknowledges. Once students have learned basic math, they can lean on a calculator for help with more complex problems.

In the same way, once students have learned to compose their thoughts, they could turn to a tool like ChatGPT for assistance with crafting an essay or story. Vogelsinger doesn’t expect writing classes to become editing classes, where students brush up AI content. He instead imagines students doing prewriting or brainstorming, then using AI to generate parts of a draft, and working back and forth to revise and refine from there.

Though he’s overwhelmed by the prospect of having to adapt his teaching to another new technology, he says he is “having fun” figuring out how to navigate the new tech with his students.

Rao doesn’t see AI ever replacing stories and other texts generated by humans. Why? “The reason those things exist is not only because we want to read it but because we want to write it,” she says. People will always want to make their voices heard.


The 74

7 Artificial Intelligence Trends That Could Reshape Education in 2024

The revolutionary role of AI in education has a major impact on teaching and learning. Here’s what to watch next year.


The future of education has never looked more creative and promising.

Since making its public debut last year, ChatGPT has profoundly impacted my perspective on generative AI in education. As a writer and former high school English teacher, I experienced an existential crisis watching the chatbot effortlessly generate lesson plans and rubrics — tasks that would have taken me hours to accomplish. 

Generative AI allows educators to move beyond traditional learning systems and provide a more responsive, personalized learning experience in which students demonstrate mastery, not just passing grades. 

“The future of AI in education is not just about adopting new technologies; it’s about reshaping our approach to teaching and learning in a way that is as dynamic and diverse as the students we serve,” said XQ Institute Senior Advisor Laurence Holt, who has also worked in the education, business and technology sectors. Through AI, we can also transcend the limitations of the Carnegie Unit — a century-old system in which a high school diploma is based on how much time students spend in specific subject classes.

Changing that rigid system is our mission at XQ. We work with schools and communities to transform high school learning so it’s more relevant and engaging while also preparing students to succeed in college, career and real life. We recently co-convened a two-day summit with the Emerson Collective, in partnership with MeshEd and Betaworks, to bring educators and innovators together in a collaborative space — envisioning ways to use AI technology for transforming high school redesign. Those ideas and insights are available to explore on this beta wiki page.

After a year’s worth of conversations and observations with educators, our AI convening, and EdTech Week 2023, there is much to share to help educators make the most of the rapidly evolving ecosystem of artificial intelligence. Here are seven AI-in-education trends to be aware of next year.

1. Professional Development 

Throughout 2023, demand for AI professional development for educators remained high. In 2024, we should see an avalanche of districts and schools providing their educators with AI professional development materials to integrate these tools into their teaching practices.

Sarah Wharton of MIT’s STEP Lab visited PSI High, an XQ school in Sanford, Florida, to present interesting ways to think about AI in the context of the school.

“We looked at ChatGPT as a possible tutor, personal assistant, creative tool and research assistant,” said PSI High School Coordinator Angela Daniel. “In our PD session, we considered how these cool applications could be used in classrooms as learning tools that accelerate learning and teach the tool simultaneously.”

Daniel explained that teaching students how to use AI is a first step that will change things for students going forward. “But to really get at the heart of that question, we need to understand how generative AI can change our processes and resources right now,” she added. For the team at PSI, that means learning how to use generative AI effectively with ongoing support as the application continues to evolve.

Workshops, online courses and collaborative learning communities are also increasingly popular for providing educators with hands-on experience in AI.


2. Formal AI Policy 

Integrating AI in classrooms is no longer a matter of “if” but “how,” making it imperative for educators and policymakers to navigate this terrain with informed and responsible strategies. However, the landscape of AI policy development — especially regarding education — has been dynamic, if not lagging.

The Council of Europe has continued providing critical insights for equitable policy and practice, an area where U.S. schools have been seeking guidance. New York City Public Schools, after initially banning ChatGPT, is now collaborating with academics, experts and school districts in the AI Policy Lab, focusing on issues such as privacy and cybersecurity. Recently, the Biden administration issued an executive order to guide the U.S. in leveraging artificial intelligence. This directive emphasized AI safety, privacy, equity and responsible use, signaling a shift in how AI is integrated into sectors like education. However, it is likely that AI policy in education will develop on a location-by-location basis first.

3. Open-Source Tool Development

Concerns about AI’s ethical implications and biases are sure to shape policy goals. One way to alleviate those pressures is the expansion and increased use of open-source tools — programs whose code is accessible and can be modified. The potential for AI to perpetuate biases is significant; however, expect the conversation to focus less on the output of AI tools and more on the kind of data they’re trained on.

Ensuring AI tools are equitable and inclusive goes beyond technical challenges — it requires continuous dialogue among educators, technologists and policymakers. This conversation is essential for addressing data privacy, surveillance and ethical use of student data. With a democratized marketplace, we could see open-source tools gain prominence as they grow in popularity.

4. Frameworks for Teaching AI

Before the start of the 2023-24 academic year, educators and schools were waiting for a framework to guide their integration of AI tools. As policy moves forward in 2024 and more institutions develop professional development materials to train and support educators, expect AI curricula frameworks to finally emerge. Frameworks like TeachAI are being developed to guide the integration of AI in education. These frameworks focus on aligning AI applications with educational goals and promoting equitable access to technology, ensuring that AI complements and enhances student learning experiences.

5. AI Literacy, Competencies and Standards

With AI becoming more prevalent in various sectors, including education, there’s a growing need to integrate AI literacy goals and specific learning outcomes into school curricula. This involves teaching students how to use AI tools and understand the basics of AI technology, its applications and its implications.

At the Purdue Polytechnic High School network, an XQ partner with three campuses in Indiana, CEO Keeanna Warren explained how equipping staff and students with the knowledge and skills to harness AI’s potential promotes effective and responsible use of AI to enhance learning experiences.

“We firmly believe that our students’ innate curiosity drives their desire to learn, and we trust their integrity,” she said. “If AI can be used for cheating, it reflects a flaw in the assessment, not in our students’ character.”

The challenge lies in integrating AI literacy into an already packed curriculum. At the same time, AI education creates a real opportunity to foster critical thinking, problem-solving and ethical reasoning skills.

6. AI-Powered Adaptive Learning Systems 

One of the more exciting pathways with AI is that student learning experiences will become more uniquely adaptive and personalized with a quicker turnaround. But creating effective programs requires training these systems on some level of student data – a delicate balance.

As policy formalizes how student data gets implemented into these programs, AI-driven adaptive learning systems will emerge to shift instructional practice. Expect these programs to appear prominently in assessments and curriculum packages before evolving into real-time feedback systems that can inform teachers even during a lesson.

7. Custom GPTs Built By Educators

While all these advancements are promising and exciting, the marketplace for AI-driven ed tech tools will become incredibly crowded very quickly. OpenAI recently launched a maker space for building and using custom GPTs, which are both built with and powered by ChatGPT, and it is poised to be a massive disruptor.

Ty Boyland, school-based enterprise coordinator and music production teacher at Crosstown High, designed a custom GPT. (Crosstown High is another XQ school in Memphis, Tennessee.) Boyland’s students use DALL-E, an AI system for generating images, with GPT-4 to create designs and prints for student-driven projects.

“But how do you create a project combining culinary and music production?” Boyland wondered. His customized GPT pairs XQ’s competencies with Tennessee State Standards to build a new project.
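
The article does not detail the students’ exact workflow, but the basic step of turning a text prompt into a design image looks roughly like the sketch below, which calls the OpenAI Images API with an assumed model name and an invented prompt.

```python
# Minimal sketch of generating a project design image from a text prompt with
# the OpenAI Images API. The model name, prompt, and size are illustrative;
# this is not the specific workflow used at Crosstown High.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="A screen-print style poster combining culinary arts and music production",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```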

It will be interesting to see what educators create in this space to resolve pain points teachers and schools are intimately familiar with and what gets made to help schools achieve their vision and mission.  

The Bottom Line for Educators

From policy shifts emphasizing equity and privacy to the emergence of AI-driven curricula, the transformation is palpable. We’ve seen how AI can revolutionize and disrupt classroom practices, empower educators through professional development, and create inclusive, personalized student learning experiences. But the burgeoning AI ed tech market demands discernment. Educators must navigate this space wisely, choosing tools that genuinely enhance learning and align with ethical standards.

As we enter 2024, educators and stakeholders face a challenge: keeping pace with AI and engaging with it thoughtfully to catalyze educational excellence instead of just putting a new face on old practices. It’s the primary reason we at XQ convened so many educators and innovators into one space: to rethink high school by harnessing the potential of our AI-powered future. We look forward to sharing more with you in the coming year.

Do you want to learn more about how to rethink high school? The XQ Xtra is a newsletter for educators that comes out twice a month. Sign up here .

Disclosure: The XQ Institute is a financial supporter of The 74 .

Edward Montalvo is the senior education writer at The XQ Institute. He was previously Dean of Students at PSI High in Sanford, Florida.


This story first appeared at The 74, a nonprofit news site covering education.


eSchool News

Impact of Artificial Intelligence in Education

AI’s potential benefits are immense, but there are also ethical considerations and concerns about data privacy.

Key points:

  • AI is a powerful catalyst for enhancing teaching effectiveness in many ways
  • AI augments teaching by offering real-time insights, and fostering PD

Artificial intelligence is reshaping the landscape of education, ushering in a new era of innovation and transformation. AI technologies are reshaping traditional educational models, offering innovative tools that adapt to individual student needs, streamline administrative tasks, and provide valuable insights through data analytics. From intelligent tutoring systems to immersive virtual reality experiences, AI is transforming how knowledge is imparted and acquired. While the potential benefits are immense, there are also ethical considerations, concerns about data privacy, and challenges associated with equitable access.

This dynamic interplay between technological advancement and educational evolution underscores the significance of understanding and harnessing the impact of AI to create a more adaptive, inclusive, and effective learning environment.

How artificial intelligence in education influences teaching effectiveness

To analyze the benefits of artificial intelligence in education, we must acknowledge that AI is a powerful catalyst for enhancing teaching effectiveness in numerous ways. Firstly, AI enables personalized learning experiences by analyzing individual student data, allowing educators to tailor instruction to diverse learning styles and pace. This adaptability fosters a deeper understanding of subjects among students.

AI supports teachers in administrative tasks, such as grading and assessment, freeing up valuable time. Automated grading systems powered by AI streamline routine tasks, enabling educators to focus on interactive teaching methods, mentorship, and targeted interventions for struggling students.
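
As a rough illustration of what automated scoring can look like under the hood, the sketch below compares a student’s short answer to a teacher-written reference answer using TF-IDF cosine similarity and scales the result to points. The reference answer, responses, and 10-point scale are invented, and real grading systems are considerably more sophisticated.

```python
# Toy automated short-answer scoring: measure how similar a response is to a
# teacher-written reference answer and scale the similarity to points.
# All texts and the 10-point scale below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts sunlight, water, and carbon dioxide into glucose and oxygen."
responses = [
    "Plants use sunlight, water, and carbon dioxide to make glucose and release oxygen.",
    "It is when plants grow in the spring.",
]

vectorizer = TfidfVectorizer().fit([reference] + responses)
ref_vec = vectorizer.transform([reference])

for response in responses:
    similarity = cosine_similarity(ref_vec, vectorizer.transform([response]))[0, 0]
    print(f"{round(similarity * 10, 1)}/10  {response}")
```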

Intelligent tutoring systems, driven by AI, provide real-time feedback and insights into student performance. This allows teachers to identify learning gaps promptly and address them proactively. AI also offers data-driven analytics, empowering educators with valuable insights into student progress and areas that require attention.

AI contributes to professional development by providing teachers with innovative tools and resources. It facilitates continuous learning and updates on best practices, ensuring educators stay informed about evolving educational methodologies.

In essence, AI augments teaching effectiveness by providing personalized support, automating routine tasks, offering real-time insights, and fostering ongoing professional development. As a collaborative partner, AI empowers educators to create dynamic and engaging learning environments, ultimately benefiting both teachers and students in the educational journey.

How will artificial intelligence impact the way we teach?

AI is poised to revolutionize the way we teach by introducing innovative approaches that enhance educational outcomes. One significant aspect of the impact of artificial intelligence on education is the personalization of learning experiences. AI algorithms analyze individual student data, allowing for tailored content delivery based on students’ unique needs and learning styles. This adaptability ensures a more engaging and effective learning environment.

AI facilitates a shift from a one-size-fits-all approach to personalized learning plans. Intelligent tutoring systems provide real-time feedback, identifying areas where students may struggle and offering targeted assistance. This not only addresses individual learning gaps but also promotes a self-paced and mastery-based approach to education.
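
One simple way to picture such a system is a mastery-based practice loop: keep a running mastery estimate per skill, update it after each attempt, and always serve the weakest skill that is still below target. The skills, update sizes, and threshold below are arbitrary choices for illustration.

```python
# Toy mastery-based practice loop: nudge a per-skill mastery estimate up or
# down after each answer, and pick the weakest unmastered skill to serve next.
# The skills, step sizes, and 0.8 threshold are arbitrary illustrative values.
MASTERY_THRESHOLD = 0.8

mastery = {"fractions": 0.3, "decimals": 0.6, "percentages": 0.5}

def record_attempt(skill: str, correct: bool) -> None:
    """Move the mastery estimate toward 1.0 on a correct answer, toward 0.0 otherwise."""
    step = 0.2 if correct else -0.1
    mastery[skill] = min(1.0, max(0.0, mastery[skill] + step))

def next_skill() -> str | None:
    """Pick the weakest skill still below the threshold, or None if all are mastered."""
    unmastered = {skill: level for skill, level in mastery.items() if level < MASTERY_THRESHOLD}
    return min(unmastered, key=unmastered.get) if unmastered else None

record_attempt("fractions", correct=True)
record_attempt("fractions", correct=True)
print(next_skill())  # -> "percentages", now the weakest remaining skill
print(mastery)       # fractions is now about 0.7; decimals and percentages unchanged
```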

Moreover, AI introduces data-driven insights into teaching practices. Educators can leverage analytics to assess student progress, identify effective teaching strategies, and make informed decisions to optimize learning outcomes.

As AI becomes more integrated into education, teachers will transition into roles that emphasize mentorship, creativity, and personalized guidance. The human touch in teaching, coupled with the capabilities of AI, will create a symbiotic relationship, ensuring that education remains a collaborative and enriching experience. Embracing these advancements, educators will become facilitators of personalized learning journeys, preparing students for success in an ever-evolving, technology-driven world.

Does AI benefit or hurt the field of education?

The pros and cons of AI in education are nuanced, with many factors influencing the educational landscape.

Positive aspects:

  • Personalized learning: AI enables personalized learning experiences, tailoring educational content to individual student needs and learning styles. This adaptability can enhance comprehension and engagement.
  • Efficiency and time saving: Automated grading systems powered by AI streamline administrative tasks, allowing educators to allocate more time to interactive teaching methods, mentorship, and targeted interventions for students.
  • Global accessibility: AI facilitates online education, providing access to quality learning resources and courses globally. This inclusivity is particularly beneficial for students in remote or underserved areas.

Negative aspects:

  • Bias and inequity: If AI algorithms are trained on biased data, they may perpetuate existing inequities in education. This raises concerns about fairness and equitable access to educational opportunities.
  • Loss of human connection: Critics argue that an overreliance on AI might lead to a diminished human element in education. The emotional and social aspects of learning, crucial for holistic development, may be compromised.
  • Privacy concerns: The collection and analysis of extensive student data raise privacy issues. Ensuring robust safeguards for the ethical and secure handling of sensitive information is crucial.

The impact of AI on education depends on how it is implemented and integrated. When used responsibly, AI has the potential to enhance personalized learning, streamline administrative tasks, and increase educational accessibility. However, addressing concerns related to bias, privacy, and the potential loss of human connection is essential for ensuring that AI contributes positively to the educational experience.

How AI is shaping the future of education

The impact of artificial intelligence in education is far-reaching, shaping the future of K-12 education by introducing transformative changes that enhance learning experiences, improve efficiency, and foster personalized education:

  • Personalized learning: AI-powered adaptive learning platforms analyze individual student data to customize educational content, catering to diverse learning styles and paces. This personalization ensures that students receive tailored instruction, promoting a deeper understanding of subjects.
  • Data-driven insights: AI provides educators with valuable data-driven insights into student performance and learning patterns. This information helps in making informed decisions, refining teaching strategies, and identifying areas that may require additional attention.
  • Global accessibility: AI facilitates online education, enabling students to access educational resources and courses from anywhere in the world. This inclusivity expands educational opportunities, especially for students in remote or underserved areas.
  • Preparation for future skills: AI-driven educational tools, including virtual tutors and gamified learning platforms, foster the development of critical thinking, problem-solving, and digital literacy skills, preparing students for the evolving demands of the future workforce.

AI is shaping the future of K-12 education by creating a more personalized, efficient, and globally accessible learning environment. As technology continues to advance, the thoughtful integration of AI holds the potential to optimize educational outcomes, equip students with essential skills, and prepare them for success in an increasingly digital and interconnected world.

How can AI disrupt education?

AI has the potential to disrupt education significantly, challenging traditional paradigms and reshaping the learning landscape. Here is what the future of AI in education might bring:

  • Personalized learning revolution: AI can tailor educational content to individual student needs, preferences, and learning styles, challenging the one-size-fits-all model. This disruption fosters a more personalized and adaptive learning environment, addressing diverse learning needs.
  • Shift in pedagogical approaches: AI-driven intelligent tutoring systems can challenge traditional pedagogical approaches. These systems offer real-time feedback and insights, influencing a move towards student-centric, data-driven teaching methodologies.
  • Global accessibility and inclusivity: AI facilitates online education, disrupting geographical barriers and expanding access to learning resources globally. This inclusivity challenges traditional notions of education delivery, making quality education accessible beyond traditional classrooms.
  • Emergence of new educational models: AI’s influence extends to the emergence of new educational models, such as virtual classrooms, adaptive learning platforms, and gamified learning experiences. These innovations disrupt traditional classroom structures and methodologies.
  • Evolving teacher roles: AI’s introduction may lead to a redefinition of the teacher’s role. Educators may transition from traditional lecturers to facilitators of personalized learning journeys, emphasizing mentorship, creativity, and emotional support.

While these disruptions offer immense potential for positive transformation, they also raise concerns about equity, privacy, and the ethical use of data. Managing these challenges responsibly is crucial to harness the full benefits of AI in education and ensure a positive and inclusive learning future.

How does AI help teachers?

AI serves as a valuable ally for educators, offering a range of tools and capabilities that enhance teaching effectiveness and streamline various aspects of the educational process. Here is a sample of how AI can improve teachers’ instructional approaches:

  • Personalized learning: AI analyzes individual student data to tailor educational content, addressing diverse learning styles. This personalized approach allows teachers to cater to the unique needs of each student, fostering a deeper understanding of subjects.
  • Real-time feedback and interventions: Intelligent tutoring systems powered by AI offer real-time feedback on student performance. Teachers can use this data to identify learning gaps promptly, allowing for timely interventions and targeted support.
  • Professional development: AI provides educators with continuous professional development opportunities. Access to innovative teaching resources, data-driven insights, and collaborative platforms empowers teachers to stay informed about evolving educational methodologies and refine their teaching practices.
  • Enhanced teaching strategies: AI-driven analytics offer insights into teaching strategies that are most effective for individual students or entire classrooms. This data-driven approach enables teachers to refine their methods, adapting to the evolving needs of their students.
  • Inclusive education: AI supports inclusive education by providing tools for early identification of learning challenges and offering targeted interventions. This proactive approach ensures that all students, including those with diverse learning needs, receive the support they require.

AI assists teachers by providing personalized learning experiences, automating administrative tasks, offering real-time feedback, facilitating professional development, enhancing teaching strategies, and promoting inclusivity in education. As a collaborative partner, AI contributes to a more dynamic and effective teaching environment, empowering educators to create impactful and personalized learning journeys for their students.

What are the disadvantages of artificial intelligence for children?

AI use in education for children presents potential disadvantages that warrant careful consideration. Among the negative effects of artificial intelligence in education is the risk of reinforcing biases inherent in training data, leading to discriminatory outcomes. Additionally, excessive reliance on AI could compromise the development of critical thinking and creativity by promoting a standardized approach to learning. Privacy issues emerge as AI systems collect and analyze extensive data, raising concerns about the security of children’s information.

Furthermore, the digital divide may exacerbate inequalities, as not all children have equal access to AI-driven educational tools. Striking a balance between the benefits and potential drawbacks of AI in education for children is essential to ensure responsible and equitable implementation.

What is the role of artificial intelligence in teaching and learning?

The role of AI in teaching and learning and of AI tools for education is transformative, reshaping traditional educational paradigms. In teaching, AI acts as a supportive tool, automating administrative tasks like grading, allowing educators to focus on interactive and personalized instruction. AI-driven tutoring systems offer tailored feedback and adapt to individual student needs, enhancing the learning experience. Moreover, AI facilitates the creation of dynamic and engaging educational content, catering to diverse learning styles.

In learning, AI provides personalized pathways, adapting to each student’s pace and preferences. Intelligent content delivery systems utilize data analytics to identify areas of strength and weakness, enabling a targeted approach to skill development. Virtual reality and simulations powered by AI offer immersive learning environments, making complex subjects more accessible and practical.

While AI streamlines educational processes, challenges include ethical concerns, potential biases in algorithms, and the need for responsible AI use. The evolving role of educators involves collaboration with AI tools, emphasizing the human touch in fostering critical thinking and creativity. Overall, AI in teaching and learning holds the promise of a more adaptive, personalized, and inclusive educational experience, preparing students for the complexities of the modern world.

Educators can embrace the transformative power of AI in education by taking collective action to shape a future-ready learning environment.

Educators, policymakers, parents, and other stakeholders can unite in fostering responsible AI integration: prioritizing ongoing professional development so educators can harness AI’s potential, and advocating for equitable access to AI-driven resources so that all students benefit from this technological evolution. Industry leaders can invest in collaborative initiatives to develop innovative AI tools aligned with educational goals. Together, educators and stakeholders can unlock the full potential of AI, creating personalized, inclusive, and impactful learning experiences. A collaborative commitment will shape an education landscape where AI empowers learners, nurtures creativity, and prepares students for the challenges of an evolving world.


General Assembly adopts landmark resolution on artificial intelligence

A view of the General Assembly in session. (file)


The UN General Assembly on Thursday adopted a landmark resolution on the promotion of “safe, secure and trustworthy” artificial intelligence (AI) systems that will also benefit sustainable development for all.

Adopting a United States-led draft resolution without a vote, the Assembly also highlighted the respect, protection and promotion of human rights in the design, development, deployment and the use of AI.

The text was “co-sponsored” or backed by more than 120 other Member States.

The General Assembly also recognized AI systems’ potential to accelerate and enable progress towards reaching the 17 Sustainable Development Goals.

It represents the first time the Assembly has adopted a resolution on regulating the emerging field. The US National Security Advisor reportedly said earlier this month that the adoption would represent an “historic step forward” for the safe use of AI.


Same rights, online and offline

The Assembly called on all Member States and stakeholders “to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights.”

“The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” it affirmed.

The Assembly also urged all States, the private sector, civil society, research organizations and the media, to develop and support regulatory and governance approaches and frameworks related to safe, secure and trustworthy use of AI.

Closing the digital divide

The Assembly further recognized the “varying levels” of technological development between and within countries, and that developing nations face unique challenges in keeping up with the rapid pace of innovation.

It urged Member States and stakeholders to cooperate with and support developing countries so they can benefit from inclusive and equitable access, close the digital divide, and increase digital literacy.

Hope for other sectors

Speaking before the adoption, Linda Thomas-Greenfield, US Ambassador and Permanent Representative to the UN, introduced the draft resolution.

She expressed hope that the “inclusive and constructive dialogue that led to this resolution would serve as a model for future conversations on AI challenges in other arenas, for example, with respect to peace and security and responsible military use of AI autonomy.”

Ms. Thomas-Greenfield noted that the resolution was designed to amplify the work already being done by the UN, including the International Telecommunication Union (ITU), the UN Educational, Scientific and Cultural Organization (UNESCO) and the Human Rights Council.

“We intend for it to complement future UN initiatives, including negotiations toward a global digital compact and the work of the Secretary-General’s high-level advisory body on artificial intelligence,” she said.

We govern AI

Ms. Thomas-Greenfield also highlighted the opportunity and the responsibility of the international community “to govern this technology rather than let it govern us”.

“So let us reaffirm that AI will be created and deployed through the lens of humanity and dignity, safety and security, human rights and fundamental freedoms,” she said.

“Let us commit to closing the digital gap within and between nations and using this technology to advance shared priorities around sustainable development.”


A.I. Here, There, Everywhere

Many of us already live with artificial intelligence now, but researchers say interactions with the technology will become increasingly personalized.

By Craig S. Smith

This article is part of our new series, Currents , which examines how rapid advances in technology are transforming our lives.

I wake up in the middle of the night. It’s cold.

“Hey, Google, what’s the temperature in Zone 2,” I say into the darkness. A disembodied voice responds: “The temperature in Zone 2 is 52 degrees.” “Set the heat to 68,” I say, and then I ask the gods of artificial intelligence to turn on the light.

Many of us already live with A.I., an array of unseen algorithms that control our Internet-connected devices, from smartphones to security cameras and cars that heat the seats before you’ve even stepped out of the house on a frigid morning.

But, while we’ve seen the A.I. sun, we have yet to see it truly shine.

Researchers liken the current state of the technology to cellphones of the 1990s: useful, but crude and cumbersome. They are working on distilling the largest, most powerful machine-learning models into lightweight software that can run on “the edge,” meaning small devices such as kitchen appliances or wearables. Our lives will gradually be interwoven with brilliant threads of A.I.

Our interactions with the technology will become increasingly personalized. Chatbots, for example, can be clumsy and frustrating today, but they will eventually become truly conversational, learning our habits and personalities and even developing personalities of their own. But don’t worry: the fever dreams of superintelligent machines taking over, like HAL in “2001: A Space Odyssey,” will remain science fiction for a long time to come; consciousness, self-awareness and free will in machines are far beyond the capabilities of science today.

Privacy remains an issue, because artificial intelligence requires data to learn patterns and make decisions. But researchers are developing methods to use our data without actually seeing it — so-called federated learning, for example — or encrypt it in ways that currently can’t be hacked.
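
To make the federated-learning idea concrete, here is a toy sketch of federated averaging, assuming a simple logistic-regression model and synthetic data invented for the example: each simulated device trains on its own data, and only the resulting weights (never the raw data) are sent back and averaged by a coordinator. It illustrates the general idea, not any production system.

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """A few logistic-regression gradient steps on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))        # sigmoid
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One round of federated averaging: each client trains locally and only
    the resulting weights are shared and averaged."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Four simulated devices, each with its own private (synthetic) data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.integers(0, 2, 20).astype(float))
           for _ in range(4)]

weights = np.zeros(3)
for _ in range(10):                  # ten communication rounds
    weights = federated_round(weights, clients)
print(weights)
```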

Our homes and our cars will increasingly be watched over with A.I.-integrated sensors. Some security cameras today use A.I.-enabled facial recognition software to identify frequent visitors and detect strangers. But soon, networks of overlapping cameras and sensors will create a mesh of “ambient intelligence” that will be available to monitor us all the time, if we want it. Ambient intelligence could recognize changes in behavior and prove a boon to older adults and their families.

“Intelligent systems will be able to understand the daily activity patterns of seniors living alone, and catch early patterns of medically relevant information,” said Fei-Fei Li, a Stanford University computer science professor and a co-director of the Stanford Institute for Human-Centered Artificial Intelligence who was instrumental in sparking the current A.I. revolution. While she says much work remains to be done to address privacy concerns, such systems could detect signs of dementia, sleep disorders, social isolation, falls and poor nutrition, and notify caretakers.

Streaming services such as Netflix or Spotify already use A.I. to learn your preferences and feed you a steady diet of enticing entertainment. Google Play uses A.I. to recommend mood music that matches the time and weather. A.I. is being used to bring old films into focus and bring black-and-white into color and even add sound to silent movies. It’s also improving streaming speed and consistency. Those spinning animations that indicate a computer is stuck on something may soon be a relic of the past that people will recall with fondness, the way many of us do with TV “snow” today.

Increasingly, more of the media we consume will actually be generated by A.I. Google’s open-source Magenta project has created an array of applications that make music indistinguishable from that of human composers and performers.

The research institute OpenAI has created MuseNet, which uses artificial intelligence to blend different styles of music into new compositions. The institute also has Jukebox, which creates new songs when given a genre, artist and lyrics, which in some cases are co-written by A.I.

These are early efforts, achieved by feeding millions of songs into networks of artificial neurons, made from strings of computer code, until they internalize patterns of melody and harmony, and can recreate the sound of instruments and voices.

Musicians are experimenting with these tools today and a few start-ups are already offering A.I.-generated background music for podcasts and video games.

Artificial intelligence is as abstract as thought, written in computer code, but people imagine A.I. embodied in humanoid form. Robotic hardware has a lot of catching up to do, however. Realistic, A.I.-generated avatars will have A.I.-generated conversations and sing A.I.-generated songs, and even teach our children. Deepfakes also exist, where the face and voice of one person, for example, are transposed onto a video of another. We’ve also seen realistic A.I.-generated faces of people who don’t exist.

Researchers are working on combining the technologies to create realistic 2D avatars of people who can interact in real time, showing emotion and making context-relevant gestures. A Samsung-associated company called Neon has introduced an early version of such avatars, though the technology has a long way to go before it is practical to use.

Such avatars could help revolutionize education. Artificial intelligence researchers are already developing A.I. tutoring systems that can track student behavior, predict their performance and deliver content and strategies to both improve that performance and prevent students from losing interest. A.I. tutors hold the promise of truly personalized education available to anyone in the world with an Internet-connected device — provided they are willing to surrender some privacy.
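
A classic building block behind the tutoring systems described above is Bayesian Knowledge Tracing, which keeps a running estimate of whether a student has mastered a skill and uses it to predict the next answer. The sketch below is a minimal version with made-up guess, slip, and learning-rate parameters; it is not a claim about how any system named in this article actually works.

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: update the estimated probability
    that a student has mastered a skill after observing a single answer.
    The guess, slip, and learning rates are illustrative assumptions."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Either way, the student may also have just learned the skill.
    return posterior + (1 - posterior) * p_learn

def predicted_correct(p_know, p_guess=0.2, p_slip=0.1):
    """Predict the chance the next answer is correct, given current mastery."""
    return p_know * (1 - p_slip) + (1 - p_know) * p_guess

p = 0.3                                   # prior belief the skill is known
for answer in [False, True, True, True]:
    p = bkt_update(p, answer)
    print(round(p, 3), round(predicted_correct(p), 3))
```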

“Having a visual interaction with a face that expresses emotions, that expresses support, is very important for teachers,” said Yoshua Bengio, a professor at the University of Montreal and the founder of Mila, an artificial intelligence research institute. Korbit, a company founded by one of his students, Iulian Serban, and Riiid, based in South Korea, are already using this technology in education, though Mr. Bengio says it may be a decade or more before such tutors have natural language fluidity and semantic understanding.

There are seemingly endless ways in which artificial intelligence is beginning to touch our lives, from discovering new materials to new drugs — A.I. has already played a role in the development of Covid-19 vaccines by narrowing the field of possibilities for scientists to search — to picking the fruit we eat and sorting the garbage we throw away. Self-driving cars work; they’re just waiting for laws and regulations to catch up with them.

Artificial intelligence is even starting to write software and may eventually write more complex A.I. Diffblue, a start-up out of Oxford University, has an A.I. system that automates the writing of software tests, a task that takes up as much as a third of expensive developers’ time. Justin Gottschlich, who runs the machine programming research group at Intel Labs, envisions a day when anyone can create software simply by telling an A.I. system clearly what they want the software to do.

“I can imagine people like my mom creating software,” he said, “even though she can’t write a line of code.”
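
The general idea of automating test writing can be sketched in a few lines: record what a function currently returns for some sample inputs and emit a regression test that pins down that behavior. The Python sketch below is only a toy analogue (Diffblue’s product targets Java and relies on far more sophisticated program analysis), and the slugify function and inputs are invented for the example.

```python
def make_regression_test(func, example_inputs):
    """Emit the text of a pytest-style test that pins down the function's
    current behavior on some sample inputs (a toy take on automated
    test generation)."""
    name = func.__name__
    lines = [f"def test_{name}():"]
    for args in example_inputs:
        expected = func(*args)            # record today's output as expected
        arg_text = ", ".join(repr(a) for a in args)
        lines.append(f"    assert {name}({arg_text}) == {expected!r}")
    return "\n".join(lines)

def slugify(title):
    return "-".join(title.lower().split())

print(make_regression_test(slugify, [("Hello World",), ("AI in Education",)]))
```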

Craig S. Smith is a former correspondent for The Times and hosts the podcast “Eye on A.I.”


The UN adopts a resolution backing efforts to ensure artificial intelligence is safe


FILE - U.S. Ambassador Linda Thomas-Greenfield addresses a meeting of the United Nations Security Council on maintenance of international peace and security nuclear disarmament and non-proliferation, Monday, March 18, 2024, at U.N. headquarters. The General Assembly is set to vote Thursday, March 21, on what would be the first U.N. resolution on artificial intelligence. Thomas-Greenfield told The Associated Press last week that the resolution “aims to build international consensus on a shared approach to the design, development, deployment and use of AI systems,” particularly to support the 2030 U.N. goals. (AP Photo/Eduardo Munoz Alvarez, File)


UNITED NATIONS (AP) — The General Assembly approved the first United Nations resolution on artificial intelligence Thursday, giving global support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”

The resolution, sponsored by the United States and co-sponsored by 123 countries, including China, was adopted by consensus with a bang of the gavel and without a vote, meaning it has the support of all 193 U.N. member nations.

U.S. Vice President Kamala Harris and National Security Advisor Jake Sullivan called the resolution “historic” for setting out principles for using artificial intelligence in a safe way. Secretary of State Antony Blinken called it “a landmark effort and a first-of-its-kind global approach to the development and use of this powerful emerging technology.”

“AI must be in the public interest – it must be adopted and advanced in a way that protects everyone from potential harm and ensures everyone is able to enjoy its benefits,” Harris said in a statement.


At last September’s gathering of world leaders at the General Assembly, President Joe Biden said the United States planned to work with competitors around the world to ensure AI was harnessed “for good while protecting our citizens from this most profound risk.”


Over the past few months, the United States worked with more than 120 countries at the United Nations — including Russia, China and Cuba — to negotiate the text of the resolution adopted Thursday.

“In a moment in which the world is seen to be agreeing on little, perhaps the most quietly radical aspect of this resolution is the wide consensus forged in the name of advancing progress,” U.S. Ambassador Linda Thomas-Greenfield told the assembly just before the vote.

“The United Nations and artificial intelligence are contemporaries, both born in the years following the Second World War,” she said. “The two have grown and evolved in parallel. Today, as the U.N. and AI finally intersect we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us.”

At a news conference after the vote, ambassadors from the Bahamas, Japan, the Netherlands, Morocco, Singapore and the United Kingdom enthusiastically supported the resolution, joining the U.S. ambassador who called it “a good day for the United Nations and a good day for multilateralism.”

Thomas-Greenfield said in an interview with The Associated Press that she believes the world’s nations came together in part because “the technology is moving so fast that people don’t have a sense of what is happening and how it will impact them, particularly for countries in the developing world.”

“They want to know that this technology will be available for them to take advantage of it in the future, so this resolution gives them that confidence,” Thomas-Greenfield said. “It’s just the first step. I’m not overplaying it, but it’s an important first step.”

The resolution aims to close the digital divide between rich developed countries and poorer developing countries and make sure they are all at the table in discussions on AI. It also aims to make sure that developing countries have the technology and capabilities to take advantage of AI’s benefits, including detecting diseases, predicting floods, helping farmers and training the next generation of workers.

The resolution recognizes the rapid acceleration of AI development and use and stresses “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems.”

It also recognizes that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches. And it stresses that innovation and regulation are mutually reinforcing — not mutually exclusive.

Big tech companies generally have supported the need to regulate AI, while lobbying to ensure any rules work in their favor.

European Union lawmakers gave final approval March 13 to the world’s first comprehensive AI rules , which are on track to take effect by May or June after a few final formalities.

Countries around the world, including the U.S. and China, and the Group of 20 major industrialized nations are also moving to draw up AI regulations. The U.N. resolution takes note of other U.N. efforts including by Secretary-General António Guterres and the International Telecommunication Union to ensure that AI is used to benefit the world. Thomas-Greenfield also cited efforts by Japan, India and other countries and groups.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding but they are a barometer of world opinion.

The resolution encourages all countries, regional and international organizations, tech communities, civil society, the media, academia, research institutions and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.

It warns against “improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law.”

A key goal, according to the resolution, is to use AI to help spur progress toward achieving the U.N.’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The resolution calls on the 193 U.N. member states and others to assist developing countries to access the benefits of digital transformation and safe AI systems. It “emphasizes that human rights and fundamental freedoms must be respected, protected and promoted through the life cycle of artificial intelligence systems.”


Generative AI @ Harvard

Generative AI tools are changing the way we teach, learn, research, and work. Explore Harvard's work on the frontier of GenAI.


The Outsourced Mind: AI, Democracy, and the Future of Human Control

The pace of change in the development of Artificial Intelligence is breathtaking, and we are rapidly delegating more and more tasks to it. In this talk two philosophers explore some aspects of these trends: the role of AI in democratic decision making, and its role in a range of areas where human control has so far seemed essential, such as in the military and in criminal justice.

Harvard Health Systems Innovation Lab Hackathon 2024

The Harvard Health Systems Innovation Lab is organising its 5th Health Systems Innovation Hackathon. This year’s theme is "Building High-Value Health Systems: Harnessing Digital Health and Artificial Intelligence."

Kempner Institute Open House

An open house to celebrate the opening of the Kempner Institute's new space on the 6th floor of the SEC. Open to everyone in the Harvard community; please use your Harvard email address to register.

Evaluating the Science of Geospatial AI

Hosted by the Center for Geographic Analysis, Harvard University, this conference will bring together GIScientists with expertise and interest in AI to examine the current conditions, opportunities, and connections between Artificial Intelligence and Geographic Information Science.


Artificial Intelligence Act: MEPs adopt landmark law  

  • Safeguards on general purpose artificial intelligence  
  • Limits on the use of biometric identification systems by law enforcement  
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities  
  • Right of consumers to launch complaints and receive meaningful explanations  

Personal identification technologies in street surveillance cameras

On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack. Using such systems post-facto (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).

The Artificial Intelligence Act responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE), most concretely to proposal 12(10) on enhancing EU’s competitiveness in strategic sectors, proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control, proposal 35 on promoting digital innovation, (3) while ensuring human oversight and (8) trustworthy and responsible use of AI, setting safeguards and ensuring transparency, and proposal 37 (3) on using AI and digital tools to improve citizens’ access to information, including persons with disabilities.

