AI in Education: Learning from a Flawed Product
Ignoring the warning signs for the sake of “technological progress”
The development and use of technology has long outpaced the rate at which social, economic, and legal systems can adapt to it. Though the World Wide Web opened to the public in 1991, it wasn’t until fifteen years later that the Internet Governance Forum was established to address internet-related policy concerns and needs at an international level. The proliferation of generative AI chatbots built on Large Language Models (LLMs) within just a few short years has pushed existing societal systems into “uncharted territory.” Not only has AI derailed much of our progress in combating climate change (the data centers used to power ChatGPT alone consume more electricity in one year than 117 individual countries, and that usage is expected to double within the next five years), it has upended the education system, leaving parents, teachers, and system leaders struggling to catch up. AI has provided a new avenue through which for-profit companies can target the education system, pushing for investments in the billions for a flawed product with questionable ethics.
Current iterations of LLMs cannot think, nor can they individualize education curricula as is often advertised. Instead, they process; they predict. An LLM predicts each word, and each following word, based on statistical patterns in vast amounts of data. This results in a predisposition to falsify information in a problematic phenomenon known as “hallucinations”. Ask an LLM a fake but genuine-sounding question, and it will give you an equally fake but genuine-sounding answer with words that exude confidence.
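To make that prediction mechanism concrete, here is a minimal sketch of next-word prediction built from nothing but counted word pairs. The toy corpus and function names are made up for illustration, and real LLMs use neural networks with billions of parameters rather than simple counts, but the core idea is the same: the model picks a statistically likely next word, with no notion of whether the result is true.

```python
# Minimal, illustrative sketch of next-word prediction from counted word pairs.
# The corpus and names here are hypothetical; real LLMs are vastly more complex,
# but they too choose "likely" words rather than "true" ones.
from collections import Counter, defaultdict
import random

corpus = "the court ruled on the case the court dismissed the case".split()

# Count which word tends to follow which (a toy "bigram" model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_words[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a confident-sounding sentence one predicted word at a time.
word, sentence = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```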
Popular usage of AI has put an uncomfortable spotlight on the weaknesses of the current education system, especially regarding how learning and proficiency are assessed. How education handles AI will set an important precedent and serve as a model for the rest of society. Waiting fifteen years from the public release of ChatGPT to put the necessary legal protections in place on an international scale will be far too late. Already, 62% of compulsory education students and 86% of higher education students have used AI tools in their studies. Yet more than half of the education institutions in the US do not have an AI strategy in place.
In the future, writing something like this may take certain skills that many students currently relying on generative AI will no longer have: a muscle atrophied from disuse. The question shouldn’t be “How can we utilize AI in education?” It should be “What does AI tell us about our education system, and where does it need to be changed or strengthened?”
The AI lies (or more accurately, hallucinates)
Generative AI models are trained on vast quantities of data sourced from the internet. As the internet is full of biased, false, or misleading information, so too are the technology tools born from it. A high-profile legal case in 2023 showcased some of these flaws when a lawyer used ChatGPT to do his legal research. The resulting brief cited several relevant court decisions that, it turned out, didn’t exist. This happens because LLMs are programmed to predict the most likely words, and the most likely words don’t necessarily have to be true. And therein lies the problem. AI hallucinations have gone so far as to claim that a user had killed his own children, to suggest sticking cheese to pizza using glue, and to present falsified headlines as real news.
In just a few years, AI-generated content has flooded the internet at a scale humans cannot match or moderate. AI-generated content already outnumbers human-written content.
LLMs trained on this new data end up in “model collapse,” whereby generative AI models degrade into producing gibberish after repeatedly consuming synthetic content. Over time, LLMs lose the true underlying data distribution, and a recent study found that this process is inevitable when models train indiscriminately on generated data.
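A toy simulation can illustrate the dynamic (this is an illustration only, not the study’s actual method). Here a trivial “model” learns just the mean and spread of its training data, generates synthetic data from what it learned, and is then retrained on that synthetic data, so that estimation errors compound from one generation to the next.

```python
# Illustrative sketch of "model collapse" in miniature (not the cited study's method).
# Each generation, a toy model fits only the mean and spread of its training data,
# then the next generation is trained purely on the toy model's own synthetic output.
# Estimation error compounds, so the learned distribution drifts away from the
# original human data and its spread tends to shrink toward nothing.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(25)]  # the original "human" data

for generation in range(20):
    mu = statistics.mean(data)       # fit the toy model to the current data
    sigma = statistics.pstdev(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.2f}, spread={sigma:.2f}")
    # Retrain the next generation only on synthetic samples from the current model.
    data = [random.gauss(mu, sigma) for _ in range(25)]
```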
Knowing all of this, there is real concern about students’ ability to distinguish between real and falsified information. Younger digital natives have been shown to be more susceptible to misinformation. Early research found that trust in AI-generated responses remained high even after users were warned, and that users detected fake information only late in the interaction. For any warning to have a lasting effect, students must be made aware early on that the tool can generate fake information. However, even prior to the proliferation of AI-generated output on the internet, students struggled to evaluate social media content. They often failed to verify accuracy or authority beyond the social media posts themselves, trusted ‘evidence’ even when it was inaccurate, and did not recognize that images and videos could be edited.
AI usage in the classroom
Around the world, many governments are racing to implement AI in their education systems. South Korea is in the process of fully implementing AI-powered digital textbooks in all schools by 2028. In the same vein, China aims to implement AI-supported education in all primary and secondary schools by 2030.
Private companies are jumping at the opportunity to profit from the “AI revolution”. In early November, Microsoft announced that it would be supplying AI tools and training to students and educators in the United Arab Emirates. Days later, OpenAI partnered with a company in Kazakhstan to provide ChatGPT Edu in schools and universities across the country. The month after, Elon Musk’s xAI announced a project to develop an AI tutoring system for all the schools in El Salvador.
In the United States, an executive order in April of 2025 called for the incorporation of AI into education and teacher training. A task force was created to establish public-private partnerships for the provision of AI educational resources in schools.
As the technology currently stands, AI use is more prevalent among students than teachers. Though a recent Stanford study claimed that more than 40% of teachers regularly used AI, the parameters for this categorization were wide: any teacher who used AI between 8 and 49 days within a 90-day period was categorized as a “regular user”. A different poll found that 32% of teachers reported using AI at least weekly. Teachers who used AI regularly were most likely to use it to plan lessons (37%), create worksheets (33%), modify materials to meet student needs (28%), do administrative work (28%), and make assessments (25%).
While AI is often framed as an efficiency tool for teachers, LLMs do not “understand” student learning and risk producing output that is pedagogically unsound. Designing and modifying lesson plans, writing feedback to students, and assessing student understanding should not be treated as logistical burdens to offload; they are personal and highly contextual pedagogical acts.
The loss of key skills
Though research is still in its preliminary stages, the implications of AI use in education are already being felt and noted by those in the field. Teachers argue that students have been using AI “as a way to outsource their thinking” and that their reliance on AI chatbots risks atrophying critical thinking muscles. A teacher in the US was stunned when one student panicked, unable to generate ideas about her personal life without the assistance of AI or Google. Another student admitted to her that he couldn’t “write even one sentence without Grammarly”. These observations are corroborated by a study showing that generative AI, when used improperly, “can and do[es] result in the deterioration of cognitive faculties”.
Experience tells us that disuse of cognitive skills leads to a decline in performance. Someone who doesn’t use a second language may forget up to 90% of that knowledge after 8 years. Cognitive atrophy from disuse also occurs with the continuous use of support systems: for example, the widespread use of GPS has eroded skills such as memorizing maps or mentally plotting routes. As such, skills critical to research and learning are at risk if students continuously outsource basic tasks to AI models. Use of AI has already been linked to over-reliance and diminished critical thinking skills. In the future, students may struggle on their own to find information sources, assess credibility, and analyze and synthesize information from multiple sources into one coherent argument.
Can we conceivably “ban” AI?
Programmes that claim to be able to detect AI are often as flawed as the AI models themselves, with one publicly stating, “No AI detector can conclusively determine whether AI was used to produce text”. AI detectors have also been shown to be biased against non-native English speakers, with more than half of the TOEFL essays written by Chinese speakers in one study being incorrectly flagged as AI-generated. When certain words, such as “delve,” or punctuation marks, such as hyphens and the em dash, are automatically flagged as AI by detectors and human readers alike, does that mean we should stop using them? Ultimately, there is no good answer, and the struggle to ban or detect AI feels like a Sisyphean task.
How AI can help education confront its shortcomings
A 2024 study found that ChatGPT could pass a post-secondary online course three times out of five without being identified. Contrast the earlier example of ChatGPT hallucinations in a court brief with the viral news that ChatGPT was able to pass the bar exam. If AI generators can pass tests and courses but flounder in real-life applications, we as educators need to reconsider how we measure student knowledge. The modern data-driven education system means that students all over the world are asked to demonstrate their knowledge through data-friendly means, so it is no surprise that AI is effective at passing data-friendly assessments. If a skill is not easily measured, teachers and education leaders have little incentive to teach it. Rather than reading whole books, teachers have been pressured to shift to short informational passages followed by the type of questions found in standardized reading comprehension tests. There is also less practical science in the classroom compared to just a few decades ago.
The education system currently rewards students’ (or AI’s) ability to regurgitate rote information. Consider instead a system that prioritizes teaching and measuring students’ ability to apply what they have learned: combining the knowledge they have gained with their own reasoning skills to solve a problem, reflecting on their own decision-making process, and considering future implications. These are all skills that current LLMs are notoriously bad at, and that are proven to improve knowledge acquisition.
Is there a place in education for AI?
The education system will not be ready for AI until it is able to proactively shape its relationship with the technology. Education leaders should not only be prepared to guide its use; they must also learn from the flaws that AI has highlighted in our existing system. This means that all education actors must learn about AI, its uses, implications, and flaws. Education with AI doesn’t necessarily mean that we must use AI in every possible way. It could mean creating lessons and assignments where an LLM would struggle, and where students could showcase their understanding through a demonstration of their abilities.
Summary
This is part one in a three-part series on AI in education. This first part summarizes the state of AI in education and privatization. The second part will focus on AI’s role in equity (or the lack thereof) in education. The third and final part will discuss privacy concerns, surveillance, power, and the lack of regulation in the AI space.
Author: Dorothy W.
A previous fellow with PEHRC, Dorothy currently works as a consultant for UNESCO doing education policy research while continuing to teach on the side.
Views expressed in this blog are those of the author/s’ alone. Publication on this blog does not represent an endorsement by PEHRC of the opinions expressed.