Table of Contents
- Summary
- Recommendations
- Which artificial intelligence?
- Does it matter how correct the answer generated by artificial intelligence is?
- Artificial intelligence as an educational tool
- The empty promise of risk analysis
- Risks associated with mental and physical health
- Risks associated with the atrophy of certain skills
- Risks associated with misinformation and the reduction of complexity in the information ecosystem
- Risks associated with the collection of personal data by private entities with commercial interests
- Conclusions
I believe generative artificial intelligence software should not be used as an educational tool, in preschools, schools, high schools, universities, and so forth.
In November-December 2025, the Ministry of Education in Romania published updated versions of the high-school curricula. Around half of these proposals include references to using artificial intelligence as an educational tool. The publication was followed by a call for comments, a so-called public debate, as is legally mandated in Romania. I drafted a personal position standing firmly against the use of generative artificial intelligence software as an educational tool.
The document I sent to the Ministry is the culmination of a lot of reading, thinking, writing and advocacy on the subject of generative artificial intelligence.
I've translated my position document, from Romanian to English, and will reproduce it here. The original Romanian version is available as a PDF.
As an addendum, I also wrote about the only thing that genAI has been excellent at: deprofessionalization.
Summary
- Artificial intelligence should not be used as an educational tool in Romanian classrooms. No curriculum should make the use of this technology in the classroom mandatory.
- Teachers should be trained, guided, and supported through explanatory materials that address the issues surrounding artificial intelligence technology. These materials should help educators understand how the technology works and the risks of using it.
- The Ministry of Education must demonstrate a real commitment to reducing or eliminating the risks associated with the use of artificial intelligence.
Recommendations
- Eliminate artificial intelligence as a teaching tool from the school curriculum in all contexts where it is mentioned.
- Develop explanatory guidance materials on artificial intelligence, and on generative artificial intelligence in particular, that present this technology in a critical light and problematize its use.
- Support students and teachers through significant investments in both salaries and compensation for educators and staff, as well as scholarships and financial support for students who have difficulty continuing their studies due to precarious circumstances.
The proposal for the high school curriculum, published by the Ministry of Education in November-December 2025, presents artificial intelligence as a teaching tool. For some subjects, this technology is closely linked to the skills that a student is expected to acquire (as is the case with the subject titled Information and Communication Technology). For other subjects, such as mathematics (taught in the mathematics-computer science specialization), the curriculum strongly recommends the use of artificial intelligence as a teaching tool.
The emphasis on the use of artificial intelligence in subjects such as mathematics gives the impression of a curriculum designed to train students to reproduce correct answers. This emphasis ignores the importance of the often repetitive and frustrating process of solving problems of any kind. Sometimes a student arrives at the wrong answer, and subjects such as mathematics are environments in which verifying the answer is essential, as is the patience to try again when the answer proves to be wrong.
Education in the "hard sciences" is meant to make students familiar with a sometimes-difficult methodology of searching for answers or formulating hypotheses. In this sense, the process itself is much more important than obtaining the answer.
In the curriculum proposed by the Ministry of Education, generative artificial intelligence is most often presented as a tool and, much less frequently, as a technology that is problematized and whose usefulness and functioning are explored and explained in a critical manner.
Which artificial intelligence?
When mentioning artificial intelligence as a teaching tool, the curriculum proposals do not make a clear and explicit distinction regarding the algorithms or implementations of artificial intelligence to which they refer. Sometimes, clarification arises from the fact that software product names (Copilot, Grammarly, ChatGPT) are explicitly mentioned in the same context.
Artificial intelligence is an umbrella term that encompasses different algorithms with different purposes and uses. Both the autopilot software used in airplanes and weather forecasting systems are based on artificial intelligence technologies. In most cases, the use of the term "artificial intelligence" reflects a marketing decision rather than a desire to rigorously describe a technology.
Here, we'll assume that the proposals by the Ministry of Education are talking about generative artificial intelligence, a technology that students and teachers interact with most often using software programs colloquially called chatbots. Some examples of these programs are ChatGPT, Copilot, Perplexity, Gemini, etc.
Does it matter how correct the answer generated by artificial intelligence is?
Machine learning algorithms, on which artificial intelligence systems are broadly based, aim to produce the most accurate result by minimizing error. Most of the time, the deviation from the correct answer, expressed as "error," cannot be exactly zero. If it could, we would be talking about a classical computing system, one that needs no "learning" stage because it computes the exact result every time.
Artificial intelligence algorithms try to bring their output as close as possible to a correct answer by minimizing error, without taking other contextual signals into account.
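To make "minimizing error" concrete, here is a minimal sketch of the training loop at the heart of machine learning: fitting a single parameter by gradient descent on a squared-error loss. The data points and learning rate are invented for illustration; real systems do the same thing with billions of parameters.

```python
# Minimal sketch of error minimization, the core of machine "learning".
# The (input, target) pairs and the learning rate are invented.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0              # a single model parameter, learned from the data
learning_rate = 0.01

for step in range(1000):
    # Gradient of the mean squared error between predictions w*x and targets y.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

error = sum((w * x - y) ** 2 for x, y in data) / len(data)
print(f"learned w = {w:.3f}, remaining error = {error:.4f}")
# The error shrinks during training but never reaches exactly zero:
# the model approximates the answer, it does not compute it exactly.
```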
Most of the time, in the learning process, students learn much more than just what the correct answer is. Education trains the ability to verify an answer, as well as the emotional flexibility to give up a wrong answer and resume the process of searching for the correct answer. It also teaches cooperation with other people.
When the teaching tool prioritizes the correctness of the answer, it trains none of the other capacities a student needs to internalize in order to develop the ability to solve problems, alone or in cooperation with others.
Artificial intelligence as an educational tool
Any obligation or requirement to use generative artificial intelligence as a teaching tool is absurd.
When a curriculum links student competence to the use of generative artificial intelligence, it implies that a student who does not use this technology is not competent. In the software and hardware development industry, however, there are professionals who do not use artificial intelligence at all. Generative artificial intelligence is not necessary to be a competent programmer or engineer. Nor is it necessary to be a prolific researcher.
For some subjects, the curriculum proposes using generative artificial intelligence to correct or improve a text written by the student. In some cases, this use is formulated as a skill the student is meant to learn. However, generative artificial intelligence cannot "correct" or "improve" a text; it can only generate readable text. Software programs such as Grammarly do not implement the grammatical standards of a language, nor do they operate with concepts of aesthetics. The text generated by such programs should not be presented as "correct" or "better" than the original version.
Generative artificial intelligence algorithms do not produce deterministic content. Students who formulate an identical prompt may receive completely different content from a chatbot. Underlying these differences is a chain of causality that is often complex, completely opaque, and impossible to audit and explain exhaustively. However, educational content in the classroom cannot be left to chance.
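A minimal sketch of why identical prompts diverge: chatbots choose each word by sampling from a probability distribution over possible continuations. The toy distribution below is invented for illustration; real models sample over tens of thousands of tokens at every step.

```python
import random

# Toy next-word distribution for a single prompt; the words and
# probabilities are invented for illustration.
next_words = ["correct", "plausible", "wrong", "unrelated"]
probabilities = [0.50, 0.30, 0.15, 0.05]

for student in range(1, 4):
    # Three students send the identical prompt, yet sampling draws
    # a different continuation each time.
    word = random.choices(next_words, weights=probabilities)[0]
    print(f"student {student} receives: ... {word}")
```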
Periodically, companies that produce software using generative artificial intelligence release new versions of these products, which can significantly change the content generated. These companies do not guarantee continuity of models. Often, it is users who notice and trace the differences between two different versions of the same chatbot. The version of such software may change several times during a single school year.
For this reason, generative artificial intelligence cannot be used in any context where it is important for all students to receive identical, or at least sufficiently similar, content. If we imagine an exercise based on the curriculum published by the Ministry of Education, which proposes improving a text using the Grammarly chatbot, we must take into account that this exercise will be replicated thousands of times across the country over the course of several years. The answers will not be similar even on a single day, let alone throughout entire school years. But the purpose of such an exercise is to give all these students the same understanding of the grammar and style of a text.
When generative artificial intelligence is used to correct or improve language, it ends up propagating its own aesthetic, one that is not the result of refinement or of engagement with the real world. The words produced by a chatbot are chosen because they are, statistically speaking, very well suited to each other in a given context. The resulting aesthetic therefore emerges from a single pressure, readability, expressed during training as the minimization of "error" (unreadable expression).
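A toy illustration of this statistical aesthetic, with invented counts: if a model learns from a corpus in which "blue" overwhelmingly follows "the sky is," then the statistically best-suited word is always the most common one, and rare, novel expressions are exactly what error minimization teaches it to avoid.

```python
from collections import Counter

# Invented counts of which word followed "the sky is" in a toy corpus.
continuations = Counter({"blue": 950, "grey": 40, "bruised": 7, "cathedral-quiet": 3})

total = sum(continuations.values())
for word, count in continuations.most_common():
    print(f"the sky is -> {word}: p = {count / total:.3f}")

# A model that favors statistically well-suited words will almost
# always write "the sky is blue"; the rare, novel expressions are
# precisely the ones it learns to avoid.
```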
Students are encouraged to experiment with different styles of expression and aesthetics: arguments, demonstrations, persuasive pleadings, analyses intended to raise issues, and even metaphorical styles and fictional writing. Therefore, depending on the context, the aesthetics that the student is invited to practice are completely different. Among these aesthetics, legibility is not necessarily the most important feature; in the case of fictional prose and poetry, legibility is even discouraged when the goal is to produce novel and interesting expressions.
Using the aesthetics of text produced by generative artificial intelligence as a benchmark for correct writing standardizes the aesthetics to which a student is exposed and which they are encouraged to adopt. The end result is undoubtedly a drastic reduction in the diversity of students' expression, thinking, and argumentation.
The empty promise of risk analysis
Some proposals in the program put forward by the Ministry of Education mention risks associated with the use of generative artificial intelligence, such as "risks related to data privacy and security," "challenges to academic integrity," "access issues," and "excessive dependence on AI." However, simply listing the risks, without demonstrating how they are minimized or eliminated and without explaining why students and teachers should bear these risks, is nothing more than an empty promise.
All of these risks should be considered unacceptable from the outset.
Each of the following categories of risk is harmful to the intellectual and emotional development of students. These risks also erode the safety of teachers: educators become vulnerable to workplace penalties if they refuse to integrate generative artificial intelligence as a teaching tool.
Risks associated with mental and physical health
The international press has published numerous investigations focusing on the tragic stories of young people who took their own lives while in a highly vulnerable mental state, amplified and fueled by repeated interactions with generative artificial intelligence. OpenAI alone is involved in 7 lawsuits filed in the wake of such tragic events.
The Raine v. OpenAI case brought to light a number of dangerous practices by OpenAI. In a young person's conversations with ChatGPT, messages that were detected as highly likely to express self-harming intent were ignored by the company. In this lawsuit, filed by the family of a young person who took their own life, the seemingly empathetic language generated by ChatGPT, an aesthetic chosen to prolong use of the product, was identified as an element that fosters addiction to the technology.
Risks associated with the atrophy of certain skills
A study published in 2025, conducted by Microsoft experts and researchers, concluded that constant use of generative artificial intelligence inhibits critical thinking and can lead to dependence on the technology, as well as a decline in problem-solving skills. In the study, the participants who reported greater trust in generative artificial intelligence were the same ones who were less likely to engage in critical thinking.
A study published by the Massachusetts Institute of Technology in 2025 concluded that participants who used generative artificial intelligence to write essays did not critically evaluate the content they produced. They were unable to quote passages from the essays they had written with the generative technology, demonstrating that their understanding of the subject had not been enriched or developed.
A 2024 study on cognitive atrophy induced by the use of chatbots concludes that the younger generation is most prone to cognitive decline as a result of interacting with chatbots. Young people who prioritize easy access to information using chatbots at the expense of reflection and deep understanding risk atrophying their critical thinking, problem-solving, and complex reasoning skills.
Risks associated with misinformation and the reduction of complexity in the information ecosystem
Generative artificial intelligence will always produce content that predominantly represents the perspective of the majority, of those whose voices have been most represented in literature, history, and society. Given that these algorithms can only reproduce content similar to the data they were trained on, marginal perspectives in history, literature, or civic discourse will most likely be erased from the results.
An extremely important lesson, both historically and socially, comes from the realization that certain voices are not represented in art, politics, and society. However, generative artificial intelligence does not have mechanisms to analyze its own data and obtain information about what is missing from its corpus.
Teachers are the ones who can bring historical reality to students' attention, revealing how certain voices in society have been silenced. They must be supported with resources and easy access to specialized information so that they can compile as wide and diverse a range of perspectives as possible to present systematically in class.
Risks associated with the collection of personal data by private entities with commercial interests
The most widely used generative artificial intelligence products will integrate personalized advertising. The companies that own these products have failed miserably to make a profit by selling subscriptions. Most of these same companies, however, also have an extremely profitable line of products in the personalized advertising ecosystem, such as Google (which owns Gemini), Meta (Meta AI), and Microsoft (Copilot and, by extension, ChatGPT).
OpenAI, the company that produces ChatGPT, will soon integrate personalized ads into the content it generates. Company representatives have spoken publicly about this possibility. References to ads and their delivery systems have been discovered in the application's source code (on the Android platform).
As a result, the content generated by chatbot software will contain personalized ads. It is unclear how the ads will be presented, but their integration will alter the message produced by the applications, which will interfere with the teaching process.
Software programs that use generative artificial intelligence collect personal data and build a personalized profile of each user. Thus, every time a chatbot is used in the classroom, sensitive personal data about students and teachers is collected by private entities operating with a clear commercial purpose. If these software programs are introduced in schools, this data collection will expand to a national scale.
Most technology companies that market widely used chatbot programs operate under the jurisdiction of the United States. Since Donald Trump became president, the US has become openly hostile to EU regulations on personal data protection and accountability for technology companies. The Ministry of Education should prioritize the safety of young people and teachers by treating companies under US jurisdiction as potentially hostile.
Conclusions
The use of generative artificial intelligence as a teaching tool should be removed from all proposals published by the Ministry of Education. The risks associated with its use are disproportionately greater than any possible benefits.
Teachers must be given support and resources to familiarize themselves with the functioning and effects of generative artificial intelligence, without exposing them to deprofessionalization by forcing them to use these tools.
The Ministry of Education must remain lucid and cautious in its relationship with the technological products offered by software companies. The priority of this institution must be the harmonious development and safety of students and teachers. Thus, it must ensure that any tools or technologies used in the classroom can be supervised and controlled directly by operators in the country, without depending on a commercial entity whose interests may be contrary to national interests.