- Mar 31

The use of AI in education has gained traction over recent years and, for me, the jury is still out. I have spent some time researching the use of AI in education, its purpose, and its advantages and disadvantages in an attempt to evaluate my own perspective on its infiltration into the education system. The mid-20th century saw the development of AI, with roots in mechanics and cybernetics. Since then, the development of AI has been hit and miss until more recent years, with early development halted by funding cuts and negative press. In the late 1990s and early 2000s, AI systems again began to advance, supported by developments in computing. With the increase in sophisticated computing systems and data-driven AI programmes, deep learning has led to the current generative AI and its widespread adoption beyond mechanics and computing, in fields such as education.

With AI embedded within many everyday systems such as Google Gemini and Microsoft Copilot, there is no getting away from it, and the questions remain: does AI have a place in education, and should we embrace it? The UK government's Education Hub promotes the belief that AI can transform education, stating that while teachers can use AI to support planning, creating resources and marking work, it is up to individual schools and colleges to decide whether their students can use AI. This blog aims to explore the use of AI in education and its benefits and challenges.
The use of AI in education
Schiff (2022) believed that the exploration of AI in education (AIED) received less attention than its use in fields such as health care or transportation and that, where it was considered, the treatment was brief and superficial. Selwyn (2022) argues that reports regarding AIED are dominated by commercial interests and hype without fully addressing limitations, harms and environmental costs. On the other hand, Alasadi and Baiz (2023) suggest that generative AIED has the potential to transform teaching, learning and research productivity, identifying that it can personalise and adapt learning materials and give real-time feedback on assessments, while being particularly supportive of non-native English speakers through translation and language assistance. Schiff (2022) identified that, at the time, AIED was being used by teachers for marking and feedback, by learners as a learning tool, and by admin staff for administrative tasks. Celik et al. (2022) reported that teachers are central contributors to AIED development, for example in training AI system algorithms: teachers can provide marking rubrics, classroom practices and performance data that support this process. Celik et al. (2022) also identified that the more teachers use AI for marking, the more they validate its judgements, leading to improvements in accuracy and reliability. Nguyen (2023) categorises AIED into three categories: Guidance AI, which includes decision-support tools; Student AI, which covers learning-focused tools; and Teacher AI, covering instruction-support tools. Nguyen (2023) believes that Guidance AI uses data to help students and teachers make better academic decisions, for example by identifying at-risk students and providing academic feedback that focuses on what students can do to improve. Student AI covers tools that support student learning and engagement, such as personalised challenges.
Teacher AI covers systems that assist teachers with instructional tasks, such as systems for marking and feedback and for monitoring individual student progress (Nguyen, 2023). Zhai et al. (2021) offer another categorisation of AIED, with five categories. The first, 'Profiling and Prediction', covers AI used to analyse student data to predict academic performance and identify at-risk students. 'Intelligent Tutoring and Adaptive Systems' includes personalised learning, step-by-step feedback and challenge levels adjusted to student performance. In the 'Assessment and Evaluation' category, Zhai et al. (2021) believe AIED supports automated grading and feedback, underpinned by analysis of students' work patterns. In the penultimate category, 'Content Generation and Recommendation', AIED can recommend learning materials, generate practice questions and develop learning resources tailored to individual needs. The final category, 'Classroom and Institutional Support', explores how AIED is used to assist curriculum planning and improve administrative decision-making. While these examples of AIED implementation look inviting to those in the sector, Schiff (2022) highlights that AIED policy is often a way to serve national economic and technological goals and that education is not simply a sector to be transformed by AI.
The benefits, the challenges and ethical considerations
Research (Nguyen, 2023; Harry, 2023; Celik et al., 2022) identifies benefits for both students and teachers. The key benefits of AIED for students include the ability to personalise learning by adapting learning resources, adjusting difficulty levels and providing targeted feedback, with this individualisation having the potential to reduce achievement gaps. Support can be more responsive and better targeted to individual students through the use of chatbots, and student engagement can be improved with interactive and adaptive online platforms. For teachers, the research suggests that AIED can increase efficiency and reduce workload at each stage of the teaching process, including planning, implementation and assessment. The research explored also identified that AIED can provide analysis of student performance data, which can inform individualised planning. AI platforms such as Teachmate AI, Brisk Teaching, MagicSlides, Aila, Quizizz and Kahoot, amongst others, support teachers to plan lessons, create PowerPoints and develop quizzes, interactive activities and assessment activities. Platforms such as Teach Edge, Graide, SmartEducator UK and Rubriq, amongst others, use AI to support teachers in creating and marking assessments and developing feedback.
Schiff (2022) suggests that while AI is being championed in an economic and competitive manner, the challenges and possible harms of AIED are often overlooked. Schiff also identifies some worrying issues surrounding AIED, including student data privacy, algorithmic bias in assessments, surveillance of students, the impact on teacher roles and autonomy, and the uneven implementation of AI currently being seen in education. A concern that Celik et al. (2022) identified is the limited reliability of AI grading and feedback, which are not always accurate or trustworthy. I have experienced this first hand when I experimented with a teacher AI grading programme which required me to enter the criteria in a rubric format, which all looked very promising. The speed at which it graded and provided individual feedback for 12 assignments was also very impressive. However, on inspection, the programme had graded the assignments holistically, which meant that the feedback was not consistent with the individual criteria. The programme did not detect where a student had not attempted a higher-grade criterion and provided feedback as if they had. My initial hopes of time savings were quickly dashed, and I re-marked the assignments myself to provide my students with the reliable, fair and valid grades and feedback that they deserved. Celik et al. (2022) also believe that AI struggles with the complex, multimodal classroom context, a view supported by Nguyen (2023), who states that AI struggles to understand nuanced or new learning contexts. Celik et al. (2022) further raise the point that many teachers are not trained to understand or critically use AI; however, the UK government has pledged to bring 'the AI revolution to the classroom', with budgets to support the development of more effective educational AI tools and the training of teachers in the use of AI.
Harry (2023) proposes that students may doubt AI-generated grades and feedback, and trust between learners and teachers therefore becomes an issue, which does not support a conducive learning environment. Harry (2023) also explains that bias cannot yet be fully mitigated, as algorithms trained on biased data may treat students unfairly. Then there is the issue of human interaction. Students depend on human interaction with their teachers for emotional, social and moral guidance; this cannot be replaced with AI, and an over-reliance on AI may reduce the development of collaboration and interpersonal skills (Nguyen, 2023). Furthermore, this over-reliance on AI could potentially lead to a loss of creativity and critical thinking (Alasadi and Baiz, 2023). Education is a very 'human' process, as we engage with others while we share and develop ideas. AI systems struggle to capture the complex social realities of classrooms, have a limited understanding of context, culture and relationships, and cannot therefore replicate human judgement, empathy, emotion or moral reasoning (Selwyn, 2022). The other human element of the implementation of AI is the care that people have for their planet: as Zhuk (2023) identifies, AI systems are highly energy-intensive, particularly during model training, which contributes significantly to carbon emissions, with some shocking statistics identified by The Sustainable Agency (Dhanani, 2026).

As Zhuk (2023) explores, data centres are major energy consumers, often relying on non-renewable energy sources, with rapid AI hardware innovation leading to electronic waste, creating pollution and resource depletion. Selwyn (2022) argues that enthusiasm for AI rarely considers whether scaling up such technologies is ecologically sustainable in the long run.
Final Thoughts
AIED can meaningfully support teachers in planning, teaching and assessment and does have the potential to reduce workload, but it cannot replace teachers, as it depends hugely on teacher expertise to become truly effective and reliable. Current systems are technically limited and not always reliable (Celik et al., 2022) and should therefore be used with caution. While AI can potentially support students to revisit content and revise, they should not be using it to create summative assessments for formal submission, as this amounts to academic misconduct. Being in education requires learners to develop new knowledge and skills to progress in their education or employment. Effective learning requires taking in new information, repetition and application, with summative assessment enabling a student to showcase what they have learnt and teachers to monitor this progress. If students are using AI to create their summative assessments, it can be strongly argued that effective learning has not taken place. No one would want their baby delivered by a midwife who had used AI to write all of their assessments during university, as they would arguably not really know what they were doing.
AI is complementary to the human elements of education and cannot replace those elements that make learning meaningful. Generative AI should be responsibly integrated into education and research through ethical guidelines, transparency and pedagogical reform, ensuring it enhances rather than undermines learning and scientific integrity (Alasadi and Baiz, 2023). Considering the environmental impacts, AI threatens sustainable development unless ecological considerations become a core part of AI policy, design and regulation (Zhuk, 2023). Nguyen (2023) identifies a range of recommendations for the effective use of AI in education and suggests that AIED ethical and privacy guidelines should be enforced. It is also suggested that AI design should be based on educational research and not just technology trends. The research has clearly outlined that training teachers and students in digital and AI literacy is essential for the effective use of AIED. AIED should remain human-centred, supporting teachers rather than ever replacing them. While AIED is a relatively new area of research, further studies will continue to evaluate the effectiveness of its implementation.
References
Alasadi, E.A. and Baiz, C.R. (2023) 'Generative AI in education and research: opportunities, concerns, and solutions', Journal of Chemical Education, 100(8), pp. 2965–2971. doi.org/10.1021/acs.jchemed.3c00323
Çelik, İ., Dindar, M., Muukkonen, H. and Järvelä, S. (2022) 'The promises and challenges of artificial intelligence for teachers: a systematic review of research', TechTrends, 66, pp. 616–630. doi.org/10.1007/s11528-022-00715-y
Dhanani, R. (2026) 'Environmental impact of generative AI – 30+ stats and facts', The Sustainable Agency. Available online: https://thesustainableagency.com/blog/environmental-impact-of-generative-ai/
Gov.uk (2025) 'AI in schools and colleges: what you need to know', The Education Hub. Available online: https://educationhub.blog.gov.uk/2025/06/artificial-intelligence-in-schools-everything-you-need-to-know/
Harry, A. (2023) 'Role of artificial intelligence in education', Interdisciplinary Journal and Humanities, 2(3), pp. 260–268.
Nguyen, A. (2023) 'Exploring the role of AI in education: three categories of educational AI', AI and Ethics. doi.org/10.1007/s43681-023-00294-3
Schiff, D. (2022) 'Education for AI, not AI for education: the role of education and ethics in national AI policy strategies', International Journal of Artificial Intelligence in Education, 32, pp. 527–563. doi.org/10.1007/s40593-021-00270-2
Selwyn, N. (2022) 'The future of AI and education: some cautionary notes', European Journal of Education, 57(4), pp. 620–631. doi.org/10.1111/ejed.12532
Zhai, X., Chu, X., Chai, C.S., Jong, M.S.-Y., Istenic, A., Spector, M., Liu, J.-B., Yuan, J. and Li, Y. (2021) 'A review of artificial intelligence (AI) in education from 2010 to 2020', Complexity, 2021, Article ID 8812542. doi.org/10.1155/2021/8812542
Zhuk, A. (2023) 'Artificial intelligence impact on the environment: hidden ecological costs and ethical-legal issues', Journal of Digital Technologies and Law, 1(4), pp. 932–954. doi.org/10.21202/jdtl.2023.40