Editor's pick AI IN OUR MIDST
SNEAK PEEK INTO THE FULL 2026 SPRING EDITION
IN CONVERSATION WITH
Barbara Wasson
THE READING ROOM
By Marc Cheong, Shanton Chang, Eduardo Araujo Oliveira | Understanding the three levels of digital inclusion
By Pii-Tuulia Nikula | AI optimisation guidance to boost your institution's visibility
By Francesca Fitzsimmons | How to set adequate safeguards against recruitment biases
By Cato Rolea | Keeping the narrative in human hands
By Cecilia Albè | A professor and AI expert on emerging opportunities and risks
By Sasha Perugini | How international education can build the human skills AI can’t replace
By Winnie Sin Wai Pui, James Underwood | Rebalancing linguistic power through multilingualism
By Ransom Tanyu Ngenge | HEART model for human-centered education
By Callum Philbin | Exploring how AI is redefining understanding and knowledge
Artificial intelligence has rapidly become embedded in the everyday lives of higher education professionals. Its growing presence is reshaping international education in profound and often contradictory ways – opening up new possibilities while simultaneously amplifying long-standing challenges. This Forum issue engages with AI in precisely this raw, unsettled space: not as a finished solution, but as a dynamic force whose implications are still unfolding.
The contributions in this issue are intentionally interconnected. Together, they guide the reader through lessons learned, practical insights, calls to action and unresolved questions. Moving between optimism and critique, the articles create a productive tension – almost like a pendulum in motion – inviting readers to reflect on where they themselves stand. Is AI primarily an opportunity, a challenge or inevitably both?
The issue opens with Cheong, Chang and Oliveira, who foreground one of the most pressing concerns in AI adoption: AI systems not only replicate existing inequalities but may actively deepen them, reinforcing a structural disadvantage in which students and institutions with greater financial resources gain access to more powerful AI tools and, consequently, greater advantage.
Nikula shifts the focus to applications of AI in international student recruitment and marketing, noting that large language model (LLM)–based information seeking may reduce reliance on traditional recruitment strategies and offer more cost-effective alternatives. At the same time, Nikula cautions against over-reliance on AI-generated information, highlighting persistent risks related to accuracy, transparency and trust.
Fitzsimmons extends this critical lens to admissions processes, warning that AI systems trained on historical admissions data may entrench exclusionary patterns. If past decisions favoured certain demographic groups, AI is likely to perpetuate inequities under the guise of objectivity.
Rolea offers a compelling intervention by turning attention to institutional readiness. While students increasingly arrive as “AI natives,” staff development in advanced AI and ethical literacy lags behind. Rolea argues convincingly for urgent, structured professional development and provides a concrete example of how institutions can begin to address this gap.
An interview with Barbara Wasson from the Centre for the Science of Learning & Technology at the University of Bergen highlights the importance of the GDPR when using AI and provides a hands-on perspective on what AI literacy is and how to develop it.
Several contributions caution against an overly technocentric understanding of innovation. Perugini challenges the prevailing emphasis on AI and technical digital skills, redirecting attention to the social and cultural competences fostered through immersive international education experiences. In a similar vein, Pui and Underwood examine the limitations of AI in language learning and translation, arguing that cultural intelligence and deep contextual understanding remain beyond the reach of automated systems.
Balancing these critiques, Philbin highlights the robust transformative potential of AI in geospatial learning, demonstrating how new models can expand planetary awareness and enable learners to see beyond traditional cognitive and geographic horizons.
Finally, Ngenge closes the issue with a powerful reminder that the integration of AI into international education must be grounded in values where technological capability is balanced with humility, empathy, resilience and ethical responsibility.
Taken together, these contributions invite us to see complexity, to question assumptions and to engage thoughtfully with a technology that is already reshaping our field. This issue does not ask readers to choose between enthusiasm and resistance, but to develop a more nuanced, reflective and humane engagement with AI as we move forward.
I would like to extend my sincere thanks to Arnim Heinemann for his assistance in reviewing the articles for this issue. His thoughtful feedback and careful reading played an important role in shaping the final selection of articles presented here.
Eva Janebová, Editor
publications@eaie.org
Professor of International Teacher Education, NHL Stenden University of Applied Sciences
Head of Digital Innovation and AI, University of Southampton
Senior Lecturer, University of Melbourne
Content & Insights Marketing Manager, Keystone Education Group
Academic Director, Professional and Continuing Education, University of Cambridge
Associate Professor, Eastern Institute of Technology
Postdoctoral Research Fellow, Centre for Advanced Sustainability Studies (CASS), Jagiellonian University
Director, Syracuse University Florence
Professor, University of Melbourne
Course Director, Professional and Continuing Education, University of Cambridge
Christina Villarreal (Spain) Director of University Partnership and Development, Spanish Institute for Global Education
Heidi Soneson (USA) Affiliate, Gateway International Group
Kate Moore (USA) Principal and Co-Founder, Global Career Center
Published by European Association for International Education
PO Box 11189, 1001 GD Amsterdam, the Netherlands
E-mail: info@eaie.org, publications@eaie.org
www.eaie.org
Editor: Eva Janebová
Publications Committee: Eva Janebová (Chair), Ragnhild Solvi Berg, Queenie Lam, Arnim Heinemann, Sonja Knutson
Director, Knowledge Development and Research: Laura E. Rumbley
Head of Marketing and Communications: Léa Basin
Knowledge Development Coordinator: Cecilia Albè
Designers: Nhu Nguyen, Maeghan Dunn
Copyright © 2026 by the EAIE. All rights reserved. Extracts from Forum may be reproduced with permission of the EAIE. Unless stated otherwise, opinions expressed by contributors do not necessarily reflect the position of the EAIE.
ISSN 1389-0808
ARTIFICIAL INTELLIGENCE (AI) The broad field of creating computer systems that can perform tasks typically requiring human intelligence, such as learning from data, recognising patterns and making decisions. AI encompasses subfields including machine learning, generative AI and natural language processing. Example: AI-powered tools that forecast international student enrolment trends by region.
AI AGENT An AI system that acts autonomously across multi-step tasks – planning actions, using tools and adapting based on results – with minimal human direction between steps. Unlike a chatbot responding to a single prompt, an agent can coordinate a full workflow. Example: A recruitment tool that researches prospects, drafts personalised outreach and schedules follow-ups on its own.
AI LITERACY A foundational understanding of what AI can and cannot do, including the ability to interpret its outputs critically and recognise ethical risks. Example: Staff who can evaluate whether an AI-generated response about visa requirements is accurate before sharing it with students.
ALGORITHMIC BIAS The tendency for AI systems to produce unfair outcomes because the data they were trained on reflects existing inequalities in how information was collected and structured. Example: An admissions scoring model that unintentionally ranks applicants from certain countries or educational systems lower.
AUTOMATION Using technology to perform tasks with minimal human intervention. Rule-based automation follows fixed instructions (auto-sorting emails); AI-powered automation makes intelligent decisions (routing student enquiries based on intent and context). Example: Automatically flagging incomplete international applications for follow-up versus simply sending a reminder on a set schedule.
CHATBOTS Automated agents that answer questions via text or voice. Older chatbots followed scripted rules; newer generative AI chatbots handle more natural conversations but introduce risks like hallucinations and bias. Example: A 24/7 chatbot helping prospective international students with programme and application enquiries across time zones.
DATA GOVERNANCE The policies, guidelines and classifications that guide how an organisation collects, stores and uses data responsibly. Example: Ensuring international student records comply with data protection regulations.
GENERATIVE AI A type of AI that uses a prediction engine to select the next best word, pixel, sound or frame, creating new text, images, video or audio based on patterns in its training data. Example: Drafting multilingual recruitment emails or generating social media content tailored to prospective students in specific regions.
HALLUCINATIONS Instances where AI generates information that sounds plausible but is factually incorrect or entirely invented. Example: A chatbot confidently providing wrong visa deadlines or fabricated credit transfer policies to an international student.
HUMAN-IN-THE-LOOP (HITL) An approach requiring human oversight on AI operations to validate outputs, verify accuracy and take responsibility for what is created. Example: An admissions counsellor reviewing AI-drafted evaluation summaries before they inform any decision about a student’s application.
LARGE LANGUAGE MODEL (LLM) An AI system trained on vast amounts of text data to understand and generate human-like language, powering tasks like summarisation, translation and drafting. Example: Tools that help international offices translate partner communications or draft multilingual web content.
MACHINE LEARNING (ML) A branch of AI in which systems improve their performance by learning from data rather than following explicit instructions. Example: A model trained on historical data that predicts which admitted international students are most likely to enrol, helping offices allocate resources.
PREDICTIVE ANALYTICS Using historical data and machine learning to forecast future outcomes. Example: Projecting application volumes by country or anticipating shifts in programme demand across regions.
PROMPT The instructions or information provided to an LLM to direct what it should produce. Example: “Draft a welcoming email for admitted students from Southeast Asia, highlighting support services and visa next steps.”
PROMPT ENGINEERING The practice of designing and refining prompts to get more accurate and useful AI responses. Example: Testing different prompt structures to generate culturally appropriate recruitment messaging for distinct regional audiences.
RESPONSIBLE AI A framework of principles ensuring AI systems are developed and used in ways that are fair, transparent and accountable. Example: An institution publishing clear guidelines on how AI is used in admissions decisions, what data is collected and how students can request human review.
This glossary was drafted with the support of Microsoft Copilot, using information sourced from Coursera’s AI terminology guide and vocabulary resources from Vocabmind, and reviewed with recommendations from Claude (Anthropic). Thanks to Brian Piper for providing (human) review.