The use of AI-powered educational technologies (AI-EdTech) offers a range of advantages to students, instructors, and educational institutions. While much has been achieved, several challenges in managing the data underpinning AI-EdTech are limiting progress in the field. This paper outlines some of these challenges and argues that data management research has the potential to provide solutions that can enable responsible and effective learner-supporting, teacher-supporting, and institution-supporting AI-EdTech. Our hope is to establish a common ground for collaboration and to foster partnerships among educational experts, AI developers and data management researchers in order to respond effectively to the rapidly evolving global educational landscape and drive the development of AI-EdTech.
Observatorio IA
This article examines the potential impact of large language models (LLMs) on higher education, using the integration of ChatGPT in Australian universities as a case study. Drawing on the experience of the first 100 days of integration, the authors conducted a content analysis of university websites and quotes from spokespeople in the media. Despite the potential benefits of LLMs in transforming teaching and learning, early media coverage has primarily focused on the obstacles to their adoption. The authors argue that the lack of official recommendations for Artificial Intelligence (AI) implementation has further impeded progress. Several recommendations for successful AI integration in higher education are proposed to address these challenges. These include developing a clear AI strategy that aligns with institutional goals, investing in infrastructure and staff training, and establishing guidelines for the ethical and transparent use of AI. The importance of involving all stakeholders in the decision-making process to ensure successful adoption is also stressed. This article offers valuable insights for policymakers and university leaders interested in harnessing the potential of AI to improve the quality of education and enhance the student experience.
A new set of principles has been created to help universities ensure students and staff are ‘AI literate’ so they can capitalise on the opportunities technological breakthroughs provide for teaching and learning.
The statement, published on 4 July and backed by the vice-chancellors of the 24 Russell Group universities, will shape institution- and course-level work to support the ethical and responsible use of generative AI, new technology and software like ChatGPT.
Developed in partnership with AI and educational experts, the new principles recognise the risks and opportunities of generative AI and commit Russell Group universities to helping staff and students become leaders in an increasingly AI-enabled world.
The five principles set out in today’s joint statement are:
Universities will support students and staff to become AI-literate.
Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
Universities will ensure academic rigour and integrity is upheld.
Universities will work collaboratively to share best practice as the technology and its application in education evolves.
This study aims to develop an AI education policy for higher education by examining the perceptions and implications of text generative AI technologies. Data was collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, using both quantitative and qualitative research methods. Based on the findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. The framework fosters a nuanced understanding of the implications of AI integration in academic settings, ensuring that stakeholders are aware of their responsibilities and can take appropriate actions accordingly.
Why 2023-2024 will be remembered as the academic year that education embraced AI.
The goal of learner discovery is to deeply understand the “why” of your learners. Research shows that by doing this, we can optimise our designs for learner motivation and, as a result, learner achievement.
Writing a good prompt for ChatGPT is easy.
Here are the 4 steps to do it.
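For readers who want to see what a well-structured prompt can look like in practice, here is a minimal sketch that sends one such prompt to ChatGPT through the OpenAI Python client. The four-part layout (role, task, context, output format) and the model name are assumptions for illustration, not the specific steps from the post referenced above.

```python
# Illustrative only: a single prompt built from four assumed parts
# (role, task, context, output format) and sent to ChatGPT via the
# official OpenAI Python client (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

prompt = (
    "You are an experienced language teacher. "                                  # role
    "Write a 10-question reading-comprehension quiz "                            # task
    "for B1 learners of Spanish, based on a short news article about travel. "   # context
    "Return a numbered list of questions followed by an answer key."             # output format
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Any comparable structure works; the point is simply that a prompt stating who the model should act as, what to produce, from what material, and in what format tends to yield more usable answers.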
We are like the family in Poltergeist: sitting in front of an untuned television and saying "they're already here"... No, this is not bad, it is not negative, and we are not going to ban it or regulate it out of existence. It is, quite simply, inevitable. Either we develop a way of distributing wealth suited to these new times that are already here, or we will have a very serious problem. And it will not be a problem with the technology: it will be entirely ours.
There’s been a lot of discussion in recent months about the risks associated with the rise of generative AI for higher education.
Much of the conversation has centred around the threat that tools like ChatGPT - which can generate essays and other text-based assessments in seconds - pose to academic integrity. More recently, others have started to explore more subtle risks of AI in the classroom, including issues of equity and the impact on the teacher-student relationship.
Much less work has been done on exploring the negative consequences that might result from not embracing AI in education.
This working paper discusses the risks and benefits of generative AI for teachers and students in writing, literature, and language programs, and makes principle-driven recommendations for how educators, administrators, and policymakers can work together to develop ethical, mission-driven policies and support the broad development of critical AI literacy.
About the observatory
A section collecting interesting publications on artificial intelligence that are not specifically devoted to the teaching of ELE (Spanish as a foreign language). The effects of AI on language teaching and learning will be, and indeed already are, very significant, and it is important to stay informed and up to date on this topic.
Type
- application (4)
- article (89)
- scientific article (12)
- newsletter (3)
- course (1)
- book (2)
- podcast (1)
- presentation (6)
- journal (1)
- website (13)
- tweet (26)
- video (7)
Topics
- AI literacy (1)
- applications (31)
- learning (10)
- Australia (1)
- Bard (4)
- bibliography (1)
- big data (1)
- Bing (3)
- digital divide (1)
- chatbots (8)
- chatGPT (44)
- open source (1)
- cognition (1)
- how to cite (2)
- comparison (3)
- usage tips (2)
- control (1)
- course (1)
- courses (1)
- DALLE2 (1)
- deep learning (2)
- deepfakes (1)
- challenges (1)
- skills (1)
- use detection (6)
- educational design (1)
- disruption (1)
- teachers (1)
- e-learning (1)
- economy (1)
- education (50)
- higher education (10)
- educators (1)
- embeddings (1)
- survey (2)
- teaching (5)
- ELE teaching (1)
- teaching AI (1)
- interview (2)
- writing (2)
- assessment (4)
- experimentation (1)
- future of AI (2)
- future of work (1)
- Google (2)
- Google Docs (1)
- guide (2)
- usage guide (5)
- history (1)
- generative AI (3)
- AI vs. humans (1)
- ideas (1)
- usage ideas (7)
- languages (1)
- social impact (1)
- information (1)
- English (1)
- research (5)
- Kahoot (1)
- LLM (2)
- machine learning (2)
- maps (1)
- medicine (1)
- Microsoft (1)
- Midjourney (1)
- world of work (1)
- music (1)
- children (1)
- news (1)
- OpenAI (1)
- opinion (4)
- origins (1)
- pedagogy (1)
- plagiarism (3)
- plugins (2)
- presentation (1)
- problems (2)
- programming (1)
- prompts (1)
- recommendations (1)
- compilation (16)
- resources (1)
- regulation (3)
- journal (1)
- risks (5)
- robots (1)
- biases (3)
- work (3)
- translation (2)
- tourism (1)
- tutorbots (1)
- AI tutors (2)
- tutorials (3)
- language use (1)
- use of Spanish (1)
- use in education (6)
- uses (4)
- review (1)
- travel (1)
Authors
- A. Lockett (1)
- AI Foreground (1)
- Alejandro Tinoco (1)
- Alfaiz Ali (1)
- Anca Dragan (1)
- Andrew Yao (1)
- Anna Mills (1)
- Antonio Byrd (1)
- Ashwin Acharya (1)
- Barnard College (1)
- Barsee (1)
- Ben Dickson (1)
- Brian Basgen (1)
- Brian Roemmele (1)
- Brian X. Chen (4)
- Carmen Rodríguez (1)
- Carrie Spector (1)
- Ceren Ocak (1)
- Ceylan Yeginsu (1)
- Charles Hodges (1)
- Csaba Kissi (1)
- Daniel Kahneman (1)
- David Álvarez (1)
- David Green (1)
- David Krueger (1)
- Dawn Song (1)
- DeepLearning.AI (1)
- Dennis Pierce (1)
- Dimitri Kanaris (1)
- Eli Collins (1)
- Emily Bender (1)
- Enrique Dans (3)
- Eric M. Anderman (1)
- Eric W. Dolan (1)
- Eric Wu (1)
- Ethan Mollick (1)
- Eva M. González (1)
- Francis Y (3)
- Frank Hutter (1)
- Gary Marcus (1)
- Geoffrey Hinton (1)
- George Siemens (3)
- Gillian Hadfield (1)
- Gonzalo Abio (1)
- Google (3)
- Gorka Garate (1)
- Greg Brockman (1)
- Guillaume Bardet (1)
- Hasan Toor (4)
- Hassan Khosravi (1)
- Helen Beetham (1)
- Helena Matute (1)
- Hélène Sauzéon (1)
- Holly Hassel (1)
- Ian Roberts (1)
- James Zou (1)
- Jan Brauner (1)
- Jas Singh (3)
- Javier Pastor (1)
- Jeff Clune (1)
- Jeffrey Watumull (1)
- Jenay Robert (1)
- Jennifer Niño (1)
- Johanna C. (1)
- Johannes Wachs (1)
- Josh Bersin (1)
- Juan Cuccarese (1)
- Julian Estevez (1)
- Kalley Huang (1)
- Karie Willyerd (1)
- Kevin Roose (1)
- Kui Xie (1)
- Lan Xue (1)
- Lance Eaton (1)
- Leonardo Flores (1)
- Lijia Chen (1)
- Lorna Waddington (1)
- Lucía Vicente (1)
- Manuel Graña (1)
- Mark McCormack (1)
- Marko Kolanovic (1)
- Melissa Heikkilä (1)
- Mert Yuksekgonul (1)
- Microsoft (1)
- MLA Style Center (1)
- Muzzammil (1)
- Nada Lavrač (1)
- Naomi S. Baron (1)
- Natasha Singer (2)
- Nathan Lands (1)
- Nicole Muscanell (1)
- Nikki Siapno (1)
- NLLB Team (1)
- Noam Chomsky (1)
- Nuria Oliver (1)
- Oliver Whang (1)
- Olumide Popoola (1)
- OpenAI (2)
- Paul Couvert (5)
- Paula Escobar (1)
- Pauline Lucas (1)
- Petr Šigut (1)
- Philip Torr (1)
- Philippa Hardman (18)
- Pieter Abbeel (1)
- Pingping Chen (1)
- Pratham (1)
- Qiqi Gao (1)
- Rafael Ruiz (1)
- Rania Abdelghani (1)
- Rebecca Marrone (1)
- Rishit Patel (1)
- Rowan Cheung (2)
- Russell Group (1)
- Sal Khan (1)
- Samuel A. Pilar (1)
- Samuel Fowler (1)
- Sarah Z. Johnson (1)
- Sepp Hochreiter (1)
- Serge Belongie (1)
- Shazia Sadiq (1)
- Sheila McIlraith (1)
- Sihem Amer-Yahia (1)
- Sonja Bjelobaba (1)
- Sören Mindermann (1)
- Stan Waddell (1)
- Stella Tan (1)
- Stephen Marche (1)
- Steve Lohr (1)
- Stuart Russell (1)
- Tegan Maharaj (1)
- Tiffany Hsu (1)
- Tim Leberecht (1)
- Timothy McAdoo (1)
- Tom Graham (1)
- Tom Warren (1)
- Tomáš Foltýnek (1)
- Tong Wang (1)
- Trevor Darrell (1)
- Tulsi Soni (2)
- Vicki Boykis (1)
- Víctor Millán (1)
- Weixin Liang (1)
- Xingdi Yuan (1)
- Ya-Qin Zhang (1)
- Yejin Choi (1)
- Yen-Hsiang Wang (1)
- Yining Mao (1)
- Yoshua Bengio (1)
- Yurii Nykon (1)
- Zhijian Lin (1)
Sources
- APA Style (1)
- Aprende (1)
- arXiv (4)
- E-aprendizaje.es (1)
- EDUCAUSE (9)
- Educaweb (1)
- El País (1)
- ElDiario.es (3)
- Enrique Dans (1)
- Enseña (1)
- eSchool News (1)
- Explora (1)
- Formación ELE (1)
- Generación EZ (1)
- GP Strategies (1)
- HigherEdJobs (1)
- IE Insights (1)
- IEEE Access (2)
- INTEF (1)
- Intellias (1)
- J.P.Morgan (1)
- Joshbersin.com (1)
- Kahoot! (1)
- La Tercera (1)
- Learning Letters (2)
- Medium (1)
- Meta AI (1)
- Meta Research (1)
- MLA (1)
- Multiplex (1)
- New York Times (14)
- Open AI (1)
- OpenAI (2)
- PsyPost (1)
- RTVE (1)
- Russell Group (1)
- Science (1)
- TED (5)
- TEDx (1)
- The Atlantic (1)
- The Conversation (4)
- The Rundown (1)
- The Verge (1)
- ThinkBig (1)
- Twitter (26)
- Xataca (1)
- Youtube (6)