Professor Rose Luckin's keynote speech at the Cambridge Summit of Education asked whether education is ready for AI, and suggested how educators can help future learners outwit the robots.
"If we get [it] right, it stops being about individual pieces of technology and it starts being about an intelligence infrastructure that empowers the social interaction important for learning and our own understanding of ourselves."
Professor Rose Luckin | Professor of Learner Centred Design at University College London, UK
One only has to look to the phenomenal amount of investment in artificial intelligence (AI) and education in countries like China and Singapore to see that machine learning is here, is developing rapidly and is already changing the use of technology in education.
Modern machine learning is certainly smart, but it cannot learn everything. We as educators therefore need to ensure that our human learners develop a rich repertoire of intelligent behaviours and advanced thinking.
Why is artificial intelligence in the news so much, and why is it having a really big impact in many areas including education? Because of the development of artificial intelligence that can learn.
If you take this ability to build AI systems that can learn and you combine it with big data and modern computing power - that creates the ‘perfect storm’ that we're now in where AI is having a huge impact on our lives and increasingly education.
An important thing to remember when it comes to artificial and human intelligence is that the things we take for granted as humans are fundamental to our human intelligence in an AI enhanced world, and actually they're much harder to automate.
They're perhaps the things we need to pay more attention to, because human intelligence and artificial intelligence are not the same thing. We don't necessarily want to focus in our education systems on the things that we can automate through artificial intelligence.
There's lots of talk about ‘artificial general intelligence’, an imagined machine that can function in the world as well as a human brain, but we are nowhere near that, because a human brain is not subject specific. For example, AlphaGo can't diagnose cancer, and AI systems that might be able to diagnose cancer can't play Go. We've got very specific, smart AI systems.
Implications of AI for education
What does this all mean for education? I think it's useful to think about three routes for AI to impact education:
Using AI in Education to tackle some of the big educational challenges
Educating People about AI so that they can use it safely and effectively
Changing education so that we focus on human intelligence and prepare people for an AI world
These routes are not mutually exclusive. They are highly interconnected: if we use AI effectively in education, we will help people understand AI more. As more people understand it, they will see why things need to change.
One example of AI in education comes from companies that specialise in language learning, helping people learn a language not just in terms of syntax and semantics, but also language within its culture. Companies such as Alelo use artificial intelligence to individualise the way they teach somebody a language, and can provide useful individual feedback to educators, parents, and to students themselves.
There are some nice uses of AI involving social learning. Chatterbox matches refugee language experts with students who want to learn that language, but it tries to make matches in a way where there's more than the language learning in common and provides nicely tailored instructional resources.
There are many ways to collect data that can be useful for building AI systems to support teaching and learning. To process that data, we need understanding from what I call learning sciences: cognitive psychology, neuroscience, education, sociology, and language learning.
If we get that right, it stops being about individual pieces of technology and it starts being about an intelligence infrastructure that empowers the social interaction important for learning and our own understanding of ourselves.
We absolutely need people to understand AI, but that doesn’t mean understanding how to build AI. It means that everybody understands how to use it and, importantly, understands the ethics. This is something I believe passionately that we must focus on now. Every time we think about processing data, every time we're thinking about using any kind of AI, we must be thinking about ethics in terms of the way that data is processed. What happens? What does the learner receive? What happens if there are ‘deep fake’ videos out there that look like me, for example, but are not me?
That's why we launched the Institute for Ethical AI and Education, because although there’s a lot of work being done with data, AI, and ethics, nobody was holding the education piece. And education must be the most important thing. It's something we want people to interact with throughout their lives.
Education is crucial because regulation will never be enough. Regulation can't keep up with technology, or with people who want to do harm. But this is the hardest bit, and this is where the social element really comes in.
Changing education for an AI world
We're now in the early stages of the fourth industrial revolution. There's no question about that. It's happening all around us. There are lots of predictions about robots taking over human jobs, and there is a consensus that areas like transportation and storage are most vulnerable, but we're not about to automate education. Thank goodness for that, because education is fundamentally social. But that doesn't mean it won't be disrupted enormously. It's just that we won't replace educators with technology, at least if we get it right.
Another consensus is that if you have a higher level of education, you are less vulnerable when it comes to the fourth industrial revolution: you’re better educated, more resilient, better equipped to deal with changes. The problem is we don't know what the changes will be; we can predict some things quite accurately in the near term, but there's a lot of uncertainty as well. It's a bit like driving a car in fog along a road you don't know. How useful is a map? Not very.
What's the human equivalent of that car that I need in the fog? I think it's a combination of different elements of our complex human intelligence:
Interdisciplinary academic intelligence, because many of the problems we face in the world today need an interdisciplinary approach
Social intelligence
Meta-intelligence:
meta-knowing
meta-cognitive
meta-subjective
meta-contextual
perceived self-efficacy
Meta-intelligence is where I think the most attention needs to be paid. AI does not have it; AI does not understand itself. Humans can. We don't always understand ourselves very accurately, but we can learn. AI really struggles with social intelligence and meta-intelligences.
Meta-knowing intelligence:
Understanding the nature of knowledge. Many people don't understand what knowledge is, where it comes from, or what makes good evidence. Knowledge is something you construct. It's relative. There are very few things that are certain. It’s important to focus on our relationship to knowledge.
Metacognitive intelligence:
Understanding our own cognition: when we're distracted, when we're focusing, what do we know accurately and what don’t we know?
Meta-Subjective intelligence:
This relates to social and emotional intelligence: the way our own emotional intelligence develops, and its relationship to other people and their development of emotional intelligence.
Meta-contextual intelligence:
AI finds it hard to understand context or environment. Humans can make sense of new circumstances and experiences relatively easily, but AI cannot.
Perceived self-efficacy:
Understanding what goals we should set ourselves, how likely we are to achieve them, how motivated we are, who can help us, and how we can be effective in the world.
If we build it right, if we get the right data, and we apply what we know about human learning to the algorithms that process it, then we really can fly.
UCL Knowledge Lab: Collaborative problem solving
As an example, at the UCL Knowledge lab we’ve been looking at collaborative problem solving.
We know from the learning sciences that when the hands of students in a group are either placed on the same object or engaged with each other, and when their gaze is either directed at the same thing or at each other, then that's a signifier that collaborative problem solving is going on.
We did a series of studies where we collected data including eye gaze and hand movement, and we recorded video which we then gave to a human expert observer, asking her to identify where groups were collaborating effectively. We mapped our data from the hand movements and eye gaze onto her analysis to see where it matched, so as to further identify what collaborative problem solving looks like.
As we analyse this data, we learn about the kinds of things that could be part of a classroom environment to give teachers information about when they should help a particular group, or when a group's doing well - and then give the groups information to reflect on the process themselves. If you can imagine this triangulated with masses of other signifiers, you can see how that intelligence infrastructure can be very powerful.
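The idea of mapping sensor signals onto an expert's judgement can be sketched very simply. The following is an illustrative toy example, not the UCL study's actual pipeline: it assumes hypothetical per-time-window boolean features (`shared_gaze`, `joint_hands`) and flags a window as "collaborating" when either signifier is present, then measures agreement with the expert observer's labels.

```python
# Toy sketch: compare a rule-based collaboration signal (derived from gaze
# and hand-movement features) against an expert observer's labels,
# one boolean value per time window. All data here is made up.

def collaboration_signal(shared_gaze, joint_hands):
    """Flag a window as 'collaborating' if the group shares gaze or hands."""
    return [g or h for g, h in zip(shared_gaze, joint_hands)]

def agreement(predicted, expert_labels):
    """Fraction of time windows where the signal matches the expert."""
    matches = sum(p == e for p, e in zip(predicted, expert_labels))
    return matches / len(expert_labels)

# One entry per window: did the group look at the same object / each other,
# and were their hands on the same object or engaged with each other?
shared_gaze = [True, True, False, False, True]
joint_hands = [True, False, False, True, True]
expert      = [True, True, False, True, True]  # expert observer's judgement

signal = collaboration_signal(shared_gaze, joint_hands)
print(signal)                     # [True, True, False, True, True]
print(agreement(signal, expert))  # 1.0
```

In a real system the boolean features would themselves be derived from noisy eye-tracking and motion data, and many more signifiers would be triangulated, but the basic loop of "derive a signal, validate it against human expertise" is the same.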
Educating developers and teachers
At UCL Knowledge Lab we have a program called Educate, which is all about bringing together EdTech developers, many of whom are using AI, with the people they're developing for: teachers and learners.
We've worked with some 250 companies in the last two years to try and get the conversation going: as an EdTech developer, are you addressing a real learning need or not? And if you are, do we have evidence that it's working?
We need to help our educators better understand AI through working with developers, and we need to help developers understand teaching and learning.
I think if we can get that right, then we can start to follow those three routes of AI into education.
Rose Luckin is Professor of Learner Centred Design at UCL, and the Founder and Director of EDUCATE, the leading research and business training programme for EdTech startups in London.