PROF SANJAY SARMA
Prof Sanjay Sarma is the CEO, President, and Dean of the Asia School of Business (ASB). He is also a professor of Mechanical Engineering at the Massachusetts Institute of Technology (MIT) and has a courtesy appointment at the MIT Sloan School of Management. He co-founded the Auto-ID Centre at MIT, pioneering modern RFID's technical concepts and standards.
Prof Sarma received his PhD from UC Berkeley, his MS from Carnegie Mellon University and his undergraduate degree from the Indian Institute of Technology. His expertise includes RFID, sensors, manufacturing, autonomy, AI, sustainability and innovation. He has authored over 150 publications and played a key role in India’s Aadhaar unique ID system. Prof Sarma’s contributions have been recognised with multiple awards, including the MIT MacVicar Fellowship and NSF CAREER Award. He has been honoured by Business Week, Fast Company, and RFID Journal for his innovations.
In addition to his academic achievements, Prof Sarma has been influential in education, helping establish Singapore University of Technology and Design (SUTD), serving as the first Director of Digital Learning at MIT and as the Vice President for Open Learning at MIT. His initiatives include MIT Open Learning, MicroMasters, the Jameel World Education Lab, the MIT Integrated Learning Initiative and MIT xPRO.
Prof Sanjay Sarma and three other professors from MIT published a white paper entitled “An Affordable New Educational Institution (NEI)”. In that paper, they proposed a new model for tertiary education institutions which is more aligned with today’s social needs and technological environment. Our chief editor interviewed Prof Sarma in his office at the Asia School of Business in Kuala Lumpur to talk about the white paper and the impact of AI on education.
Your white paper on “affordable NEI” was written in an American context. How applicable are the proposed design concepts to Southeast Asia?
We wrote the paper in the US context because MIT is a US university, and we decided to focus on the local context. Furthermore, our donor was the former governor of Illinois. He was concerned about US competitiveness and about the cost of education in the US. That said, I believe the paper applies even more in Southeast Asia, and it is relevant in two ways:
First, we know that tuition debt is a massive problem in the US. We are aware of it because the “consumers”, who are the students and their families, bear the high cost of education. In most of the rest of the world, including Southeast Asia, the costs are mostly hidden because the governments bear most of them. For instance, I went to IIT (Indian Institute of Technology), a top school in India. The education there is excellent, and it has a lovely campus, but I paid very little tuition. Providing such high-quality education is very expensive, but heavy government subsidies bury the true costs. The rest of the world has the same cost problem as the US, but the costs are borne by the government and, ultimately, in a more diffuse way, by the taxpayers.
The second point is that we have to address education at a fundamental level in the age of climate change and technology change. We really have to ensure that students understand what they are learning and can apply what they have learned. The cost-benefit analysis of higher education is essential regardless of whether you are in the US or the rest of the world, and perhaps more so in poorer countries because the Western model is even less affordable on a relative basis.
To make the cost-benefit analysis work, we must improve the delivery and reduce costs. This can be achieved by applying some of the concepts we proposed.
Your white paper talked about reducing research to reduce costs and offering micro-credentials to improve delivery. Can you please elaborate on these ideas?
Absolutely. My view on research is provocative. MIT does fantastic research; you can tell from the number of patents, new companies, world-changing technologies, etc., that come out. MIT can improve but should not change, and MIT will continue to be MIT. However, the research at many other universities is not as successful because they don’t have MIT’s equipment, incentive system and impact. That doesn’t mean those universities shouldn’t do research, but they don’t have to replicate what’s happening at MIT. Instead, they should focus on teaching and make it spectacular because young lives are at stake. Research competitiveness should not detract from this more basic duty. That’s the basic idea.
The issues in Asian schools are different from the issues in America. The teaching here is often based on textbooks which are written elsewhere. Learning is not connected to future jobs because students learn concepts that are out of context or out of date. They then graduate with one set of skills, while the fast-moving world demands another set of skills.
WE’RE SAYING, “NO, IT’S LIKE LEGO. YOU GET ONE PIECE OF LEGO, GO TO WORK, FIND OUT WHAT MORE YOU NEED TO LEARN, THEN COME BACK LATER TO GET THE OTHER PIECES.”
So, in the white paper, we proposed to break a university degree into micro-credentials. That means you can be a little bit more modular in your learning; you can use your micro-credentials to work for a couple of years before you come back to learn more. In today’s colleges, especially in Asia, where I grew up, there is no such thing as leaving gracefully during your undergraduate years. It’s called a dropout and viewed as a failure. We’re saying, “No, it’s like Lego. You get one piece of Lego, go to work, find out what more you need to learn, then come back later to get the other pieces.” We legitimise the fact that you must go to work so that you’re not diverging from reality during the four years you spend in college.
Overwhelmed and underfunded
A Lumina Foundation-Gallup 2024 State of Higher Education Study found that 35% of students enrolled in a post-secondary programme in the US have considered stopping out in the past six months. The primary concern among students is emotional stress (54%), followed by personal mental health reasons (43%) and cost of higher education (31%).
Photo: Aaron Hawkins / iStock
To some extent, MOOCs (Massive Open Online Courses) already allow people to do that. What is your view on MOOCs?
I’ve had some second thoughts about MOOCs over the last decade. While I was deeply involved in MOOCs for many years, I’ve recognised a significant issue, which I discussed in my book. Earlier on, when we were building the infrastructure of MIT Open Learning, we noticed a fundamental difference between learning abstractly and practically. Abstract learning works well for older learners like you and me because our background and experience help us contextualise and apply new knowledge. However, for younger people without relevant experience, learning abstract concepts alone does not translate into practical application.
For example, if I’ve done some accounting and subsequently take an accounting class, I know how to apply it, and insights fall into place. However, without the context created by that prior experience, the new knowledge has nowhere to situate itself.
One fundamental thing we did at MIT was associate MOOCs with the flipped classroom idea. We use MOOCs for the abstract, but the classroom is where you do things hands-on. While MOOCs deliver abstract knowledge, we need live sessions for coaching and for people to interact with each other as part of the learning process. MOOCs are only a puzzle piece but don’t solve the whole problem.
WORKING WITH YOUR HANDS AND SOLVING PROBLEMS GIVES YOU INSIGHTS. HENCE, THE FLIPPED CLASSROOM APPLIES EQUALLY TO SOMETHING AS ABSTRACT AND PURE AS MATHEMATICS.
Yes, I can understand this if we are talking about some “applied” courses like engineering and programming, but what about more theoretical courses like natural sciences or mathematics?
Let’s take the most theoretical topic: mathematics. Today, we think of it as purely abstract, but historically, mathematicians like von Neumann were trained by teachers who brought mathematics into physical reality. Today, we teach mathematics abstractly, but great mathematicians often think of it in more tangible terms.
For example, the concept of vectors is taught abstractly. But in reality, you apply vector principles every time you walk. If you walk three steps north and then two steps east, the distance you have covered is the hypotenuse: the square root of three squared plus two squared. Even in math, it is very possible to bring in practical insights.
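As a quick check of the arithmetic in the walking example, here is a minimal Python sketch (the variable names are illustrative):

```python
import math

# Walking 3 steps north and 2 steps east: the displacement is the
# hypotenuse of a right triangle, sqrt(3^2 + 2^2).
north, east = 3, 2
distance = math.hypot(north, east)
print(round(distance, 2))  # → 3.61
```

So two legs of three and two steps combine into a displacement of roughly 3.6 steps, which is exactly the vector addition being described.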
Working with your hands and solving problems gives you insights. Hence, the flipped classroom applies equally to something as abstract and pure as mathematics.
MOOCs have contributed to making higher education more inclusive and affordable. Can AI have a similar impact? If so, in what ways?
I was not a good tennis player when I first learned to play tennis. My best tennis partner was the wall behind my house. I hit the ball against the wall, and it always bounced back. We called it wall tennis when we were growing up.
My point is that AI can be something, but it can’t be everything. Among other things, it can be a great wall. If you want to practise, AI can do some minor coaching. It can give you hints; it can be an ally for learning. It’s not a human being and will never be a human being. However, for most learners who do not have a human ally or a personal coach, it is a fantastic wall that hits the ball back.
Beyond grades
Self-Determination Theory (SDT) posits that fostering autonomy, competence, and relatedness in educational settings enhances intrinsic motivation, leading to greater student engagement and academic achievement. Intrinsically motivated students find joy and personal satisfaction in learning, which improves their overall well-being and academic outcomes.
Photo: Images By T.O.K. / Alamy Stock Photo
So, you’re saying one can bounce ideas off AI and learn from its responses?
Let’s consider teaching algebra, specifically quadratic equations. When students struggle to solve problems, AI can offer hints and guidance to help them progress. This relates to a concept in the psychology of learning known as Vygotsky’s zone of proximal development. Vygotsky, a brilliant Russian psychologist of education, proposed that effective learning happens when a learner is guided by someone slightly more advanced than they are. For instance, if you’re a novice tennis player, you won’t benefit much from training with someone as skilled as Roger Federer. Instead, it would be best to have a coach who is slightly better than you, who can gauge your current level and provide the right amount of challenge to help you improve. This coach knows how to calibrate their guidance to keep you motivated and progressing without overwhelming you. If the skill gap is too wide, the learner will lose hope.
HUMAN BEINGS MUST FOCUS ON THE MOTIVATION IN LEARNING — THE ‘WHY’ AND THE ‘WHAT’. WE BRING CURIOSITY, THE NOVELTY, THE APPLICATIONS, AND THE RELEVANCE INTO THE LEARNING PROCESS.
For example, if I ask you to invert a matrix, you might have done it at some point in your life, but you probably don’t remember that you need to calculate the determinant and cofactors first. An AI could assist you by prompting, “Calculate the determinant.” If you make an error, the AI can identify it and ask, “Do you see the error?” If you don’t, it can guide you and help you understand where you went wrong. This kind of minor coaching leads to those ‘aha’ moments, where you learn more effectively through discovery. Although this is not major coaching, it’s still crucial because it is where people get lost. AI can assist in these scenarios, providing the support needed to overcome small obstacles.
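The determinant-then-cofactors procedure he alludes to can be sketched for the simplest 2×2 case in Python (a minimal illustrative example; the function name is an assumption, not anything from the interview):

```python
def inverse_2x2(m):
    """Invert a 2x2 matrix following the hint sequence:
    compute the determinant first, then the cofactor (adjugate) matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    # Adjugate of [[a, b], [c, d]] is [[d, -b], [-c, a]];
    # dividing it by the determinant gives the inverse.
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inverse_2x2([[4, 7], [2, 6]]))  # → [[0.6, -0.7], [-0.2, 0.4]]
```

An AI tutor walking a learner through these two steps, and flagging a sign slip in a cofactor, is precisely the "minor coaching" being described.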
AI can act as that coach in education that pulls you along. By offering tips, hints, and incremental support, AI can bridge the gap between what students can do independently and what they can achieve with guidance. This tailored assistance keeps students engaged and helps them develop their skills more effectively.
This sounds helpful, but I have seen students who Google the answers to their homework assignments without even thinking through the questions…
Yes, AI’s strength lies in aiding with specific tasks and offering guidance and support when needed. AI is not good at motivating you or teaching you entirely new concepts. AI can only do so much if a learner has no intrinsic motivation to learn.
In fact, AI has no solution for the lack of intrinsic motivation to learn, and thank heavens for that. You and I, as human beings, still have a role to play. Human beings must focus on the motivation in learning — the ‘Why’ and the ‘What’. We bring curiosity, the novelty, the applications, and the relevance into the learning process. For example, when we teach quadratic equations, we can point at the trajectory of a ball and say, “See, that’s a quadratic”.
This kind of contextual understanding and inspiration is something AI cannot provide. Humans create something called intrinsic motivation, which is essential for effective learning. Once you have intrinsic motivation, AI has the potential to become an assistant. Unfortunately, we have it backwards now. We claim that we are leveraging AI, but we aren’t fully leveraging our strengths and doing what humans are good at. We need to get AI to do the things AI is good at. We must find our role as human beings and reinforce the importance of human involvement in AI-assisted learning.
Shaping innovative pedagogy
Formerly directing MIT’s collaboration with Singapore, Prof Sarma has been pivotal in shaping the distinctive pedagogy model at the Singapore University of Technology and Design (SUTD). This model advocates for a multi-disciplinary, human-centric, and design-focused curriculum, adopting an “outside-in” approach that prioritises industry needs and addresses contemporary global challenges. Additionally, SUTD offers the world’s first programme combining design and AI.
Photo: Edwin Tan / iStock
You mentioned the “What” and the “Why” of learning; let’s discuss that. An example I can think of is writing. We don’t write that much anymore. Some young people I know even try to avoid typing — they ask Siri by talking into their phones instead of typing their questions on Google. But we are still teaching children to write as part of literacy. On the other hand, we have the more abstract “21st-century skills” like critical thinking and creativity. Knowing that AI will become even more capable of composing our reports and solving many of our problems in the near future, isn’t there a need to rethink what skills our children should learn in their education?
There’s a lot of research on this topic. Let’s start with the multiplication tables. The question is, should we memorise multiplication tables? Strangely, the answer is “yes”. We engage in everyday reasoning all the time, such as calculating the space needed per student on the campus of the Asia School of Business. My reasoning slows down if I must use a device for every calculation. Being able to calculate quickly in my head accelerates my reasoning significantly. That’s called optimisation. Memorising fundamental concepts like multiplication tables and basic division enables faster processing, much like how computers keep certain tasks in the CPU for speed.
WE MUST BE EXTREMELY CAREFUL IN DECIDING WHAT HUMAN BEINGS SHOULD LEARN AND WHAT WE CAN RELY ON SOMETHING LIKE AI TO DO.
There’s a fine line between what we choose to memorise and what we rely on devices for. Multiplication tables are valuable to memorise because they are part of your fundamental reasoning. Now, should you be forced to remember what the capital of Sabah is? I was there a couple of days ago; it’s Kota Kinabalu. It’s interesting if you’re interested in geography, but it is not fundamental to everyday reasoning.
About ten years ago, a math professor at Stanford told the California State Board of Education that teaching algebra below a certain age was unnecessary, which became a big controversy. I disagree with her; I think we should teach algebra at an early age because it’s a part of reasoning. You must draw a fine line to determine what is fundamental and what helps reasoning.
As for writing, research shows that writing and drawing are very important for learning. When you write, you learn better than if you type on a computer. If you’re designing something, drawing it by hand is better than drawing it with CAD (Computer-Aided Design). Part of that has to do with the engagement of the motor cortex when one draws by hand. Besides writing, drawing and math tables, we have to look at thousands of other things to figure out what humans should learn and what we should relegate to robots. This is the research some cognitive psychologists are doing right now.
Now, we must be extremely careful in deciding what human beings should learn and what we can rely on something like AI to do. I worry about it, and I hear a lot of different opinions. We must be careful what we take away from our education because you may take away something very fundamental to human intelligence. For example, if every one of us had a Segway scooter and we could go anywhere on it, would you stop teaching kids how to walk? No, you would not because, intuitively, walking is important — it gives us the freedom to operate in our space. There are cognitive skills that are equivalent to walking and running in physical skills. You don’t take them away no matter how much automation is available because they’re fundamental to being human in performing human reasoning and doing the human things that AI cannot do.
In our existing system, students spend 12 to 13 years in primary and secondary education, and 17 to 20 years if you include higher education up to a doctorate. However, we don’t teach them enough of the “Why”. They also don’t learn enough about the brain, cognitive science, reasoning, logic, synthesis, or invention. In fact, our education system is designed to take those things away from them. We even have expressions like “curiosity killed the cat” to discourage them from being curious. As a matter of fact, the education system takes away the motivation to learn. If you train them to be robots, they will use a robot when a robot shows up. And when real robots show up, they lose their jobs. We must fundamentally rethink what we’re telling young people about life. We haven’t had the courage to do that.
Where mind meets hand
The Action Learning pedagogy at Asia School of Business immerses students in real-world business challenges. By collaborating with companies, students apply classroom theories to practical problems, fostering critical thinking, problem-solving, and leadership skills. This hands-on approach bridges academic learning and industry practice, preparing students for complex business environments.
Photo: Wee Hong / CC BY-SA 4.0
Compared to the previous waves of technological innovation, namely IT and the Internet, AI is happening much faster. Would our education have time to respond to it before it is too late to do anything about it?
There are two ways to approach AI. One is to continue the old ways and have a dysfunctional system for another ten to twenty years. The other approach is to let AI force us to rethink what we do fundamentally and have the courage to reexamine why we do it.
In fact, AI is not happening suddenly. It has been creeping up on us for ten years. Large Language Models (LLMs) are a year and a half old; I knew two to three years ago that LLMs were a big deal. The problem is we only react when we are in pain. It’s easier to sell a painkiller than a vitamin, but we are not feeling the pain yet.
Wartime is terrible. However, wartime is also an excellent opportunity to be creative because you’re forced to go back to the basics. Perhaps we should treat this moment like wartime when we examine the fundamentals and step back to reinvent what we do. We’re not doing that yet; we’re drifting along. Let’s not forget we also have climate change looming in the background. Ocean levels will rise, and beaches and islands may vanish suddenly. We have some crises coming up, so this is the time to reinvent and rethink education. Unfortunately, we are still making incremental changes to a broken system.
Are you saying we should expect chaos before things settle?
That’s a human behaviour issue. We wait for chaos before we react. Take COVID-19, for example. Experts had warned about pandemics for twenty years, yet we didn’t change laws to cope with them or adequately prepare for them. We spent more on making movies about pandemics than on preparing for them. It was sheer coincidence that Moderna and Pfizer could spin out those vaccines. But now, we are acting as if the pandemic didn’t happen. In the case of AI, I hope we have the courage to get proactive because we see another crisis coming.
Your white paper on NEI proposed some fundamental changes to our education system, but it was written before ChatGPT arrived. Do you plan to update the paper to take into account AI?
I did consider updating the white paper but decided not to initiate that conversation. If anything, AI makes the white paper even more relevant. For example, should one still spend four consecutive years in college with AI happening so fast? This underscores the importance of micro-credentials. The coaching and the MOOC parts proposed in the white paper became easier with AI, while the flipped classroom became even more valuable. The white paper anticipated AI and used an AI micro-credential as an example of team teaching. Although the rapid progression of AI from 2021 to 2022 surprised me, the foundational concepts of vectorising words were already known to us. It was partly by coincidence and partly due to our forward-thinking approach that we took AI into consideration.
So, if we have an opportunity to start a fresh university in Southeast Asia, is that white paper still applicable?
Oh, a hundred percent! If you started a new retail store today to sell batiks, would you buy real estate and set up physical stores? Or would you set up an online store with a few fitting rooms where people can try your products? You would most likely take the latter approach and avoid traditional retail. Similarly, the approach we proposed is flexible. It uses better pedagogy and plays to what people are good at by focusing teachers on what AI cannot do. I wouldn’t do it any other way.
And AI will have a tremendous societal impact if we do this right, specifically an immense impact on jobs. If you put the effects of AI and climate change together, you will see a massive shift in jobs and a massive need for new skills, making old skills obsolete. AI is both the problem and the solution because it accelerates learning; coaching is, therefore, very important. AI will not replace us humans, but we need to put humans more on the human side and AI more on the AI side in the learning process. If we are so good at teaching machines, we must up the ante on teaching human beings.
INSTEAD OF TREATING EDUCATION AS A BLACK BOX, WE NEED TO MAKE IT TRANSPARENT WITH BACK-AND-FORTH DIFFUSION, ESPECIALLY NOW WITH THE ADVENT OF AI.
Talking of jobs, we know that MOOC providers like edX and Udemy have been helping companies train their employees, either because the required skills are not taught in schools, or they don’t teach them fast enough. I guess we will see that in AI-related skills, too.
We call this model of flexible, cost- and time-efficient learning Agile Continuous Education (ACE). If you go to ace.mit.edu, you can read about it. We see a college degree as a continuous process with no end date. It is like working out; you must go to the gym three times a week for the rest of your life. You can’t just go to the gym once and say, “I’m fit”.
Our traditional education is too monolithic. The systems are basically monastic because they came from Christian monasteries in Europe. In a monastery, the Bible was used as the textbook. Education then involved students entering a black box to learn the unchanging texts. The teachers were priests who had mastered the Bible, which had remained static since it was written nearly 2000 years ago.
That model no longer works today. Today’s knowledge continuously evolves, and our education systems need to reflect that. Instead of treating education as a black box, we need to make it transparent with back-and-forth diffusion, especially now with the advent of AI. Professors and students should engage in these back-and-forth exchanges, and the “bible” of knowledge should be continuously updated. Otherwise, we will keep having this divergence between the “product” of education and what the market needs. We need a much more diffusive connection between the process of education and reality.
This reminds me of when I was a teaching assistant in graduate school. That was when universities started to digitalise teaching and learning, and teaching assistants were supposed to help the professors assign and grade homework on laptop computers. We discovered then the biggest hurdle to such process improvement was the professors’ reluctance to learn the new tools…
It takes a lot of work to change an established institution. We can do it here at ASB because we are young, but it’s not easy in most places. Faculty often come in with a social contract to remain in their roles for the rest of their lives. That’s why we’re starting a new system with a new understanding of the faculty members’ role. You will focus on educational outcomes. Research will be 20%, not 80%. Your primary focus will be on cutting-edge pedagogy; you will learn and be trained and retrained. It’s like medicine; you have to learn the latest — if there’s a new chemotherapy drug or surgical technique, you have to learn it. We must put educators in that model and tell them they are in the business of creating talent and human capital. We need to change their role in society.
Many professors do research as part of their tenure, which I like, but the teaching part is also important. We need to fundamentally change the profession and the mindset. It’s not an unknown thing; it has been done before, like how the Japanese introduced lean manufacturing. Similarly, Microsoft transitioned from being a Windows and Microsoft Word seller to becoming a cloud company. Satya Nadella made that change. He introduced the concept of a growth mindset, which he detailed in his book. We need to apply the same growth mindset to education to keep pace with the evolving world.
Are you on the right side of the algorithm?
The International Monetary Fund reports that AI will impact 40% of global jobs. Roles with a “high complementarity” to AI, like surgeons and lawyers, are safer, while those that could be fully automated by AI, like telemarketing, face higher displacement risks. Low-exposure occupations, including dishwashers and performers, are less affected by AI advancements.
Photo: KSChong / iStock
While there is an urgent need to make that mindset change timely, awareness of that need is not uniform across our societies. We have countries like Singapore and institutions like MIT on one end of the spectrum, but many others are less aware of the potential impact of AI. Should we be concerned with such awareness gaps?
Look, we still need to grow food and maintain certain essential skills — those won’t change. However, some very fundamental aspects of life will shift due to AI. Interestingly, AI’s impact is quite paradoxical. Jobs are less at risk in poorer countries where manual labour is prevalent. Countries with a lot of knowledge workers are more at risk because AI tends to target white-collar jobs more. For instance, countries like India and the Philippines are vulnerable because they are big in programming and business process outsourcing. There are many banks here in Malaysia. Many jobs in those banks could be eroded by AI, too. On the other hand, we expected AI to replace Grab drivers’ jobs with autonomous driving, but that has not happened. The farmers are also less at risk.
These are examples of Moravec’s Paradox: jobs that humans find easy are hard for AI, and jobs that humans find hard are easy for AI. While we need to change education systems everywhere, it is in middle-income and higher-income countries where both the urgency and capability exist — but they need to act fast.
The “CRAFT” of AI
Concerns about AI misuse are rising alongside the increasing adoption of AI in education. This has prompted educators to teach students how to use technology ethically. Initiatives like Stanford University’s CRAFT programme offer free AI literacy resources for high school teachers in various disciplines, encouraging students to explore, understand and critique AI.
Photo: Brothers91 / iStock
AI IS SO GAME-CHANGING AND ADVANCING SO RAPIDLY THAT THE GOVERNANCE OF AI MUST MOVE JUST AS SWIFTLY. IT’S LIKE STEPPING INTO A ROCKET AND HAVING TO LEARN TO CONTROL IT.
Considering the prevalent concerns about AI’s inherent biases from training data, how can we address the risk of bias when implementing AI in an educational setting?
Bias risk is only one of the many risks we need to deal with. AI is so game-changing and advancing so rapidly that the governance of AI must move just as swiftly. It’s like stepping into a rocket and having to learn to control it. We need technical knowledge and a broad understanding of how AI works and how it is used.
For example, the European Union enacted a new law on AI based on risk level just months ago. It says how much AI needs to be governed depends on how risky the application is. For example, autonomous driving is dangerous because someone might get killed if something goes wrong with its AI. In fact, Tesla has been sued a few times because of its self-driving system. A chatbot used by a travel agency may be less dangerous, but it could cause someone to miss a flight, which could be dangerous if it pertains to a health issue. So, risk profiling is a fundamental element in the governance of AI.
If I go to someone at one of the banks in KL and say, “Hey, listen, you need to think about your chatbots; here are the risk levels, and here are the potential damages.” They will probably have no idea how to incorporate these things into their governance system. These are things that I think we need to start working on.
Misinformation and biases disseminated by AI are a primary concern of many people today, especially when AI is used in education. Do you think AI will one day learn to self-audit and fact-check to avoid “hallucination” and become unbiased?
I’m not so sure about that. The jury is out for two reasons. First, we must remember it is a large language model rather than a knowledge model. This is why we encounter so much hallucination — it’s essentially generating gibberish at times. When I first saw a hallucination a few years ago, what surprised me was that it made sense. Some people treat hallucination as a feature because of its creative and exploratory nature, but it’s actually a bug that turns out to be quite useful sometimes. However, we must remember AI does not even do reasoning because it’s just a language model for now.
People are still working on two additional dimensions. One of them is reasoning. This involves chains of analysis and feedback loops. For example, when AI adds two numbers, it does pattern matching. Only recently has it started to understand the algorithm every child learns for addition — carrying numbers and so on. This is a basic form of reasoning.
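That schoolbook addition algorithm, working right to left one column at a time and carrying the overflow, can be sketched in a few lines of Python (an illustrative example; the function name is an assumption):

```python
def add_with_carry(x, y):
    """Add two digit strings the way children are taught:
    right to left, one column at a time, carrying the overflow."""
    # Pad the shorter operand with leading zeros so columns line up.
    x, y = x.zfill(len(y)), y.zfill(len(x))
    carry, digits = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        total = int(a) + int(b) + carry
        carry, digit = divmod(total, 10)  # e.g. 8 + 4 = 12 → write 2, carry 1
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("478", "964"))  # → 1442
```

Following an explicit procedure like this, rather than pattern-matching on numbers seen before, is the basic form of reasoning being described.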
Secondly, the ability to pull in knowledge and generate knowledge was just a coincidence for language models. AI could pull in fake knowledge. What is good knowledge? What is wrong information? What is an opinion, as opposed to intrinsic knowledge? We still haven’t figured out how to let AI distinguish different types of information. Adding to this challenge, there are malicious parties out there who intentionally fill the Internet with fake knowledge, some of them AI-generated. When AI scours the web, how do you ensure the data it learns from is reliable?
Over time, hopefully, we’ll have AI that can reason and be authoritative. If we have that, it’ll be like a personal angel sitting on your shoulder and giving you the proper guidance based on all the incoming information. For instance, “That’s a spam call, ignore it”; “That’s a phishing attack”; “That’s a piece of fake news”; “That article has only 70% quality”. I think we’ll end up there, which will be an excellent thing, but the jury is out on when we will get there.
Remember, what humans find easy, AI finds hard. You can look at an article and say that it’s biased, but AI will take some time to be able to do that.
First-of-its-kind AI pact
Arizona State University (ASU) has teamed up with OpenAI to explore the potential of generative AI in higher education. With full access to ChatGPT Enterprise, ASU plans to enhance coursework, tutoring and research. Key initiatives include offering personalised AI tutors, enabling students to create AI avatars for study help, and expanding the university’s prompt engineering course.
Source: Arizona State University
So, because of all these uncertainties, will it be a while before we have some fixed guidelines on applying AI in education?
No, I think it’s clear what needs to be done. First of all, we need to distinguish general-purpose AI from purpose-specific AI. Most of what we see today, like GPT, is general-purpose. It is like the Wild West, and it boils the ocean. It can provide a lot of information, sometimes incorrectly, but lacks specific reasoning because it isn’t tailored for a particular purpose.
The next wave of AI will focus on purpose-specific applications. We haven’t done much of it yet because our focus has been too broad. However, we’re starting to make progress with purpose-specific AI. For example, Sal Khan of Khan Academy has created a tutoring programme called Khanmigo. He took ChatGPT, a general-purpose AI, and tweaked it for teaching. But because it is built on a general-purpose model, it inherits all the hallucination problems, which complicates things. General-purpose models are also expensive to run, which makes Khanmigo quite costly to use. If you had a GPT designed specifically for tutoring, trained only on legitimate textbooks, it would be much more lightweight, running on fewer servers, and thus much cheaper and more accessible. You can apply this approach to every branch of education, and to medicine as well. I call it authoritative AI: AI you train with high-quality data to make it reliable, cheaper, more scalable and more accessible.
We need to make progress fast because we’re in an uncertain period where general-purpose AI is being integrated into various industries like banking and travel. Since it doesn’t work perfectly, adopting it demands a great deal of education. On the other hand, areas like education and medicine are immensely important in their own right, and purpose-specific, authoritative AI must be created for them. That will be the next wave. ∞
SOOINN LEE
HARNESSING AI IN EARLY CHILDHOOD EDUCATION
Bridging Gaps and Empowering Leaders
Today’s education increasingly emphasises critical thinking, creativity, collaboration and communication while shifting away from traditional teaching methods. Teachers now serve as facilitators, encouraging student-centred learning to foster critical thinking.
Despite the focus on these new skills, foundational abilities such as reading, writing, and math remain essential for all young learners. However, a significant challenge persists: according to the World Bank, 70% of children in developing countries fail to achieve minimum proficiency in literacy and math, primarily due to inadequate access to quality education and a shortage of teachers. Disparities in digital access between socioeconomic groups compound this issue.
Recognising these challenges, we are enhancing our existing education technology (EdTech) solutions with AI capabilities to democratise access to quality education, focusing on addressing the educational needs of underserved communities. There are three significant roles AI can play in this:
1. Enhance Learning Experiences
AI can be harnessed to enhance learning experiences by employing advanced knowledge-tracing algorithms and cutting-edge Large Language Models (LLMs). These technologies enable detailed analysis of students’ learning data, facilitating personalised adaptive learning pathways. Multi-modal AI interactions boost engagement, enabling natural interaction with digital tools and more efficient language learning. Diverse digital assets can also be developed to cater to varied learning preferences, ensuring inclusivity in educational content.
2. Support Learners with Special Needs
In inclusive early childhood education, learners with special needs, diverse backgrounds, and differing learning paces must receive adequate support. AI is crucial in identifying students’ strengths, weaknesses, learning styles and preferences. Real-time translation capabilities and adaptive learning algorithms further personalise educational experiences, promoting educational equity.
3. Redefine the 21st Century Classroom
Modern classrooms for early childhood learning can be transformed into dynamic learning environments through AI integration. For example, we are working closely with the Korean government to introduce AI-powered digital textbooks to early childhood learners. This will enhance interactions among students, teachers, and technology, serving as a blueprint for educational reform globally. We hope to expand this initiative to support digital transformation in education across diverse international contexts.
Challenges and Future Projects
Despite the promising potential of AI in education, significant challenges persist. Issues such as internet connectivity and infrastructure disparities between schools and regions impact the effectiveness of AI applications. Moreover, adapting AI technologies initially developed with a Western focus to diverse cultural and linguistic contexts remains a complex endeavour.
However, we remain optimistic about the transformative impact of AI tailored to regional educational needs. Advancements in AI technology hold promise for reducing content creation costs in multiple languages, enhancing adaptive learning methodologies, and providing real-time support for educators in the 21st-century classroom.
As AI continues to reshape education globally, it is imperative to advocate for adaptive solutions that respect and reflect the unique cultural and linguistic identities of diverse societies, particularly in Asia. AI promises opportunities to improve inclusivity and bridge educational divides, and it is up to us to realise its full potential by ensuring our young learners benefit from it as they build their foundational skills.
AI-powered digital textbooks
South Korea plans to introduce AI-powered digital textbooks in 2025 for core subjects in local elementary and secondary schools, offering personalised learning and AI-driven mentoring. These textbooks will feature interactive curriculum content and self-directed learning courseware. Additionally, AI-powered assessment and analysis tools will help teachers track student progress and tailor lesson plans. This initiative aims to reduce educational inequality and is expected to expand to all subjects by 2028.
Source: Ministry of Education, South Korea
SOOINN LEE
Sooinn Lee is the co-founder and CEO of Enuma Inc., an educational technology company changing the paradigm of basic education through digital learning. In 2019, Enuma’s Kitkit School won the Global Learning XPRIZE competition by helping children in remote Tanzanian villages to read, write and do math independently. Sooinn was named an Ashoka Fellow and a Schwab Foundation Social Entrepreneur of the Year in 2020. In March 2024, she was appointed a committee member of the Korean National Commission for UNESCO.
JULY 2024 | ISSUE 12
NAVIGATING THE AI TERRAIN