Why Children are the Original Large Language Models

You may know that before I discovered teaching, I started out in life as an investment banker. Well, I was a lowly analyst really, but I was fortunate in that I dealt with high-tech advisory work for some interesting clients. Someone recently recommended a podcast which continues to scratch this itch and has become something of a weekly ritual for me. The ‘All In’ podcast is a useful way of staying connected with some of the key themes emerging from Silicon Valley.

This week, the team invited Sam Altman to join them in conversation, and over an hour or so they dissected every aspect of the Artificial Intelligence (AI) landscape and his rollercoaster ride with OpenAI. While Sam is fascinating to listen to and incredibly insightful, he said two things which really made me take note. The first was his genuine interest in the potential for AI to transform the educational landscape. As I mentioned in last week’s blog, the opportunity for AI to reinvent learning is enormous. As Sam acknowledged, it will probably take the form of some kind of AI tutor, capable of delivering personalised learning and discovery opportunities for students. We’re still some way from this end point, as it will require advanced inference capabilities, but the future has been imagined and I am certain a well-funded tech entrepreneur will find a way of realising that potential in the near future.

The second point Sam made was around feeding these AI entities with vast banks of data. If you have followed the progress of many of the current AI tools, most are based on large language models (LLMs), which mimic intelligence by using probability to suggest the words and sentences that answer prompts or questions. The latest ones are capable of creating images and even movies, but much of their underlying ability rests on vast trawls of human-written language.
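For readers curious about what ‘using probability to suggest words’ can mean in practice, here is a deliberately tiny sketch. It is not how systems like ChatGPT actually work (they use neural networks trained on enormous text collections), but it illustrates the basic idea of predicting the next word from how often words have followed one another before; the corpus and names are purely illustrative.

```python
# Toy illustration of next-word prediction: count which word follows which
# in a tiny made-up corpus, then turn those counts into probabilities.
# Real LLMs use neural networks over vast text collections; everything
# here is a simplification for illustration only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Count how often each word follows each other word (word-pair counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, count = counts.most_common(1)[0]
    return best, count / total

print(predict_next("the"))  # ('cat', 0.5) — 'cat' follows 'the' half the time here
```

Scale that counting idea up by billions of documents and replace the simple counts with a learned model, and you have the rough intuition behind the tools now making headlines.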

As Sam was talking, his enthusiasm for finding better and more powerful machine learning solutions was evident – ‘AI can self-drive’, ‘AI can diagnose a disease’, ‘AI can even transform scientific discovery’. However, as a teacher and father, I see a long-term societal risk: as AI becomes smarter, the demands placed on children are diluted and their development is impacted. When ChatGPT emerged onto the scene 18 months ago, everyone was worried about nuclear armageddon or killer robots, but the nagging concern I have is for children. I worry that excessive mechanisation could lead to huge challenges for society, some of which are perhaps starting to play out in our children’s generation with rising levels of obesity and increasing mental health concerns. Perhaps the causal link is tenuous at the moment, but if some of these long-term trends continue, it may require legislative intervention to correct.

So, while I remain a tech-friendly Head Teacher, I am fascinated to see how these trends will develop and what the impact of an AI tutor might be for learning. As I said last week, the potential is huge; I just don’t know which way the impact is going to go.


David Paton

Head of Radnor Sevenoaks
