
An interview with Blake Richards

How Academia Can (And Probably Should) Adapt In A Post-AI World

Blake Richards is an Associate Professor in the School of Computer Science and Montreal Neurological Institute at McGill University and a Core Faculty Member at Mila. Blake has received several awards for his work in research at the intersection of neuroscience and AI, and his laboratory investigates universal principles of intelligence that apply to both natural and artificial agents.

ChatGPT is everywhere now – how can the advent of large language models like ChatGPT make a difference for decision makers within institutions?

Well, at least right now, most people are not seeking to use these models to help with decision making of any sort, in part because the current models are still not actually up to the task. They're very impressive, but they're still somewhat detached from reality, and they're not something you'd want to use for decision making of any great importance.

What I think will surely happen across academia is that rather than large language models being used to help with decision making, we’ll have them take over most of the dreary tasks we actually spend a majority of our time doing.

There’s a lot of administration that’s done as part of your job as a university professor. And it feels kind of silly that you're spending your time doing this given the amount of time you spent getting educated, right? Despite your expertise in your given area, you end up spending roughly 40% of your time on administrative tasks that you could have handled straight out of high school. These are the sorts of tasks that we will probably hand over to AI models.

How quickly do you think the university sector will adapt to take advantage of AI models/LLMs for the sake of productivity?

Well, with respect to the risk-averse nature of university culture, I'm not entirely sure how it'll play out. There are lots of things that universities could do to make everything a lot more efficient and productive that they choose not to do.

For example, take the way that most universities handle money. It's horribly inefficient. With modern technology, we could have solved that long ago, but we haven’t – because the greatest fear of many university administrators isn’t inefficiency, but rather that at some point someone might come and accuse them of having misspent university funds in some way.

So instead we've got a million different controls and paper trails to make sure that we can justify various expenditures, regardless of the loss of productivity that this induces.

I certainly would say that the culture of universities is such that I wouldn't anticipate them rapidly adopting this stuff, and I could see many schools implementing overly restrictive policies that might hold back the adoption of AI for administrative tasks.

Granted, those schools that do take a risk and embrace this kind of tech could stand to benefit massively.

What are some other major ways you see AI shaking things up in academia?

I'm not an expert in pedagogy by any means, but I have always found that when it comes to teaching, the process of evaluating students is both the worst part of the job, and, surprisingly, the component that most interferes with learning.

Many students end up working towards exams and assignments rather than trying to engage with the material. Now, a really good teacher figures out a way to design assignments and exams such that the students must engage with the material. That's something I never felt I was good at, but other people can figure out how to do that, and do it well. The presence of these various AI models, however, will greatly disrupt that process.

The challenge for universities is that we're educational institutions, but we're also more than that. We are, and this is where the evaluation aspect comes in, the arbiters of the skill level of would-be practitioners. That's why we give grades: so that downstream employers or scholarship committees have some signal to hook onto.

In the presence of these various AI technologies, that role becomes much more difficult to defend than it was previously. So at some point, there might have to be a shift in the way we imagine our role. And that might mean accepting that, at the end of the day, we can provide the resources to help train students and teach them things, but we might not be able to give the clearest signals on how well a given person truly understands the material.

As the role of the university changes and the job market changes, what kind of skills do you think that universities could be teaching students now to prepare for a post-AI world?

That's a good question. I think probably the best thing to do would be to get over the natural fear that many have of these technologies and instead fully embrace them.

My colleague Konrad Körding at the University of Pennsylvania was calling for (and I think it's not a crazy idea) actually giving free AI assistants to all students. So schools would facilitate the use of AI, equipping students with an AI assistant that they can work with and learn with. It could help to function as a personal tutor and potentially even help them to craft their assignments and stuff like that. Of course there’s a host of data privacy concerns that come with that, but that’s a whole other discussion.

As far as using AI to parse through student data and identify strategic opportunities, where do you see that going?

It certainly can and should be possible to use machine learning models to help inform decisions based on student data. The question is to what extent universities are willing to actually open up that data to machine learning researchers so it can be analyzed properly.

Even today, 3rd-party firms are the foremost purveyors of practical data application – the most useful data-driven strategies are rarely crafted in-house.

If universities can work through student data anonymity issues with reputable research firms, I think you’ll see some major opportunities there down the road. Eventually, I could see it becoming the competitive norm – it’s just a matter of how willing universities are to embrace opportunity.
