Stephanie Verkoeyen is Special Advisor, Generative AI at McMaster University.
Generative AI (GenAI) refers to artificial intelligence models that can create new content – like text, images, audio, or code – based on patterns learned from existing data. Unlike traditional AI, which classifies or predicts, GenAI generates original outputs.
Your title is Special Advisor, Generative AI. How many institutions have created these kinds of roles so far?
Great question. We're actually starting a research project now to look into that exact issue. In Canada, not many. Sometimes it's attached to a Vice Provost position, where it's part of their portfolio. Western has a Chief AI Officer, but as far as I know, it's still pretty rare. Not many institutions have formalized this yet, but I think that'll change soon.
Makes sense. And your role is funded just for the next year?
That's right. So part of what I’m doing is figuring out the future – whether the Special Advisor position should continue or evolve. Things may look very different a year from now.
“We often fall back on how we were taught, instead of questioning what really works.”
Many institutions are still figuring out their approach to generative AI. What realistic steps have you seen McMaster or others take to integrate genAI into their systems without causing major disruption?
I think the caveat at the end really shapes the response. In some ways, this technology is majorly disruptive – and maybe it should disrupt existing systems. But it's hard to bring people along with that level of change. At its core, it’s about change management and human behavior.
A lot of people just want guidance – what they can and can't do. So, many institutions have started with guidelines that provide broad boundaries for experimentation. But now we’re seeing people push against those boundaries, wanting more specific guidance for their context and use cases.
There are a few ways to support that. A lower-effort, more flexible option is offering online modules or prompt libraries. But one challenge is that people need to carve out time to actually engage with those.
What’s been more successful is running workshops or presentations with demonstrations. Hands-on activities make a big difference; people learn more when they see it in action.
But what I’ve found most impactful long-term is building structure and community around the learning process. At my home institution, before I moved into this role, I piloted a ChatGPT Teams initiative last April. Each month we’d experiment with a different task – like using genAI to create a presentation or meeting notes – and then come together to discuss what worked, what didn’t, and what context helped or hindered its effectiveness.
That kind of shared reflection really helped shape conversations and decisions. Rather than everyone learning in isolation, we got to see patterns, differences in context or experience, and build from that.
“There’s a lot of potential to design activities that build both critical thinking and AI literacy.”
What about faculty who are apprehensive about AI tools in the classroom? What are they saying, and how do you support them?
This was a big part of my previous role as an educational developer. The most common concerns were around academic misconduct and students cognitively offloading rather than doing the hard work of learning.
It’s important to validate those concerns – they’re legitimate, and yes, there’s a real risk of misuse. But I also think faculty can’t expect to keep doing things the way they always have. The good news is, we can make changes that lead to better learning outcomes.
In practical terms, I encourage faculty to start with just one change – one activity or assignment – and reflect on what they’re asking students to do and why. Because many instructors don’t have formal training, we often fall back on teaching the way we were taught, without asking if it’s really effective.
AI gives us a chance to rethink that. For instance, a lot of instructors use reading responses to make sure students do the reading and are prepared for discussion. But genAI can easily generate summaries, highlight key points, or draft questions.
So what can you do instead? You might ask students to annotate the reading – individually or collaboratively. I did this in a course I taught and it mimicked the close reading process of highlighting and note-taking. It was more engaging for students and gave me better visibility into their thinking.
Another option is to use genAI in assignments to highlight its limitations. For example, students could compare a genAI-generated summary to the original text, or try applying knowledge using only that summary to see the risks firsthand.
There’s a lot of potential to design activities that build both critical thinking and AI literacy.
Many institutions are working on AI-related policies. What key considerations should be included to encourage innovation without compromising academic integrity?
Most institutions have guidelines, but it’s important to distinguish those from policies. Guidelines are flexible, quicker to develop, and don’t carry consequences if they’re not followed. Policies do carry consequences and take longer to change, but they’re now starting to be revised to include AI.
For instance, our academic integrity policy now includes generative AI. We’re also looking into revising our undergraduate course management policy to decide what information needs to be included in syllabi – for example, formalizing the requirement for AI usage statements.
Policies and guidelines help set boundaries. They need to be broad enough for flexibility, but specific enough to be useful.
Some key considerations include:
Privacy and data protection: Clarify what data should or shouldn’t be used. That includes educating people about what counts as personal information.
Accountability: Make it clear who is responsible in case of harm or malfunction.
Autonomous decision-making: There need to be boundaries on when and how AI can make decisions without human oversight.
“This isn’t just about tools—it’s about culture, change, and community.”
Have you seen any success stories from institutions implementing genAI within existing systems?
Definitely. The biggest one right now is chatbots that help with student inquiries. Our registrar’s office is piloting one. In a recent meeting between staff and students, the students actually said they expected services to be using AI to improve.
Some institutions build their own tools; others use platforms with genAI features built in. Our registrar uses Comm100, a live chat tool that blends custom answers with AI-generated responses. It's low risk; it only draws from publicly available website content.
Interestingly, this setup forces units to make sure their websites are accurate and up to date. Some units held off on using the AI feature because their web content wasn’t ready, and that would have affected the AI’s output. So it improves not just the chatbot, but the overall clarity of publicly available info.
We just got an update showing that AI responses were sometimes 200% more accurate than custom ones. And they expect it to keep improving as they refine training data and web scraping.
Wow, that’s a huge difference. Are institutions using genAI yet for strategy-setting, like budget or direction?
Funny you ask! I’m actually working on that. We don’t have a standalone genAI strategy, but I do see intersections with existing strategies.
So I’m using genAI to do a first-pass analysis of all our strategic plans – identifying where genAI could support or intersect with our goals. Then we can use that draft to spark more focused and nuanced conversations with leadership. It’s not about outsourcing strategy; it’s about using genAI to enrich the process.
That's great. Final question: how can institutions prepare students to work alongside AI without overhauling curricula or overwhelming already-stretched departments?
It’s a big challenge. But we can’t pretend AI doesn’t exist. If students can use genAI to get a B on an assignment, we need to show – not just tell – why they’re better off doing it themselves.
More broadly, we need to explain the why behind what we’re asking students to do. Oregon State has a great resource called Bloom’s Taxonomy Revisited, showing where human skills shine and how genAI can support learning at each level. That kind of nuance helps instructors show students the value of authentic engagement.
As for courses, I’d love to see a course focused on AI literacy where students actually create resources on AI for use in their disciplines. That kind of model could be more sustainable, especially if students contribute to keeping the content current.
And honestly, I think the future is co-learning. We don’t always have to be the experts. Let students explore and share. It takes pressure off instructors and makes space for shared ownership. Even trying this for just one class or assignment can be transformative. Ask students what they got out of using genAI, where it fell short, and how to redesign an activity to be more meaningful.