Prof Kanshu presenting on AI at the 2026 SU International Staff Training Week
Media release

Never put AI on autopilot: demystifying data and intelligence in internationalisation

Petro Mostert
24 April 2026
  • Prof Kanshukan Rajaratnam defined AI at the SU International Staff Training Week as the replication of human thinking through computational systems, emphasising that while these tools are efficient at predicting data, they lack human logic and situational context.
  • PASS staff engaged in hands-on exercises, such as generating professional biographies, which revealed that while AI can boost efficiency and simplify complex information, it often produces generic or inaccurate content that requires human "piloting" and responsibility.
  • The sessions highlighted that AI struggles with cultural nuances and carries inherent data biases, prompting a call for internationalisation practitioners to use the tools critically to ensure they don't replace human judgement or compromise data privacy.

There was a moment, midway through the session, when the room burst into quiet laughter.

A simple question had been posed to a range of AI tools: Should I walk or drive to the mechanic if my car’s headlight is broken? It was a sunny day, so driving was technically fine. A doctor had advised more walking. The goal, of course, was to get the car fixed.

Every AI model gave the same answer: walk.

“And that,” said Prof Kanshukan “Kanshu” Rajaratnam, smiling, “is how you know these tools don’t think as humans do.”

It was one of many moments during the AI-focused day of Stellenbosch University’s (SU) 2026 International Staff Training Week when complexity gave way to clarity, and when artificial intelligence became a little less mysterious and daunting, and a lot more human and manageable.

A week of global exchange — with AI in focus

The AI sessions formed part of a broader programme hosted by Stellenbosch University International (SUI) from 13–17 April 2026, bringing together 22 professional, administrative and support staff from across South Africa, Europe and beyond. The week explored themes such as intercultural competence, professional identity, and the evolving role of data and technology in higher education, alongside activities ranging from UNESCO Story Circles to job shadowing across SU’s campus.

Within this context, Kanshu, Director of SU’s School for Data Science and Computational Thinking, stepped into a room of internationalisation practitioners, many of whom do not work in AI or data science, but who increasingly encounter its influence in their daily work.

His task was not to turn them into data scientists, but to help them understand what AI is, what it is not, and how to use it: responsibly, critically, and confidently.

What AI is — and what it is not

Kanshu began with a deceptively simple question: What does artificial intelligence mean to you?

The answers came quickly: efficiency, automation, tools that make life easier, the smart features in our phones.

“All true,” he acknowledged. “But those are descriptions of what AI does and not what it is.”

His working definition was straightforward: AI is the replication of aspects of human thinking using computational systems. But he emphasised that machines are not good at being human.

“Large language models don’t think; they predict the next most probable word,” he explained.

He distinguished between Artificial Narrow Intelligence (ANI) — systems that are highly effective at one specific task — and the still-distant idea of Artificial General Intelligence (AGI).

When AI gets it wrong

The “mechanic question” was not just a joke; it was a turning point in the session.

Any human in the room immediately recognised the flaw: if you walk, your car never gets fixed. The AI tools, however, defaulted to generic advice and missed the practical logic entirely.

“This is my way of showing you that these tools are not logical,” Kanshu said. “Any human in this room knows I need to drive to the mechanic if I want my car fixed. The AI doesn’t.”

Learning by doing

The session quickly became hands-on. Participants were asked to generate short professional biographies using AI tools. The results were both impressive and imperfect.

“It was actually very good, but also a bit too generic,” said Dr Sonja Buchberger from the University of Vienna. “It sounded like me, but not quite like my voice.”

Others noticed inaccuracies or overuse of buzzwords.
Kanshu shared his own experience of AI-generated content that confidently invented academic credentials he never earned. “Whatever AI writes, you still take responsibility for it,” he reminded participants. “You are still the pilot.”

Why this matters for internationalisation

For many in the room, the real question was not what AI is, but what it means for their work.

AI tools are already influencing how international offices operate: from analysing recruitment trends to managing large volumes of student enquiries. One example that resonated strongly was the use of AI to simplify complex visa regulations.

“It’s incredibly helpful,” said Tracy Poelzer from the University of Groningen. “You can take something very technical and turn it into something a student can actually understand, but you still need to check every detail.”

AI, in this sense, becomes a bridge, not a final authority.

The human layer: culture, context and care

If AI excels at processing information, it struggles with something far more subtle: context.

Participants noted that AI-generated text often sounded overly formal or culturally misaligned. “I often have to tell it to ‘boast less’,” one participant shared during the discussion. “Otherwise it doesn’t sound like me at all.”

Another reflection highlighted deeper concerns: “When I asked about a medical condition in general, it replied as if I had it,” said a participant. “That made me think about assumptions — and how easily the tone can go wrong.”

These insights pointed to a broader reality: AI is shaped by its training data, which often reflects dominant global perspectives. For internationalisation practitioners working across cultures, this requires constant awareness and adjustment.

Risks, ethics and responsibility

As the session deepened, so did the conversation around ethics. Kanshu urged participants to think carefully about what they share when using AI tools. “If you wouldn’t put it on a public billboard, please don’t put it into an AI prompt.”

Participants engaged actively with these concerns. “There should almost be a way to see how confident the system is,” one participant suggested, questioning why AI outputs do not include visible margins of error. Another raised questions about privacy settings and “temporary chat” modes, asking whether they truly protect user data.

The discussion made one thing clear: using AI responsibly requires understanding not just what it can do, but also how it works.

From data to decisions

The session also explored how AI can support data-driven decision-making in higher education.

Using examples from student mobility datasets, Kanshu demonstrated how AI can quickly identify patterns and generate insights.

“For people who are not data specialists, this is incredibly empowering,” said Babalwa Mtshawu from Stellenbosch University. “It allows you to engage with data in a way that was previously quite intimidating.”

But the emphasis remained on interpretation: AI can highlight patterns, but humans must decide what they mean.

Preparing for a changing world

As the session drew to a close, the conversation turned to the future.

AI will not replace internationalisation practitioners, but it will change their work. “The distinction between AI as a co-pilot and not an autopilot really stayed with me,” said Madita Elise Siddique from Free University Berlin. “It’s about using it to support your thinking, not replace it.”

For those willing to engage with it, AI offers a powerful opportunity. For those who ignore it, it presents a risk.

The takeaway: stay in the pilot’s seat

By the end of the day, AI no longer felt like an abstract, intimidating force, but like a tool one can actually master: something to use, not abuse.

It is a powerful tool — capable of saving time, enhancing communication, and unlocking new insights, but also one that demands care, curiosity, and critical thinking.

If there was one message from Kanshu that stayed with participants, it was this:

AI can help you fly. But you are still the pilot.
