
The Return of the Viva

Two problems, and only one is Artificial

Much is being written about the impact of AI on education. There is (rightfully) widespread concern about whether students are going to learn both knowledge and skills essential to surviving in the modern world. Critical thinking, logical reasoning, emotional intelligence, persuasive writing, creativity and numerous other skills are being eroded by AI while teachers scramble to find new ways of educating the next generation.

And then there’s assessment…

Amid a global mental health crisis, children aged 3 to 18 are now subjected to more assessment than ever before. In the UK, the average interval between ‘consequential’ assessments (tests, marked homework, tracked grades, etc.) has fallen from 8 weeks to less than 4 weeks over the last 20 years.

I say ‘consequential’ because, to an adult, a failed piece of homework and a note home to your parents isn’t a big deal. But to a child, it often feels like a heavy-handed reaction to a single dropped grade, and it can have a horrible impact on mental health. This starts in primary school (around age 8) and continues up to age 18, and into young adulthood for university students. Yikes.

If you’re over 38, you won’t have grown up in this system. So believe me when I say: it’s not what you experienced. You have no idea what your education and assessment policies are doing to your kids. You want to blame social media for the mental health crisis, and in part you’re right, but you should also blame the pressure you’re putting on your kids. How can kids learn to deal with failure when there is no space to fail? How can they be expected not to become stressed and broken by a system that is constantly monitoring, looking for any sign of missed attainment? The consequences of failure are too great at every stage, and the frequency of looking for failure is too high.

So is it any wonder kids have adopted AI? No. It’s a valid solution to the challenges presented by current assessment strategies. Adults are hoovering up all the big-tech bullshit about the power of LLMs, so it’s no surprise kids are too. And frankly, with good reason. If it’s okay for a company to write all its marketing copy using AI, why shouldn’t a child write their geography essay using it?

We shouldn’t really be worrying about AI. It’s not actually the problem, in my opinion. I’ve been espousing to friends and family for years that current approaches to assessment fail to measure actual ability. In my view, AI has just shone a very bright light on the flaws in an already broken system. It’s good because now we get a chance to fix the system, and perhaps kill two birds with one stone.

The Death of the Viva

Back in 2018, I was a teaching assistant at the University of Bristol (UK) on the 1st-year Introduction to Computer Architecture course. I also taught on the MSc Computer Science Conversion course, as well as delivering outreach workshops in local primary and secondary schools. On top of that, I delivered various A-level classes to fill gaps in teachers’ knowledge, along with teacher training workshops, and I mentored 3 university students one-to-one during their studies. I’ve also been exposed to other parts of the education system, through my dad’s 20+ years at my former primary school, and in visits to my old secondary school. Throughout, I have received consistently positive reviews of my teaching, reflected in the excellent outcomes achieved by students I have taught.

The Computer Architecture courses at Bristol were unusual among the Computer Science courses, not just because they covered the design of actual hardware, but also because of how they were assessed. Aside from coursework and exams, the final grade included a viva assessment. Towards the end of the course, students were examined in a viva: groups of 3 or 4 students interviewed by one (sometimes two) academics.

Unfortunately, with the growing number of students (undergraduate CS: 140 in 2014; 240 in 2020; 280 in 2025), the university started putting pressure on staff to stop doing viva assessments. Vivas take considerable staff time and a lot of coordination, and some students didn’t like having to answer questions face-to-face. Actually, I remember one guy in my year who had a screaming match in the main lobby with non-academic staff about having been asked to leave his dorm for the first time in 6 weeks in order to attend an assessment. Anyway, I digress…

As a result of this pressure, and some staff changes, the UG course ultimately moved away from viva assessments. Whether they’ve since returned to them, I don’t know.

The Disadvantages of the Viva

There are a few disadvantages of viva assessment. From a student’s point of view, a viva may feel higher-pressure than a traditional exam, and far higher-pressure than coursework. Getting flustered in a viva can derail the entire assessment if the person asking the questions isn’t well trained.

That brings us to the next issue: training. There are many fantastic lecturers, professors and teachers in the world. But not all teachers are brilliant with people. Some are downright awful, because we select for academic ability, not emotional intelligence. As a result, without dedicated training, putting students in a room and interviewing them can be risky, in a way that coursework and exams are much less so.

Vivas are also at greater risk of inconsistency and subjective bias. If a student is friendly with a lecturer, they’re much more likely to be relaxed when answering the lecturer’s questions. Worse, the lecturer might stray towards asking questions they know will interest the student, and which the student may therefore find easier to answer. These are some of the least malign ways that vivas can go wrong. There are also issues with grading the quality of answers when the entire assessment is verbal and may follow a flexible path.

Lastly, of course, there’s potentially a language barrier. Despite most UK universities requiring a particular standard of English, fake certificates held by students who can’t actually speak English (let alone discuss a highly technical subject like Computer Science) are still a widespread problem. Coursework offers an opportunity to use translation tools, whereas arranging a live translator for a viva is far more difficult (and expensive).

The Advantages of the Viva

I would like to suggest to you that, in the age of AI, the advantages of viva assessment outweigh the disadvantages. I think vivas also resolve several key challenges facing our methods of assessment, challenges which AI has exacerbated.

First, but perhaps not the most obvious, is that viva assessment is much closer to what students will encounter in the working world. With a few exceptions, we rely heavily on verbal assessment to progress in our careers. This starts with the much-dreaded job interview, and is followed by meetings and presentations. In some paths, you may also end up pitching for investment, or giving sales pitches to potential customers. For all of these scenarios, a viva is both a much better assessment of future performance, and a much better preparation, than any written exam or coursework.

A viva offers a unique format that gives an assessor the opportunity to directly observe a student’s thought processes. I would argue this is far more important than the student getting to the right answer or knowing particular facts. We can look up facts, and with the right method, achieve correct answers. The key thing is the method. The thoughts. Critical thinking, logical reasoning, emotional intelligence, persuasive arguments, creativity – these aren’t outcomes, they’re processes. A viva is the only form of assessment that really examines the process as much as the final result.

Now, while a viva has disadvantages, in many cases these can also be turned into advantages. If a student is struggling, staff can spend time with that student and get to know them. When it comes time for the viva, that friendly rapport turns a scary examination into a much lower-pressure discussion.

The flexibility to explore a student’s strengths also allows students to engage with the parts of a course which most excite and interest them, reducing the laboriousness of learning material which they will likely never think about again after the exam. Flexibility is a strength, not a weakness of the viva. Of course, careful planning is needed by the assessor – questions must be prepared in advance to fairly cover all possible directions of travel of the viva and thus maximise consistency between independent assessments.

For a student, a viva can also be less stressful afterwards. Feedback is immediate - it’s a built-in part of human communication - rather than arriving weeks or months after handing in an exam paper.

In less formal assessment, vivas also give teachers an opportunity to understand what a student is struggling with, and to explore other presentations of the material or other methods of learning a skill which may help the student. This is much harder to achieve through homework or in-class tests. How often have parents found themselves telling a teacher their child needs a different way of learning a subject? Why do we rely on this highly inefficient, indirect and often unhelpfully biased feedback loop?

And finally, there’s the AI element. A viva gives the student the opportunity to use AI (e.g. using their phone) or even internet searches, while the assessor can observe how they’re using that information. If the student recites the output of ChatGPT, the assessor can ask them questions to make them think for themselves. The assessor has the opportunity to explore the student’s process of answering the question, which is the most important thing.

The Return of the Viva

In most educational institutions, it’s impossible to run vivas for all students every 4 weeks. Even every 12 weeks (once per term) would likely be too much, given we typically have 1 teacher to every 30 or 40 students. There are two arguments I could make here:

  1. This is a great reason to increase staff-to-student ratios. Between 1 to 10 and 1 to 15 would seem to be much better in schools; and 1 to 5 in universities (for regular tutorials) and up to 1 to 70 for lectures (depending on the course).
  2. This is a great reason to reduce the frequency of assessment. One viva at the end of the year per subject (at least for ages 16+) is enough.

I also wonder whether a Total Cost of Assessment analysis would reveal that, all-in, traditional coursework and exams cost just as much to devise, produce, execute and grade, as organising an equivalent number of 1-hour 4-person group vivas.
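To make that Total Cost of Assessment idea concrete, here’s a back-of-envelope sketch in Python. Every number in it (session length, prep time, setup cost, marking time per script) is a hypothetical assumption for illustration, not real data; the point is only that the comparison is worth actually doing:

```python
# Back-of-envelope Total Cost of Assessment comparison.
# All figures below are hypothetical assumptions, not real data.

COHORT = 240  # students (the 2020 undergraduate intake mentioned earlier)

# Vivas: 4 students per 1-hour session, plus assumed prep and write-up time.
viva_sessions = COHORT / 4            # 60 sessions
viva_hours = viva_sessions * 1.0      # contact time
viva_hours += viva_sessions * 0.5     # assumed 30 min prep/notes per session

# Exam: assumed setup cost plus per-script marking time.
exam_setup_hours = 40.0               # assumed: writing, checking, moderating the paper
exam_hours = exam_setup_hours + COHORT * 0.5  # assumed 30 min marking per script

print(f"Viva total: {viva_hours:.0f} staff-hours")  # Viva total: 90 staff-hours
print(f"Exam total: {exam_hours:.0f} staff-hours")  # Exam total: 160 staff-hours
```

Under these made-up assumptions the vivas actually come out cheaper; a real analysis with an institution’s own numbers might well land differently, but that’s exactly the calculation worth running before dismissing vivas as too expensive.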

We still have challenges to address around neurodiversity, fairness/consistency, language, disability, etc. - challenges which already plague exams and coursework.

One thing I will say though: for most people, if being assessed through conversation stresses them out, they need this kind of assessment more than ever. Talking to people, face-to-face, is fundamentally human. Viva assessment, from an early age, can help refocus an entire generation onto human connection. The irony that I’m saying this via a written blog post, not a video, podcast or conference talk, is not lost on me.

Reduced assessment frequency, greater attention to individuals, less pressure, more flexibility, direct relevance to future career milestones and activities, no problems with using AI assistance, and a focus on human communication. What’s not to love?

I hope the viva will return as a primary form of assessment across all ages and stages of education. We will only become better for having tried the human-centred approach.