The University of Oxford held its inaugural artificial intelligence-focused conference last month, bringing together investors and some of the world’s foremost researchers in the field.

The University of Oxford sees itself “as an artificial intelligence powerhouse,” vice-chancellor Louise Richardson said during her opening speech to the AI@Oxford conference, the first such event put on by the institution – though you would not have known it from the hundreds of people filling the large lecture theatre in the Saïd Business School and an overflow room next door. As speakers and delegates proved time and again over the course of the next two days, Richardson was, if anything, underselling her institution.

Half of the spinouts generated by the University of Oxford in its 800-year history have emerged in the past five years. These portfolio companies span the entire breadth of industries, but Richardson had no trouble identifying AI spinouts, of which she highlighted Oxbotica, an autonomous vehicle software developer, as a particular success story.

It was important to understand that “the Hollywood version of AI” – conscious machines and the singularity – made for great fiction but “is not where the action is,” Mike Wooldridge, professor of computer science, explained in his opening keynote. In fact, he added, the singularity would not become reality for a long while and was, in any case, extremely controversial in the research community.

“On the one hand that is disappointing,” Wooldridge admitted, “we won’t have robot butlers any time soon. On the other hand, it is comforting we won’t have terminators any time soon.”

The type of artificial intelligence technology where all the focus lay was technically called “narrow AI”, though researchers usually left out the “narrow” part, Wooldridge said. 

It was the type of technology seen in countless applications today, where algorithms handled specific tasks that would previously have required human input. The approach was closely linked to machine or deep learning, Wooldridge said, whereby the algorithm was told what it needed to learn but not how.

Wooldridge showed an example of an algorithm independently figuring out how to play Atari’s Breakout game – where a player must get rid of a layer of bricks by making a ball bounce against them. But while AI was good at figuring out how to complete a task, it was unable to explain the process, he cautioned.
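Reward-driven learning of this kind can be illustrated with a toy sketch – not the deep Q-network system that actually learned Breakout, but a much simpler epsilon-greedy bandit agent. The arm payoffs and parameters below are made up for illustration; the point is that the agent is told only to maximise its score, never which choice is best or how to play.

```python
import random

def train_bandit(arm_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent: it receives only a score signal, never
    instructions -- the 'told what to learn, but not how' idea."""
    rng = random.Random(seed)
    n = len(arm_means)
    estimates = [0.0] * n  # agent's learned value of each arm
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # occasionally explore a random arm
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit best so far
        # noisy reward around the arm's true mean (a stand-in for game score)
        reward = arm_means[arm] + rng.gauss(0, 0.1)
        counts[arm] += 1
        # incremental average keeps a running estimate of each arm's payoff
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = train_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda a: est[a])  # the agent settles on arm 1, the highest-paying one
```

After a few thousand trials the agent's estimates converge on the true payoffs and it reliably picks the best arm, despite never being told which one that was.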

The Breakout game was also picked up by Nick Bostrom, director of the Future of Humanity Institute, during his keynote at the end of the conference’s first day. Bostrom explained that the interesting thing was not that a machine could play the game – that had been possible many years ago – but that the algorithm figured out the game’s objective and how to maximise its score on its own.

The example might seem abstract – the game, though iconic, was released in 1976 – but Bostrom easily found consumer applications that the audience would recognise, such as Google Duplex, an AI-based service that handled restaurant reservations and other bookings on behalf of the user.

AI, despite its modern-day feel, had been around for decades, Bostrom explained, but had suffered setbacks in the late 1980s that eventually led to a period known as the AI winter when funding dried up. Bostrom was intimately familiar with the historic struggles of AI, having observed in an interview with CNN in 2006 that “a lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.”
He struck a more optimistic note at AI@Oxford, saying: “I don’t believe there will be another AI winter, because the technology is already massively useful.”

Another keynote speaker was Chris Bishop, director at software producer Microsoft’s R&D subsidiary, Microsoft Research, who took on the topic of hype versus reality. That reality, he said, was that the cloud was “enormous data centres that cannot be built fast enough to keep up with the computing power required”.

Bishop theorised that, much as Moore’s law described ever-faster hardware, we might be experiencing equivalent progress in AI software development, having moved beyond software that merely told a computer what to do to software that could learn. The majority of algorithms, Bishop explained, were open source – even if the implementation might be a trade secret – allowing anyone to play around with them. This was positive, he said, reassuring the audience that his corporation had already formulated principles around AI: fairness, accountability, transparency and ethics.

Also providing his insights through a keynote speech was Adrian Smith, director of the Alan Turing Institute, which Smith said wanted to innovate and help develop world-class research. The institute, launched in 2015 by the UK government, was advising both policy-makers and the general public, Smith explained, and aimed to tackle real-world problems.

Although keynotes took place throughout the conference, much of AI@Oxford occurred across three streams, focused on technology and applications of AI, AI in healthcare and AI in society. Highlights included Phil Howard, director of the Oxford Internet Institute, speaking about AI and trust in society as part of the latter stream, offering his insights into how artificial intelligence was shaping politics.

Howard largely focused on the public conversation around politics and fake news, pointing out that although the perception was that social media platforms Facebook and Twitter had to deal with the highest influx of misinformation spread by bots, such activity was in fact markedly higher on image sharing service Instagram.

Howard, who had analysed 40 countries over time, cautioned that AI systems would inevitably be used by politicians to shape public opinion and run campaigns. This was already happening, he said, in places such as China, where the government had ramped up activity to shape international opinion amid the Hong Kong protests.

He explained that society would need “values in code and diversity in data,” because systems trained on literature written exclusively by straight, white men would skew where resources were allocated. He concluded: “Politics is what happens when someone tries to represent your interests. Politics will become what happens when AI represents your interests.”

In the healthcare stream, a noteworthy session was the TED-style talks by representatives from three image diagnostics companies – Oxford Brain Diagnostics, Caristo Diagnostics and Perspectum Diagnostics.

Ian Hardingham, chief technology officer of Oxford Brain Diagnostics, was the first to provide an overview of his company, which has developed a cortical disarray management platform that focuses on a small part of MRI scan images, initially to diagnose Alzheimer’s disease. The company hopes to expand its approach to other neural conditions in future, but perhaps most intriguingly, Hardingham noted that the spinout at first did not rely on machine learning.

Cheerag Shirodaria, chief executive of Caristo, made no such caveat and sang the praises of self-learning algorithms. Shirodaria explained that the assumption had been, until very recently, that fat had an adverse effect on arteries, but that the fat in fact acted as a sensor: inflammation in the fat could indicate cardiovascular risk before there was any inflammation in the vessels themselves. More notably still, he explained that half of heart attacks occurred in patients with no or only minimal narrowing of the coronary arteries, despite the medical consensus on this topic to date. Caristo was now commercialising technology that analysed CT scans to predict cardiac failure, he said.

Finally, John McGonigle, senior data scientist at Perspectum, explained that using artificial intelligence, his spinout could non-invasively examine the health of a patient’s liver through quantitative MRI scans. The company, founded in 2012, operated both as a contract research organisation and a clinical service, he said, adding that “regulatory bodies have been very responsive to our advances in AI.”

Another session in the healthcare stream looked at interconnected health and featured Samuel Conway, chief executive of data visualisation platform Zegami, Michalis Papadakis, chief executive of stroke detection software developer Brainomix, and Nick De Pennington, chief executive of autonomous telemedicine provider Ufonia.

Although focused on different aspects, all three spinouts showed the potential of AI to alleviate stresses on the healthcare system to the benefit of patients, staff and providers.

Zegami, a big data visualisation platform, has been used for a wide range of applications – including the Oxford cluster map, more of which below – but Conway specifically touched on how his technology had been used to detect more lesions in oesophageal cancer. Up to 25% of these lesions, he explained, were currently missed by traditional tools. He concluded that “if we stick with the approaches we have applied so far, the results will stay the same”.

Brainomix is similarly using its technology to help identify patients who need treatment. The spinout’s automated process can detect strokes by using AI to interpret CT scans, speeding up a process where every minute counts, Papadakis said. The statistics for missed strokes were even more dire than for lesions, with up to 50% of patients going untreated because their hospital lacked the technology to detect the stroke.

Ufonia – named after the world’s first synthetic voice machine, the Euphonia, developed some 170 years ago – has meanwhile created a platform to take care of patients following cataract surgery. The telemedicine offering provided an AI-based voice assistant that handled follow-up phone calls, asking a set of five standardised questions to determine whether the surgery was successful, De Pennington said. Using AI rather than relying on manual labour meant patients could schedule calls at any time of day convenient for them. With more than 30 million cataract surgeries expected each year in the near future, using AI would also ensure all calls could be placed without delay.

Elsewhere, Nic Newman, visiting fellow at the Reuters Institute of Journalism, and David Tomchak, digital editor-in-chief of the Evening Standard, discussed the applications of AI in journalism. Newman noted that publishers were keen to use AI to deliver more relevant and engaging content, as well as to reduce costs through automation, and claimed that 72% of news publishers were now experimenting with AI.

He admitted that there “is still ambivalence about AI in the newsroom,” but said that AI tools could prove invaluable to journalists drowning in information without an easy way of surfacing high-quality, fact-checked content.

Tomchak admitted that “there is still a long way to go before AI can write full stories without human intervention,” but noted that the technology still provided countless opportunities, including commercial ones. Many media companies, he said, were creating personalised front pages based on reading history and popularity, both among the general audience and the user’s group of contacts. He cautioned against relying solely on popularity metrics, however, saying that they led to low-quality content.

With the inaugural AI@Oxford conference a sold-out success, it seems safe to say the event will stick around and, if there was any doubt left, be the place to be for anyone interested in AI. Gregg Bayes-Brown, marketing and communications manager at tech transfer office Oxford University Innovation, one of the masterminds behind AI@Oxford and a former editor of Global University Venturing, also unveiled the innovation map of Oxford ahead of the first night’s gala dinner (for more on this, check out our May 2019 magazine with a guest comment by Bayes-Brown). The local ecosystem is clearly going from strength to strength, and that is fantastic news, especially in the current socio-political climate.

Thierry Heles

Thierry Heles is the former editor-at-large of Global University Venturing and Global Corporate Venturing, and was the producer and host of the Beyond the Breakthrough podcast until December 2024.