This special quarterly report for both the Global Healthcare Council and the Global AI Council examines how developments in one field are increasingly relevant to the other.
The ubiquitous sensors in phones and wearables most obviously bring potential benefits for healthcare – helping to keep people healthy and treat the sick. As the Global AI Council of CVCs noted at its end-March meeting in California, the increased amounts of data from sensors, allied to artificial intelligence and machine learning, could deliver great gains once they arrive.
“The potential for AI to enable extremely innovative digital health solutions is massive. But in reality, we have yet to really see the dramatic impact of AI on healthcare that many of us have been expecting. What/where are the areas that AI can impact the most (and most quickly), what roadblocks, if any, are holding back progress, when can we expect to see a true ‘digital healthcare’ industry arrive and who will emerge as the biggest winners?”
Sensors also offer intriguing opportunities for countries to scale up the capabilities of their populations through neuroplasticity and to improve the speed of decision-making.
As Steve Blank, adjunct professor at Stanford University, said: “For the first time, our national security is inexorably intertwined with commercial technology (drones, AI, machine learning, autonomy, biotech, cyber, semiconductors, quantum, high-performance computing, commercial access to space, et al).”
It was fascinating to attend the Global Cyber Innovation Summit in Baltimore in April, organised by Bob Ackerman of venture capital firm AllegisCyber. Being able to peer over the horizon at the vectors of attack and defence strategies was valuable, as much for the implications it holds for other strategic investors backing startups.
One of the first challenges will come from deciding what is done with the data from sensors.
The ethics of using the data collected by sensors is a pressing issue. Lessons from the development of artificial intelligence (AI) offer glimpses of what is required.
Anik Bose, author of the guest comment in this issue, which was developed out of work by the Global AI Council, said demand for ethical AI innovation (including terms such as “explainable AI” and “responsible AI”) has skyrocketed.
“In the past decade or so, public perceptions have shifted from a state of general obliviousness to a growing recognition that AI technologies, and the massive amounts of data that power them, pose very real threats to privacy, accountability and transparency, and to an equitable society.
“The Ethical AI Database project (EAIDB) seeks to generate another fundamental shift – from awareness of the challenges to education of the potential solutions – by spotlighting a nascent and otherwise opaque ecosystem of startups that are geared towards shifting the arc of AI innovation towards ethical best practices, transparency and accountability. EAIDB, developed in partnership with the Ethical AI Governance Group, presents an in-depth market study of a burgeoning ethical AI startup landscape geared towards the adoption of responsible AI development, deployment and governance.”
More broadly, we need to review the threats, as well as the opportunities.