The demand for ethical artificial intelligence (AI) services (often labelled “explainable AI” or “responsible AI”) has skyrocketed, in part due to some of the troubling practices employed by large technology companies.

The media is full of news of privacy breaches, algorithmic biases and AI oversights. In the past decade, public perceptions have shifted from a state of general obliviousness to a growing recognition that AI technologies, and the massive amounts of data that power them, pose very real threats to privacy, accountability, transparency and an equitable society. The Ethical AI Database project (EAIDB) seeks to generate another fundamental shift – from awareness of the challenges to education about potential solutions – by spotlighting a nascent and otherwise opaque ecosystem of startups geared towards bending the arc of AI innovation towards ethical best practices, transparency and accountability.

EAIDB, developed in partnership with the Ethical AI Governance Group, presents an in-depth market study of a burgeoning ethical AI startup landscape geared towards the adoption of responsible AI development, deployment and governance. We identify five key subcategories, then discuss key trends and dynamics.

Motivation

The concept of ethical artificial intelligence (AI) is quickly building momentum: startup executives are developing AI-first solutions, enterprise customers are deploying them, VC and CVC investors are financing them, end users are consuming them, academics are researching them and policymakers are seeking to regulate them. The sheer volume of companies identified as “ethical AI companies” in this study is testament to this emerging reality. While ESG has traditionally focused on “do no harm” versus “do good,” ethical AI businesses, for the purposes of this research, include solutions that either remediate the pitfalls of existing AI systems (“do no harm”) or leverage AI to address a broader societal good (“do good”). The space has seen significant growth in the past five years.
The motivation behind this market research is multidimensional:

- Investors are increasingly seeking to assess AI risk as part of their comprehensive profiling of AI companies. EAIDB provides transparency on the players working to make AI safer and more responsible.
- Internal risk and compliance teams need to operationalise, quantify and manage AI risk. Identifying a toolset to do so is critical. There is also an increasing demand for ethical AI practices, as identified in the IBM Institute for Business Value’s report.
- As regulators cement policy around ethical AI practices, the companies on this list will only grow in salience. They provide real solutions to the problems AI has created.
- AI should work for everyone, not just one portion of the population. Enforcing fairness and transparency in black-box algorithms…
