From applications in clinical areas such as diagnostics and drug discovery, to the chemical, manufacturing, retail and automotive sectors, artificial intelligence (AI) technologies have disrupted many traditional industries, opening new frontiers and driving the development of ground-breaking solutions.

The global AI market is projected to grow at a compound annual growth rate (CAGR) of around 33%, expanding from $47bn in 2021 to $360bn in 2028, according to a recent report published by Fortune Business Insights.
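As a quick sanity check, the implied growth rate can be recomputed from the report's own endpoints (a sketch; the dollar figures are the report's, and the window is the seven years from 2021 to 2028):

```python
# Sanity check: CAGR implied by growth from $47bn (2021) to $360bn (2028).
start, end, years = 47.0, 360.0, 7
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 34%, in line with the cited 33%
```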

An increasing number of corporate venturing units have been targeting AI businesses across the globe. In 2019, 100 corporate-backed deals were inked in the sector, worth an aggregate value of around $4bn, a 33% increase in volume and a 100% rise in value on 2018, according to GCV Analytics.

With the spread of the pandemic, activity decreased in 2020 and the sector recorded 86 corporate-sponsored deals for a total value of around $2bn. However, 2021 has seen a strong rebound in activity, with a huge amount of capital poured into the AI segment by corporate investors. In the first ten months of the year, 83 corporate-financed deals were inked, with an aggregate value that exceeded pre-pandemic levels and reached more than $5bn.

Alphabet, SoftBank and Bloomberg have been the three top investors in AI deals in the past two years. Geographically, North America recorded the majority of deals in 2021, with 31 rounds, followed by Japan with 16 and China with 13, according to GCV Analytics.

Our data also shows that there has been a strong increase in later-stage rounds backed by corporates this year, with 22 series C rounds and beyond in the first ten months of 2021, compared with only eight in 2020 and six in 2019.

Among the largest investments was a $676m series D round for software developer SambaNova Systems, which is backed by several technology corporations including Samsung, Intel and Alphabet. The round was led by SoftBank’s Vision Fund 2 in April 2021 and valued the company at $5.1bn.

“SambaNova is developing a technology able to run advanced AI applications that is intended to be more powerful than existing central or graphics processing units,” said Patrick Bangert, vice-president of AI at Samsung SDS. “We believe in the great potential of the company’s technology, which can be deployed in a wide array of use cases, including digital health, by training systems based on high-resolution imagery and improving therapeutic and compound research.”

Biomedical potential

Another segment that has attracted hefty rounds of funding is AI as applied to drug development. Employing AI models enables more accurate detection of patterns and trends across vast biomedical databases, reducing failure rates and speeding up the discovery process.

One company working in this field is 1910 Genetics, which this year raised a $22m series A round featuring M12, the corporate venturing unit of Microsoft.

1910 Genetics has developed a technology that integrates AI, computation and biological automation to accelerate the design of small molecule and protein therapeutics. Suede, one of its platforms for small molecule drug discovery, is able to screen 14 billion molecules in less than six hours to identify promising hit compounds, while Bagel, another platform developed by the company, rapidly generates de novo lead molecules using a hit compound as a template.

“Combining lab automation and biology with the latest technologies in generative AI is the key to achieving a more efficient and successful drug discovery pipeline,” said Samir Kumar, managing director at Microsoft’s M12. “This is what 1910 Genetics does by developing AI-driven platforms that are able to reduce the timeline and cost of drug discovery, while improving the success rate of bringing effective medicines to patients in need.”

Ethics concerns

The development of AI technologies, and the significant impact they might have on our lives and societies in the near future, has sparked mounting concerns about the ethics of their implementation.

One of the main issues raised by technologies able to detect, capture and reveal information about our gender, age, ethnicity, location and preferences concerns the violation of our privacy, and how this abundant and detailed information is used by those who are able to collect it.

We have become familiar with the lack of transparency and ethics shown by many companies that have used advanced algorithms to harvest data without asking users for permission and without offering them the opportunity to have their information corrected or deleted.

“Data has become an extremely enticing source of power and profits for many corporations,” said Samsung SDS’ Bangert. “Developing AI-powered technologies that perform an undetected collection of huge amounts of data could be invaluable, despite the breach of privacy that this often entails. This is why it is essential to implement and enforce laws and regulations able to impose boundaries to the current, unregulated harvesting of data, thus protecting the privacy and rights of users.”

Furthermore, with advancements in AI-powered facial recognition, governments around the world that operate large camera infrastructures could use this technology at scale for political motives, racial profiling, and discrimination. This is what has already happened in China, where, according to several press reports, the authorities used advanced facial recognition technology last year to track and control the Uyghur minority.

“Despite their great potential, biometric facial recognition technologies carry significant privacy and ethical implications, especially when used by governments and law enforcement authorities for identification purposes and potentially for the profiling of minorities,” said Samsung SDS’ Bangert. “This is a field where regulatory decisions need to be moulded and shaped in a way that does not suffocate scientific research and allows further technological advancements while protecting minorities and safeguarding the rights of every individual.”

Moreover, the development of AI technologies has sparked rising concerns around algorithmic bias that derives from unrepresentative and incomplete training data or the reliance on flawed information, resulting in potential inequalities and unfair outcomes for minorities.

This issue was identified long ago and stems from the impossibility of fully controlling and eliminating human bias when coding and developing an AI system. Some of the more complex algorithms developed in recent years have helped identify and sometimes reduce the impact of human biases, impeding them from making their way into data selection and processing.

However, the problem is far from solved. Refined and complex algorithms may carry subtler biases that are difficult to identify and correct, making it almost impossible to spot fallacies and purge unfair and misleading outcomes from the system. The widespread adoption of machine learning models, and their deployment at scale in sensitive areas and across personal applications, could subsequently make this a huge problem.

“Algorithmic bias can be highly problematic in medical applications, where it is essential to avoid a biased pre-selection that can jeopardise the development of datasets statistically representative of the population at large,” said Samsung SDS’ Bangert. “However, the real issue that needs to be addressed from both an ethical perspective and an accuracy point of view does not pertain to AI itself, but is ingrained in the underlying data science. Developing clean, unbiased training datasets is the real key to avoid introducing biases into AI systems. It is the human-generated data science that needs constant and attentive vigilance.”

Recent advancements in AI have led to the development of technologies able to manipulate audio, video and images to create synthetic media or ‘deepfakes’, which are often distributed at scale through various social media platforms.

Although deepfakes could serve positive purposes, they have often been used to harm and intimidate individuals and to create false narratives that have eroded public trust in the media and in democratic institutions.

“The development of AI generative models able to create synthetic variants and representations of our reality carries the disturbing risk that we will not be able to distinguish what is real and true from what is not, with dramatic ethical implications that we are currently only partially able to foresee,” said M12’s Kumar. “It is important to work out how we can make sure this technology is used for benign, positive purposes, while minimising the risk of it being deployed with bad and nefarious motives.”

Another important ethical issue that has often been overlooked is related to the carbon footprint and sustainability of AI models, which are highly energy intensive and produce a tremendous amount of carbon emissions.

A study published by the University of Massachusetts Amherst and cited by Forbes estimated that training a single deep learning model can generate up to 284,615kg of CO2 emissions, which would be roughly equal to the total lifetime carbon footprint of five cars.
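The comparison can be unpacked with simple arithmetic (a sketch; the per-car figure below is merely what the study's comparison implies, not an independent estimate):

```python
# The cited figure: ~284,615 kg of CO2 to train one large deep learning model,
# described as roughly the total lifetime carbon footprint of five cars.
training_kg = 284_615
per_car_kg = training_kg / 5   # implied lifetime footprint per car
print(f"{per_car_kg:,.0f} kg CO2 per car")  # roughly 57 tonnes
```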

In a world threatened by climate change and where AI models are becoming increasingly more powerful, this issue represents a grievous cause for concern that needs to be addressed.

“The energy footprint of AI is a serious concern that we need to take into consideration when developing more powerful and advanced models, especially now that climate change and sustainability have become number one priorities for individuals and corporations,” said M12’s Kumar. “We have to rationalise and harmonise these two potentially competing interests: developing stronger and more advanced GPUs and reducing our carbon footprint. To do that, we need to develop new solutions to make AI more sustainable and energy efficient, from the hardware to the software and the algorithms, with the aim of generating green AI models that will allow the deployment at scale of AI-driven applications.”

Quantum computing potential

Drawing on quantum mechanics, quantum computing promises to exponentially increase a computer’s processing power and to solve problems that would take conventional computers years to complete. Its applications could revolutionise fields including drug discovery, materials science, chemistry, cybersecurity, energy management, financial analysis and autonomous vehicles.

The global quantum computing market size is expected to reach $487m in 2021 and grow at a CAGR of 25% over the next decade, to around $3.7bn in 2030, according to a report published by Quince Market Insights.

M12’s Kumar said: “The biggest accomplishment in quantum computing and the milestone towards which this entire field is moving will be reaching fault-tolerant universal computation with logical qubits, which will ensure that errors do not multiply and will enable running deeper sets of quantum gate operations than the current small-scale noisy quantum devices allow.”

Among the companies operating in this field that have recently attracted corporate backing is PsiQuantum, a developer of photonic quantum computing technology based on research at the University of Bristol and Imperial College London. The company recently completed a $450m series D round led by BlackRock and backed by Microsoft’s M12, reaching a valuation of more than $3bn.

“PsiQuantum is developing what it expects to be the first commercially viable quantum computer, by utilising photonic integrated circuits built into silicon chips and used to manipulate photons,” said M12’s Kumar. “The photonic approach is currently the most promising way to achieve the development of a large-scale quantum computer and has strong technical advantages compared with other models.”

Quantum backing

Other companies operating in the field include IonQ, which was backed by Samsung’s Catalyst Fund in 2019 and in March this year became the first quantum computing company to list on the New York Stock Exchange. IonQ floated through a reverse merger with a special purpose acquisition company and reached a market capitalisation of around $2bn.

A couple of months later, Rigetti, which develops multi-chip quantum processors for quantum computing systems and counts media group Bloomberg as an investor, also listed through a reverse takeover with a special purpose acquisition company, at a valuation of around $1.5bn.

More recently, Riverlane, a UK-based startup that received funding in 2021, launched HAL, a new open-source hardware abstraction layer that allows high-level quantum computer users, such as application developers, system software engineers and cross-platform software architects, to write programs that are interoperable with multiple types of quantum hardware.

Furthermore, after claiming so-called quantum supremacy in 2019 with its Sycamore quantum processor (which took 200 seconds to solve a problem that would have taken the world’s fastest supercomputers 10,000 years to complete), Google has recently announced plans to build a commercial quantum computer that would perform large-scale calculations without errors before the end of the decade. In the meantime, IBM has made available in Europe its IBM Quantum System One, which will be operated by the Fraunhofer Society in Germany and will help companies develop and test applied quantum algorithms.

In addition to the excitement over the great potential harboured by this field, there are also concerns that once quantum computers can factor the products of large prime numbers, they will be capable of breaking RSA (the Rivest–Shamir–Adleman cryptosystem), which is widely used today for secure data transmission.
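To see why factoring breaks RSA, consider a toy version with deliberately tiny primes (illustrative only; real RSA uses primes hundreds of digits long, and the numbers here could be broken by hand):

```python
# Toy illustration (not real RSA): with small numbers, factoring the public
# modulus n immediately reveals the private key. RSA's security rests on the
# assumption that factoring the product of two large primes is infeasible for
# classical computers, which is exactly what a large fault-tolerant quantum
# computer running Shor's algorithm would undermine.
p, q = 61, 53                # secret primes (tiny, for illustration only)
n = p * q                    # public modulus: 3233
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # computable by anyone who factors n
d = pow(e, -1, phi)          # private exponent recovered from the factors: 2753

msg = 42
cipher = pow(msg, e, n)      # encrypt with the public key
plain = pow(cipher, d, n)    # decrypt with the recovered private key
print(plain)                 # 42: factoring n broke the cryptosystem
```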

“Despite concerns that quantum computing could break RSA encryption, there are resistant algorithms already in existence that would be able to defend our systems against that,” said Samsung SDS’ Bangert. “It might take time and effort to implement those strategies, but quantum resistant crypto and quantum key distribution will allow us to reach full protection from those risks, if and when quantum computing becomes a truly applicable and feasible technology.”