We're Not Using AI to Its Fullest Human Potential


By Eric and Wendy Schmidt

This article originally appeared on time.com on November 1, 2022.


We should be living in a golden age of science.


For centuries, the scientific method was defined by two pillars—theory and experiment. Now we live in the age of Artificial Intelligence, which adds a vital third. According to leading scientific bodies, discoveries of the past decade, such as the detection of the Higgs boson, the observation of gravitational waves, and the discovery of new drugs like halicin, which can kill strains of bacteria resistant to all known antibiotics, “would have been impossible” without advanced computation.


But despite these advances, scientific innovation today is too often defined by finding new use cases for existing technologies or refining previous advances, rather than by creating entirely new fields of discovery.


In daily life, Artificial Intelligence is ubiquitous in our homes, from Alexa buying our groceries with a simple command, to Netflix anticipating what will entertain us through algorithmic ingenuity. But we need a lot more of it in our laboratories—moving science forward for public benefit, and helping us to solve the hardest problems of our time, from climate change and poverty to healthcare and sustainable energy.


This can only happen by accelerating the next global scientific revolution—by supporting the broad and deep incorporation of AI techniques into scientific and engineering research. While AI innovation has been substantial, its adoption into scientific and engineering research has not been ubiquitous, fast, or interdisciplinary.


Why, despite remarkable advances in AI, is it not yet consistently helping us make the kind of breakthroughs that expand the frontiers of our knowledge and accelerate the process of scientific discovery?


There are two main reasons. First, while plenty of money is already pouring into AI projects at universities, these funds tend to be allocated to particular disciplines, such as AI for computer science, rather than to work that builds bridges between the natural sciences, computer science, and engineering.


At this moment, the use of AI tools in the scientific and engineering research ecosystem is still in the early adopter stage, rather than being a default part of researchers’ toolkits. We can’t expect scientists to embrace the capacities of AI without appropriate training. A researcher hoping to use AI will need to acquire not only a deep understanding of a particular problem—such as antibiotic resistance—but also the knowledge of which data, and what representation of that data, will be useful for training an AI model to solve it.


Second, the incentives simply don’t exist for young scientists to attempt truly bold research. Much postdoctoral funding is tied to specific research grants and expected results within disciplinary boundaries, so postdoctoral fellows rarely have the freedom to take risks with new techniques.


So what can be done to change the status quo? We believe three things should govern any meaningful response: training for AI in science, equitable access to AI tools, and the responsible, ethical application of AI.


First, we need rigorous and interdisciplinary training for young scientists using AI. AI’s failures can largely be attributed to unrealistic expectations about AI tools, errors in their use, and the poor quality of data used in their development. Scientists across disciplines, from all backgrounds, will need AI fluency to prevent such missteps.


Postdoctoral research is a particularly opportune moment in a scientist’s career to receive this training. This may sound counterintuitive, as conventional academic pressures dictate the swift publishing of papers after a Ph.D. degree is earned, before moving on to the next job. But this is actually the perfect time to broaden research horizons instead of falling into the orthodoxy of hyperspecialization. Instead of being rushed to prove themselves quickly, postdocs should be given the time and the support to try something new.


Second, we have to ensure equitable access to AI tools. According to a recent National Artificial Intelligence Research Resource report, equitable participation in cutting-edge AI research is limited by gaps in access to the necessary data and computational power. Leaving out scientists from historically underrepresented and underserved backgrounds “limits the breadth of ideas incorporated into AI innovations and contributes to biases and other systemic inequalities.”


We have an opportunity to anticipate and eliminate biases instead of deepening and entrenching them. It is our hope that through our philanthropic efforts, by expanding access to AI tools for a generation of postdoctoral candidates around the world over the next several years, we can lay the groundwork for equitable AI.


Third, the responsible application of AI should enhance human intelligence, not replace it or repeat its mistakes. The power of AI in science is just beginning to be unlocked, but we should remember that breakthroughs like the discovery of halicin could not have been achieved by humans or AI alone. There is clear evidence that AI can augment the analytical capacity of humans and run complex experiments beyond the reach of traditional approaches. For example, research published in Nature showed how an AI algorithm can help advance a promising path to sustainable energy by containing and controlling the highly energetic plasma needed for fusion energy research. AI can also help discover theorems at the forefront of mathematical research.


But the real excitement in applying AI to science lies in the new domains of inquiry that we cannot yet perceive, domains that will bring new dimensions to the history of the scientific method. The microscope allowed scrutiny of a whole new world of microorganisms that early biologists did not even consider. The telescope showed early astronomers how expansive the universe is beyond our own solar system. AI could help us discover new phenomena that human scientists would never have considered.