Working List of Hard Problems in AI

Compiled by James Manyika for the AI2050 Initiative

About the Working List of Hard Problems in AI

Drawing on previous work in AI and on numerous conversations with other experts, the initiative has developed an initial working list of hard problems for AI2050 to take on. The list is aimed at realizing the opportunities AI offers society and at addressing the risks and challenges that could result from it.

While we believe the items described in the working list are multidisciplinary, they are generally framed as hard scientific and technical problems and as societal challenges of different kinds, each representing both an opportunity and a challenge. The list aims at relatively distinct categories of challenges and opportunities to solve.

This working list makes no claim to being comprehensive, final, or fixed in time. We fully expect it to evolve as we learn more, as AI’s capabilities progress, and as our use of AI changes. We plan to update the list over time, revising current categories, adding subcategories, and potentially introducing new categories of hard problems to solve, guided by the motivating question.

The AI2050 hard-problems working list was compiled by drawing on research and other initiatives that Co-chairs Eric Schmidt and James Manyika have been involved in, as well as on numerous conversations with people at the forefront of researching and developing AI and with those studying its impacts on society.

Throughout this list, “Solve” should be taken to mean solve, make dramatic advances on, or make progress sufficient to stay significantly ahead of the challenges or emerging issues as AI itself continues to advance and as the ways society and its actors use or misuse it also evolve.

Working List last updated June 2023

Develop more capable and more general AI that is useful and safe, and that earns public trust

  • 1

    Solved the scientific and technological limitations and hard problems in current AI that are critical to enabling further breakthrough progress, leading to more powerful and useful AI capable of realizing the beneficial and exciting possibilities, including artificial general intelligence (AGI).

    Examples include generalizability, causal reasoning, higher/meta-level cognition, multi-agent systems, agent cognition, the ability to generate new knowledge, novel scientific conjectures/theories, novel beneficial capabilities, novel compute architectures, and breakthroughs in AI’s use of resources.

  • 2

    Solved AI’s continually evolving safety, security, robustness, performance, and output challenges, along with other shortcomings that may cause harm or erode public trust in AI systems, especially in safety-critical applications and uses where the societal stakes and potential for harm are high.

    Examples include bias and fairness, toxicity of outputs, factuality/accuracy, information hazards including misinformation, reliability, security, privacy and data integrity, misapplication, intelligibility and explainability, and social and psychological harms.

  • 3

    Solved the challenges of safety and control, human alignment, and compatibility with increasingly powerful and capable AI, and eventually AGI.

    Examples include risks associated with tool use and connections to physical systems, multi-agent systems, goal misspecification/drift/corruption, risks of self-improving/self-rewriting systems, gain-of-function and catastrophic risks, alignment, provably beneficial systems, human-machine cooperation, and challenges of normativity and plasticity.

Leverage AI to address humanity’s greatest challenges and deliver positive benefits for all

  • 4

    Made game-changing contributions by having AI address one or more of humanity’s greatest challenges and opportunities.

    Examples include the fields of health and life sciences, climate and sustainability, human well-being, the foundational sciences (including the social sciences) and mathematics, space exploration, scientific discovery, and pressing societal challenges (e.g., the Sustainable Development Goals).

  • 5

    Solved the economic challenges and opportunities resulting from AI and its related technologies.

    Examples include new modes of abundance, scarcity and resource use, economic inclusion, the future of work, IP and content creation, responsible business models, and network effects and competition, with a particular eye towards countries, organizations, communities, and people who are not leading the development or direct use of AI.

  • 6

    Solved for access, participation, and agency in the development of AI, the growth of its ecosystem, and its beneficial use for countries, companies, organizations, segments of society, and people, especially those not involved in the development of AI.

    Examples include access to research and resources for AI development, diversity of participation in the AI ecosystem, equitable access to capabilities and benefits, and disciplinary diversity in the development of AI.

Develop, deploy, use, and compete in AI responsibly

  • 7

    Solved the challenges and complexities of responsible research, deployment, and sociotechnical embedding of AI into different use domains and societal spaces, accounting for different cultures, participants, stakes, risks and societal externalities, and market and other forces.

    Examples include publication, responsible open-source approaches, distribution of and access to tools and datasets, testing/learning/iterating approaches, domain-relevant approaches, and responsible use and resource consumption.

  • 8

    Solved AI-related risks, use and misuse, competition, cooperation, and coordination between countries, companies, and other key actors, given the economic, geopolitical, and national security stakes.

    Examples include cyber-security of AI systems, governance of frontier/most capable systems, approaches to govern misuse by different types of actors, governance of autonomous weapons, avoiding AI development/deployment race conditions at the expense of safety, protocols and verifiable AI treaties, and stably governing the emergence of AGI.

Co-evolve societal systems and what it means to be human in the age of AI

  • 9

    Solved the adaptation, co-evolution, and resilience of human governance institutions, societal infrastructure, and capabilities needed to keep up with and harness AI progress for the benefit of society.

    Examples include AI understanding among leaders in policy, regulation, and deployment, and the adaptation of socio-political systems, civic and governance institutions and infrastructure, education, and other human capabilities and systems to enable human and societal flourishing alongside increasingly capable AI.

  • 10

    Solved what it means to be human in the age of AI, or the problem John Maynard Keynes anticipated when he noted, “Thus for the first time since his creation man will be faced with his real, his permanent problem— how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

    Examples include humanistic ethics alongside powerful AI, a world without economic striving, human exceptionalism, and meaning and purpose.