AI2050’s Hard Problems Working List
Compiled by James Manyika for the AI2050 Initiative1
What follows is a working list of hard problems we must solve, or get right, for AI to benefit society, framed in response to the following motivating question:
“It’s 2050, AI has turned out to be hugely beneficial to society and generally acknowledged as such. What happened? What are the most important and beneficial opportunities we realized, the hard problems we solved and the most difficult issues we got right to ensure this outcome, and that we should be working on now?”
While we believe the problems described in the working list are multidisciplinary, they are generally aimed at hard scientific and technical problems and societal challenges of different kinds, each representing both an opportunity and a challenge. The list aims at relatively distinct categories of problems to solve2.
This working list makes no claim to being comprehensive, final, or fixed in time. We fully expect the list to continue to evolve as we learn more, as AI's capabilities progress, and as our use of it changes. We plan to update this list over time, revising current categories, including subcategories, and potentially introducing new categories of hard problems to solve, guided by the motivating question.
What follows is the working list, organized thematically [published February 2022]:
- 1. Solved the scientific and technological limitations and hard problems in current AI that are critical to enabling further breakthrough progress, leading to more powerful AI capable of realizing the beneficial and exciting possibilities, including artificial general intelligence (AGI). Examples include generalizability, causal reasoning, higher/meta-level cognition, multi-agent systems, agent cognition, and novel compute architectures.
- 2. Solved AI's continually evolving challenges of safety and security, robustness, performance, and output, and other shortcomings that may cause harm or erode public trust in AI systems, especially in safety-critical applications and uses where the societal stakes and risks are high. Examples include bias and fairness, toxicity of outputs, misapplications, goal misspecification, intelligibility, and explainability.
- 3. Solved the challenges of safety and control, human alignment, and compatibility with increasingly powerful and capable AI, and eventually AGI. Examples include race conditions and catastrophic risks, provably beneficial systems, human-machine cooperation, and challenges of normativity.
- 4. Made game-changing contributions by having AI address one or more of humanity's greatest challenges and opportunities, including in health and the life sciences, climate, human well-being, the foundational sciences (including the social sciences) and mathematics, space exploration, and scientific discovery.
- 5. Solved the economic challenges and opportunities resulting from AI and its related technologies. Examples include new modes of abundance, scarcity and resource use, economic inclusion, the future of work, and network effects and competition, with a particular eye toward countries, organizations, communities, and people who are not leading the development of AI.
- 6. Solved for access, participation, and agency in the development of AI, the growth of its ecosystem, and its beneficial use for countries, companies, organizations, and segments of society and people, especially those not involved in the development of AI. Examples include access to resources for AI development, diversity of participation in the AI ecosystem, equitable access to capabilities and benefits, and disciplinary diversity in the development of AI.
- 7. Solved the challenges and complexities of responsible research, deployment, and sociotechnical embedding of AI into different societal spaces, accounting for different cultures, participants, stakes, risks, societal externalities, and market and other forces. Examples include publication norms; testing, learning, and iterating approaches; distribution of, and access to, tools and datasets; robustness; and responsible use and resource consumption.
- 8. Solved AI-related risks, use and misuse, competition, cooperation, and coordination between countries, companies, and other key actors, given the economic, geopolitical, and national security stakes. Examples include cyber-security of AI systems, governance of autonomous weapons, avoiding AI development/deployment race conditions at the expense of safety, mechanisms for safety and control, protocols and verifiable AI treaties, and stably governing the emergence of AGI.
- 9. Solved the adaptation, co-evolution, and resiliency of human governance institutions, societal infrastructure, and capabilities, to keep up with and harness AI progress for the benefit of society. Examples include understanding of AI by leaders in policy, regulation, and deployment, and the adaptation of socio-political systems, civic and governance institutions and infrastructure, education, and other human-capability systems alongside increasingly capable AI.
- 10. Solved what it means to be human in the age of AI, or John Maynard Keynes' problem when he noted, “Thus for the first time since his creation man will be faced with his real, his permanent problem— how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.” Examples include humanistic ethics alongside powerful AI, a world without economic striving, human exceptionalism, and meaning and purpose.
1. The AI2050 hard problems working list was compiled drawing on research and other initiatives Eric and James have been involved in, and on input from numerous conversations with people at the forefront of researching and developing AI, and with those researching its impacts on society with whom we have been in dialogue.
2. Throughout this list, “solved” should be taken to mean solved, made dramatic advances on, or made progress sufficient to stay significantly ahead of the challenges or emerging issues, as AI itself continues to advance and as the ways society and its actors use or misuse it also evolve.