
AI’s future: utopia or dystopia?

February 6, 2024

In the largest survey of its kind, involving 2,778 researchers who had published in top-tier artificial intelligence (AI) venues, predictions were gathered on the pace of AI progress, the nature of advanced AI systems, and their impacts. The study revealed various insights into the expectations and concerns of machine learning experts regarding the future of AI. Here are key findings from the survey:

  1. AI Milestones by 2028:
    • The aggregate forecasts indicated at least a 50% chance of AI systems achieving several milestones by 2028 (see the illustrative sketch after this list).
    • Predicted achievements included autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model.
  2. Human-Level Performance by 2027:
    • The chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027 and 50% by 2047.
    • The 2047 estimate is 13 years earlier than the corresponding estimate from a similar survey conducted just one year earlier.
  3. Automation of Human Occupations:
    • The chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037 and 50% by 2116.
  4. Concerns about AI Outcomes:
    • While 68.3% of respondents thought good outcomes from superhuman AI are more likely than bad, substantial uncertainty existed.
    • Among net optimists, 48% gave at least a 5% chance of extremely bad outcomes such as human extinction.
    • Among net pessimists, 59% gave at least a 5% chance to extremely good outcomes.
    • Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.
  5. Concerns about AI-Related Scenarios:
    • More than half of the respondents suggested that “substantial” or “extreme” concern is warranted about six different AI-related scenarios.
    • Concerns included issues like misinformation, authoritarian control, and inequality.
  6. Prioritizing Research for Minimizing Risks:
    • There was broad agreement that research aimed at minimizing potential risks from AI systems should be prioritized more.
  7. Divergent Views on AI Progress Speed:
    • While there was disagreement about whether faster or slower AI progress would be better for the future of humanity, consensus existed on prioritizing research for risk mitigation.
  8. Long-Term Value of AI Progress:
    • 70% of experts believed that good outcomes are more likely than bad as AI becomes smarter and more powerful.
  9. Concerns about AI Trends:
    • More than half of the respondents expressed “substantial” or “extreme” concern over troubling AI trends, especially the spread of false and misleading information.
  10. Ethical Considerations:
    • The study did not explicitly address the ethical implications of AI achievements, such as generating songs indistinguishable from those of popular musicians.
  11. Publication:
    • The study, titled “Thousands of AI Authors on the Future of AI,” was posted on the arXiv preprint server on January 5.
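
For readers unfamiliar with the term, the “aggregate forecasts” in item 1 simply pool many individual probability estimates into a single figure. The short Python sketch below illustrates one simple aggregation rule, taking the median across respondents; the numbers and the rule are hypothetical assumptions for illustration only, since this article does not describe the survey’s actual methodology.

    # Minimal, hypothetical sketch of aggregating survey forecasts:
    # pool each respondent's probability that a milestone is reached by 2028
    # by taking the median. The data and the median rule are assumptions for
    # illustration; the survey's actual aggregation method may differ.
    from statistics import median

    # Made-up per-respondent probabilities for one milestone
    # (e.g., "autonomously fine-tune a large language model" by 2028).
    respondent_probabilities = [0.35, 0.60, 0.55, 0.80, 0.45, 0.50, 0.65]

    aggregate = median(respondent_probabilities)
    print(f"Aggregate (median) forecast for the milestone by 2028: {aggregate:.0%}")
    # With these illustrative numbers the aggregate is 55%, i.e. "at least a 50% chance".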

The survey provides a comprehensive overview of expert opinions on the future trajectory of AI, addressing expectations, concerns, and the need for research prioritization to mitigate potential risks associated with AI systems.


More information: Katja Grace et al, Thousands of AI Authors on the Future of AI, arXiv (2024). DOI: 10.48550/arxiv.2401.02843
