
Emerging AI Tools Raise Urgency for Responsible AI Frameworks, MIT and BCG Survey Finds

March 26, 2024 | By admin

New artificial intelligence (AI) platforms and tools are continuously entering the market, giving developers, data scientists, and business analysts powerful new resources. However, these technologies are spreading faster than organizations can manage the responsibility and accountability that come with deploying AI systems.

A recent survey of 1,240 executives, conducted by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG), highlighted the challenges posed by the proliferation of AI, including the rise of what the researchers term "shadow AI": AI tools developed outside formal organizational structures.

Elizabeth Renieris of Oxford's Institute for Ethics in AI, along with David Kiron (MIT SMR) and Steven Mills (BCG), emphasized that while AI holds immense promise, it also presents significant risks. The study notes that the rapid advancement of generative AI tools such as ChatGPT, DALL-E 2, and Midjourney is introducing unpredictable risks to organizations that are unprepared for their wide range of applications.

Many organizations are struggling to manage the widespread use of AI tools across their enterprises. The researchers warn that this trend, combined with the increasing reliance on third-party AI tools and the rapid adoption of generative AI, exposes companies to new commercial, legal, and reputational risks.

The researchers emphasize the importance of responsible AI frameworks, which are defined as a set of principles, policies, tools, and processes to ensure that AI systems are developed and operated in a manner that benefits individuals and society while also driving business impact.
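The survey does not prescribe any particular implementation, but one way teams sometimes make such a framework concrete is to encode its required controls as machine-readable policy that can be checked automatically. The Python sketch below is a minimal, hypothetical illustration of that idea: the control names, the `AIUseCase` record, and the `review` function are all invented for this example and are not taken from the MIT SMR/BCG study.

```python
from dataclasses import dataclass, field

# Hypothetical controls a responsible AI framework might require.
# These names are illustrative, not drawn from the MIT SMR/BCG study.
REQUIRED_CONTROLS = {
    "human_oversight",     # a person can review or override outputs
    "data_provenance",     # training/input data sources are documented
    "bias_testing",        # outputs are tested across user groups
    "incident_reporting",  # a channel exists to report AI failures
}

@dataclass
class AIUseCase:
    """A single AI system or tool under review (illustrative)."""
    name: str
    origin: str  # "internal" or "third-party"
    controls: set = field(default_factory=set)

def review(use_case: AIUseCase) -> list:
    """Return the required controls this use case is missing.

    The same checklist applies regardless of origin, echoing the
    researchers' point that responsible AI frameworks must cover
    both internally built and third-party tools.
    """
    return sorted(REQUIRED_CONTROLS - use_case.controls)

chatbot = AIUseCase(
    name="support-chatbot",
    origin="third-party",
    controls={"human_oversight", "incident_reporting"},
)
print(review(chatbot))  # ['bias_testing', 'data_provenance']
```

A real framework would of course involve far more than a checklist, but representing its policies as data rather than prose makes gaps auditable across every AI tool in the enterprise, including shadow AI once it is discovered.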

Despite the growing need for responsible AI, the researchers note that amid industry layoffs, many companies are scaling back the internal teams and resources devoted to responsible AI. This retreat in investment comes precisely when responsible AI is most needed to address the risks these technologies pose.

To mitigate the risks associated with third-party AI tools, the researchers suggest a multi-pronged approach, including evaluating vendors’ responsible AI practices, incorporating contractual language mandating adherence to responsible AI principles, and adhering to relevant regulatory requirements and industry standards.
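One way to operationalize the vendor-evaluation prong of that approach is a simple weighted rubric. The sketch below is a hypothetical illustration only; the questions, weights, and passing threshold are assumptions made for this example, not criteria from the survey.

```python
# Hypothetical screening rubric for third-party AI vendors.
# Questions, weights, and the 70% threshold are illustrative
# assumptions, not criteria taken from the MIT SMR/BCG survey.
CHECKLIST = [
    ("Publishes a responsible AI policy", 2),
    ("Contract includes responsible-AI adherence clause", 3),
    ("Documents model training data sources", 2),
    ("Complies with applicable regulations and standards", 3),
]

def score_vendor(answers: dict) -> tuple:
    """Sum the weights of satisfied criteria and flag low scorers."""
    total = sum(weight for _, weight in CHECKLIST)
    earned = sum(weight for question, weight in CHECKLIST
                 if answers.get(question, False))
    passed = earned / total >= 0.7  # illustrative threshold
    return earned, total, passed

answers = {
    "Publishes a responsible AI policy": True,
    "Contract includes responsible-AI adherence clause": True,
    "Documents model training data sources": False,
    "Complies with applicable regulations and standards": True,
}
print(score_vendor(answers))  # (8, 10, True)
```

A rubric like this makes the three prongs (vendor practices, contractual language, regulatory adherence) show up as explicit, reviewable line items rather than ad hoc judgment calls during procurement.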

Overall, the researchers stress that responsible AI frameworks should apply to both internally developed and third-party AI tools, as ethical principles must be upheld regardless of the tool’s origin. As AI continues to evolve, responsible AI frameworks will play a crucial role in ensuring that AI systems are developed and operated in a manner that benefits society.
