
10 Key Trends in AI and Machine Learning for 2024

March 5, 2024

Following the introduction of ChatGPT in November 2022, 2023 marked a significant milestone in artificial intelligence. The advancements made over the past year, ranging from a thriving open-source ecosystem to sophisticated multimodal models, have set the stage for substantial progress in AI.

While generative AI continues to captivate the technology industry, there is a growing sense of nuance and maturity as organizations pivot from experimental to practical applications. This year’s trends underscore a deepening sophistication and prudence in AI development and deployment strategies, with a focus on ethics, safety, and the evolving regulatory environment.

Here are the top 10 trends in AI and machine learning to watch in 2024.

Multimodal AI

Multimodal AI extends beyond traditional single-mode data processing by incorporating various types of input, such as text, images, and sound. This approach mirrors human abilities to process diverse sensory information.

During a November 2023 presentation at the EmTech MIT conference, Mark Chen, head of frontiers research at OpenAI, highlighted the importance of multimodal interfaces. He stated, “The world’s interfaces are multimodal. We aim for our models to see and hear as we do, and to generate content that resonates with more than one of our senses.”

OpenAI’s GPT-4 model exemplifies multimodal capabilities, allowing it to process visual and audio inputs. Chen illustrated this by describing a scenario where a user takes photos inside a refrigerator and asks ChatGPT to recommend a recipe based on the ingredients in the images. This interaction could also include an audio component if the user utilizes ChatGPT’s voice mode to verbally request the recipe.
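For developers, this scenario maps onto a short API call. Below is a minimal sketch using OpenAI’s Python SDK and its vision-capable GPT-4 endpoint; the image URL is a hypothetical placeholder, and an API key is assumed to be set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-capable GPT-4 variant at the time of writing
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Suggest a recipe based on the ingredients in this photo."},
                # Hypothetical placeholder URL for the refrigerator photo
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```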

While most generative AI projects today are text-centric, Matt Barrington, Americas emerging technologies leader at EY, emphasized the potential of integrating text, images, and video. He explained, “The true potential of these capabilities will emerge when you can integrate text and conversation with images and video, combining all three and applying them across a range of industries.”

Multimodal AI has diverse practical applications. In healthcare, for example, multimodal models can analyze medical images alongside patient history and genetic data to enhance diagnostic precision. Within organizations, these models can empower employees by extending basic design and coding capabilities to individuals without formal training in those fields.

Barrington noted, “I’m terrible at drawing. But now, I can. I have a decent command of language, so… I can tap into a capability like [image generation], and those ideas that I could never put on paper, I can now have AI bring to life.”

Introducing multimodal capabilities can also enhance models by providing them with new data sources to learn from. Chen explained, “As our models become more adept at understanding language and approach the limits of what they can learn solely from text, we aim to expose them to unprocessed inputs from the world so they can interpret the world independently and draw conclusions from sources such as video or audio data.”

Agentic AI

Agentic AI is a significant advancement that shifts AI from reactive to proactive. AI agents are sophisticated systems that exhibit autonomy, proactivity, and the ability to act independently. Unlike traditional AI systems, which primarily respond to user inputs and follow predetermined programming, AI agents are designed to understand their environment, set goals, and act to achieve those objectives without direct human intervention.

For example, in environmental monitoring, an AI agent could be trained to collect data, analyze patterns, and initiate preventive actions in response to hazards such as early signs of a forest fire. Similarly, a financial AI agent could actively manage an investment portfolio using adaptive strategies that react to changing market conditions in real time.
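At its core, such an agent runs a sense-decide-act loop. The sketch below is a deliberately simplified Python version of the forest-fire example; the sensor and alerting functions are hypothetical stand-ins for real hardware and incident-management APIs, and the rule-based risk check stands in for a learned model.

```python
import time

# Hypothetical stand-ins for real sensor and incident-management APIs
def read_sensors() -> dict:
    return {"temperature_c": 41.0, "humidity_pct": 12.0, "smoke_ppm": 8.5}

def dispatch_alert(message: str) -> None:
    print(f"ALERT: {message}")

def fire_risk(obs: dict) -> bool:
    # Simple rule standing in for a learned model: hot, dry and smoky
    # conditions together suggest early signs of a fire
    return (obs["temperature_c"] > 40
            and obs["humidity_pct"] < 15
            and obs["smoke_ppm"] > 5)

def monitoring_agent(cycles: int = 10, poll_seconds: float = 1.0) -> None:
    """Sense-decide-act loop: observe the environment, evaluate the goal
    (detect fires early) and act without waiting for human input."""
    for _ in range(cycles):
        observation = read_sensors()
        if fire_risk(observation):
            dispatch_alert(f"Possible fire conditions: {observation}")
        time.sleep(poll_seconds)

monitoring_agent()
```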

This advancement is highlighted by computer scientist Peter Norvig, a fellow at Stanford’s Human-Centered AI Institute, who noted that while 2023 was focused on interacting with AI, 2024 will showcase agents capable of completing tasks for users, such as making reservations, planning trips, and connecting with other services.

Combining agentic and multimodal AI opens up new possibilities. For instance, an application designed to identify the contents of an uploaded image can now be created without the need to train a separate image recognition model. This integration allows for the no-code development of computer vision applications, similar to how prompting enabled the development of text-based applications.

Open source AI

Developing large language models and other powerful generative AI systems is a costly endeavor, requiring significant compute resources and vast amounts of data. However, leveraging open source models can help developers reduce costs and expand access to AI technologies. Open source AI refers to publicly available models, typically offered for free, that enable organizations and researchers to build upon existing code and contribute to the development of AI tools.

According to GitHub data from the past year, there has been a notable increase in developer engagement with AI, particularly in the field of generative AI. In 2023, generative AI projects made their way into the top 10 most popular projects on the platform for the first time, with projects like Stable Diffusion and AutoGPT attracting thousands of first-time contributors.

At the beginning of the year, open source generative models were limited in number and often lagged behind proprietary options like ChatGPT in terms of performance. However, the landscape expanded significantly throughout 2023, with the introduction of powerful open source alternatives such as Meta’s Llama 2 and Mistral AI’s Mixtral models. This expansion could potentially change the dynamics of the AI landscape in 2024 by providing smaller, less resourced entities with access to advanced AI models and tools that were previously out of reach.

“It gives everyone easy, fairly democratized access, and it’s great for experimentation and exploration,” commented Barrington.
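In practice, experimenting with these open models can be as simple as pulling a checkpoint from the Hugging Face hub. The sketch below assumes the transformers and accelerate libraries; the model IDs are the public hub names, Llama 2 access is gated behind Meta’s license, and Mixtral in particular has substantial hardware requirements, so a smaller checkpoint may be more practical for exploration.

```python
# Requires: pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # or "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the benefits of open source AI in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```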

Open source approaches can also promote transparency and ethical development, as the availability of the code allows for greater scrutiny, leading to the identification of biases, bugs, and security vulnerabilities. However, experts have expressed concerns about the potential misuse of open source AI to create disinformation and other harmful content. Additionally, building and maintaining open source projects, especially complex and compute-intensive AI models, can be challenging.

Retrieval-augmented generation

In 2023, generative AI tools gained widespread adoption; however, they still face challenges, notably in the form of hallucinations—plausible-sounding but incorrect responses to user queries. This issue has hindered enterprise adoption, as hallucinations in critical business or customer-facing scenarios could have serious consequences. Retrieval-augmented generation (RAG) has emerged as a technique to mitigate hallucinations, potentially revolutionizing enterprise AI adoption.

RAG combines text generation with information retrieval to improve the accuracy and relevance of AI-generated content. By allowing large language models (LLMs) to access external information, RAG helps them produce more accurate and contextually aware responses. This approach also reduces the need to store all knowledge directly in the LLM, leading to smaller model sizes, faster processing speeds, and lower costs.

“You can use RAG to gather a lot of unstructured information, documents, etc., [and] feed it into a model without having to fine-tune or custom-train a model,” explained Barrington.
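A minimal version of that pipeline needs only an embedding model, a similarity search, and a prompt template. The sketch below uses the sentence-transformers library for retrieval; the documents are toy examples, and the assembled prompt would be passed to whatever LLM the application already uses.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy document store; in practice this would be chunks of enterprise documents
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available weekdays from 9 a.m. to 5 p.m. Eastern.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Ground the LLM's answer in retrieved context instead of parametric memory."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The assembled prompt would be sent to the application's LLM of choice
print(build_prompt("How long do I have to return an item?"))
```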

These advantages are particularly appealing for enterprise applications where current, factual knowledge is essential. For instance, businesses can leverage RAG with foundation models to create more effective and informative chatbots and virtual assistants.

Customized enterprise generative AI models

While large, general-purpose generative AI tools like Midjourney and ChatGPT have captured consumer attention, the business landscape is seeing a shift towards smaller, specialized models tailored to niche requirements. While developing a new model from scratch is resource-intensive, many organizations are modifying existing AI models to meet their specific needs, such as tweaking architecture or fine-tuning on domain-specific datasets. This approach can be more cost-effective than building a new model or relying on API calls to public large language models (LLMs).
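One common way to adapt an existing model, rather than train one from scratch, is parameter-efficient fine-tuning. The sketch below uses LoRA via Hugging Face’s peft library; the base model ID and hyperparameters are illustrative, not a tested recipe.

```python
# Requires: pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, train on a domain-specific dataset with the standard
# transformers Trainer; only the small LoRA adapters are updated.
```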

Customized generative AI models offer the advantage of catering to niche markets and user needs. These tailored tools can be developed for a wide range of scenarios, from customer support to supply chain management to document review. This customization is particularly valuable in sectors with specialized terminology and practices, such as healthcare, finance, and legal industries.

For many business use cases, massive LLMs are simply unnecessary. While ChatGPT might be suitable for a consumer-facing chatbot handling a wide variety of queries, it may not be the best choice for narrower enterprise applications. As the capabilities of different developers’ models converge, enterprises are expected to explore a more diverse range of models in the coming years.

Building a customized model instead of using a public tool also enhances privacy and security, as organizations have greater control over their data. This aspect is particularly crucial for tasks involving sensitive personal data, where sending data to a third party may not be desirable.

With stricter AI regulation on the horizon, organizations may increasingly focus on proprietary models to ensure compliance and data security. This shift could lead to a greater emphasis on domain-specific, private models rather than relying solely on large, publicly available LLMs trained on diverse datasets from the internet.

Need for AI and machine learning talent

Developing, training, and deploying machine learning models is a complex task, especially in a large organizational IT environment. As the demand for AI and machine learning talent continues to grow, professionals who can bridge the gap between theory and practice are increasingly sought after. This need is particularly evident in the field of MLOps, which focuses on deploying, monitoring, and maintaining AI systems in real-world settings.

According to a recent O’Reilly report, the top skills needed for generative AI projects include AI programming, data analysis and statistics, and operations for AI and machine learning. However, these skills are currently in short supply, posing a challenge for organizations looking to leverage AI technologies.

In 2024, organizations are expected to prioritize hiring talent with these critical skills, not just in big tech companies but across industries. As AI becomes more integrated into business operations, building internal AI and machine learning capabilities will be crucial for digital transformation.

Diversity is also crucial in AI initiatives, from technical teams building models to the board overseeing these projects. Ensuring a diverse team can help challenge biases in training data and improve the overall outcomes of AI projects. “One of the big issues with AI and the public models is the amount of bias that exists in the training data,” explained Crossan. “And unless you have that diverse team within your organization that is challenging the results and challenging what you see, you are going to potentially end up in a worse place than you were before AI.”

Shadow AI

The increasing accessibility of AI is leading to a rise in shadow AI within organizations, where employees use AI tools without explicit approval or oversight from the IT department. This trend is particularly common for easy-to-use AI chatbots, which employees can quickly experiment with in their web browsers, bypassing official IT review processes.

Shadow AI often emerges when employees seek quick solutions or want to explore new technology faster than official channels allow. While this demonstrates a proactive and innovative spirit, it also poses risks, as end users may not be aware of important considerations such as security, data privacy, and compliance. For example, an employee might paste confidential material into a public-facing LLM without realizing the risk of exposing sensitive information, such as trade secrets, to third parties.

“There’s a bit of a fear factor and risk angle that’s appropriate for most enterprises, regardless of sector, to think through,” noted Barrington. Once data is exposed to public models, it cannot be retracted, highlighting the importance of ensuring proper oversight and approval processes for AI usage within organizations.
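One concrete mitigation is to scrub obviously sensitive patterns before text ever reaches a public model. The sketch below is a deliberately naive illustration; the regex patterns are examples only, and production deployments would typically rely on dedicated data loss prevention tooling instead.

```python
import re

# Illustrative patterns only; real deployments use dedicated DLP tooling
REDACTION_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
    "API_KEY": r"\bsk-[A-Za-z0-9]{20,}\b",
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = re.sub(pattern, f"[{label} REDACTED]", text)
    return text

print(redact("Ask about order 42: jane.doe@corp.com, key sk-abc123def456ghi789jkl012"))
```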

In 2024, organizations will need to address the challenge of shadow AI through governance frameworks that balance innovation with privacy and security concerns. This may involve establishing clear policies for acceptable AI use, providing approved platforms, and fostering collaboration between IT and business leaders to understand departmental AI needs.

According to Barrington, “The reality is, everybody’s using it.” Recent EY research found that 90% of respondents used AI at work. Therefore, it is essential for organizations to align employees with ethical and responsible AI use practices.

A generative AI reality check

As organizations move beyond the initial excitement of generative AI and into actual adoption and integration, they are likely to encounter a phase known as the “trough of disillusionment,” as described in the Gartner Hype Cycle.

“We’re definitely seeing a rapid shift from what we’ve been calling this experimentation phase into [asking], ‘How do I run this at scale across my enterprise?'” noted Barrington.

During this phase, organizations may face challenges such as limitations in output quality, security and ethics concerns, and difficulties integrating generative AI with existing systems and workflows. The complexity of implementing and scaling AI in a business environment can be more significant than anticipated, including tasks like ensuring data quality, training models, and maintaining AI systems in production.

“It’s actually not very easy to build a generative AI application and put it into production in a real product setting,” added Luke.

Despite these challenges, this phase can lead to a more realistic and nuanced understanding of AI’s capabilities and limitations. To move past this phase successfully, organizations should set realistic expectations for AI projects, clearly tie them to business goals and practical use cases, and have a clear plan for measuring outcomes.

“If you have very loose use cases that are not clearly defined, that’s probably what’s going to hold you up the most,” explained Crossan.

Increased attention to AI ethics and security risks

The rise of deepfakes and sophisticated AI-generated content has raised concerns about misinformation, manipulation, identity theft, and fraud. AI can also enhance the effectiveness of ransomware and phishing attacks, making them more convincing and harder to detect.

Efforts are underway to develop technologies for detecting AI-generated content, but this remains a challenging task. Current AI watermarking techniques are relatively easy to bypass, and existing AI detection software can be prone to false positives.

The increasing prevalence of AI systems underscores the importance of ensuring they are transparent and fair. This includes carefully vetting training data and algorithms for bias. Crossan emphasized that these ethics and compliance considerations should be integrated throughout the AI development process.

“When implementing AI, you have to consider the controls you’ll need,” she said. “This helps you plan for regulation concurrently with your AI experimentation, rather than realizing later that you need controls.”

Luke also noted that smaller, more narrowly tailored models can be safer and more ethical. “These smaller, tuned, domain-specific models are just far less capable than the really big ones — and we want that,” he said. “They’re less likely to produce unwanted outputs because they’re not as capable.”

Evolving AI regulation

In 2024, AI regulation is becoming a focal point, with laws, policies, and industry frameworks rapidly evolving both in the U.S. and globally. This shift is driven by concerns about ethics and security, particularly regarding deepfakes, AI-generated content, and potential misuse of AI in fraud and manipulation.

The EU’s AI Act, which is close to adoption, would be the world’s first comprehensive AI law. It aims to ban certain uses of AI, impose obligations on developers of high-risk AI systems, and require transparency from companies using generative AI. Noncompliance could lead to significant fines. Existing regulations such as GDPR are also expected to have a significant impact, particularly regarding the right to be forgotten and data erasure in the context of AI systems.

While the U.S. lacks comprehensive federal AI legislation comparable to the EU’s AI Act, recent executive orders and agency actions suggest a growing focus on AI regulation. President Biden’s executive order, for example, mandates safety testing for developers of powerful AI systems and establishes safeguards against risks such as the use of AI to engineer dangerous biological materials.

Organizations are advised not to wait for formal regulations to think about compliance. Engaging with clients and proactively addressing potential regulatory requirements can help businesses stay ahead of the curve. With the EU potentially leading the way in AI regulation, other regions, including the U.S., may need to adapt to these new standards.
