
Multidisciplinary Perspectives on Artificial Intelligence (AI)

December 20, 2024 | By admin

This document provides an overview of key discussions around Artificial Intelligence (AI) from a multidisciplinary perspective, covering technological advancements, application domains, ethical considerations, and societal impacts. The document highlights AI’s potential across various fields, including decision-making, healthcare, manufacturing, and the public sector, while also addressing challenges related to explainability, bias, privacy, and governance. Crucially, it proposes frameworks and research agendas to navigate these complexities, emphasizing the need for a human-centric approach and international collaboration.

Timeline of Main Events & Concepts Discussed

  • 1960s: Laws against “red-lining” in loan applications are established in the United States, serving as early precedents for regulating decision-making systems.
  • 1980s: Initial development of AI and expert systems in decision making (as per references in the document).
  • 1987: Kusiak’s research discusses potential of AI in automation within manufacturing.
  • 1992: Jain & Mosier discuss AI’s potential to monitor and control manufacturing processes.
  • 2000: Edwards, Duan, & Robins publish on the use of expert systems in business decision making.
  • 2002: Lee discusses AI-based systems for manufacturing processes.
  • 2011: Bostrom and Yudkowsky publish on the ethics of AI.
  • 2013: Abbot & Marohasy’s study on AI applications in rainfall forecasting.
  • 2014: Milano, O’Sullivan, & Gavanelli’s research explores Decision Support Systems and Deep Neural Networks.
  • 2016: Klaus Schwab publishes “The Fourth Industrial Revolution,” highlighting the societal transformation occurring due to technological advancements.
  • 2017: Research by Li et al on AI in Intelligent manufacturing.
  • 2017: Nikolic et al publish on the benefits of integrating AI into manufacturing.
  • 2017: Carrasquilla & Melko use ML for phase identification in known systems.
  • 2018: The European Union’s General Data Protection Regulation (GDPR) is introduced, including Article 22, which provides rights against solely automated decision-making.
  • 2018: The UK government introduces a code of conduct for “data-driven technology” in health and social care.
  • 2018: UKRI launches a funding call for Centres for Doctoral Training (CDTs) focused on AI.
  • 2018: Giannetti, Lucini, & Vadacchino publish work using ML for phase identification.
  • 2019 (Early): OpenAI releases a smaller version of GPT-2 due to concerns over potential misuse.
  • 2019 (July): The paper by Dwivedi et al., “Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy,” is submitted for publication on July 11, 2019, and accepted on August 3, 2019.
  • 2019 (October): 16 funded CDTs for AI training start in the UK.
  • Ongoing: Research and development in various aspects of AI, including:
      • AI in decision making, with a focus on deep learning, neural networks, and algorithmic systems.
      • Applications of AI across diverse domains like healthcare, manufacturing, education, and robotics.
      • The intersection of AI and Big Data for analytics and predictive capabilities.
      • Ethical, societal, and governance implications of AI systems.
      • The importance of data privacy and protection related to AI systems.
      • The need for standards and explainability in AI to ensure trustworthiness.
      • The exploration of AI’s potential in the developing world, specifically India.
      • The use of blockchain for AI data validation and scoring.
      • Human-centered design of AI, addressing cultural and behavioral aspects.
      • AI’s role in fundamental sciences.
      • The ongoing exploration of the nature of “Artificial General Intelligence” (AGI).

Key Themes and Ideas:

AI and Decision Making:

AI is increasingly used in decision-making systems, leveraging techniques like Artificial Neural Networks, Deep Learning, and other algorithmic methods.

Studies show the potential of these techniques for tasks like rainfall prediction, demonstrating their power in solving pattern recognition problems.

Studies have posited the benefits of utilising deep neural networks to improve the use of AI; however, the use of deeper networks and big datasets is unlikely to develop meaning in the human context, requiring further interdisciplinary research to unlock this area (Mitchell, 2019). This underscores the need for human-centered research on AI’s impact on decision making.

Application Domains:

The literature identifies numerous applications of AI across various sectors:

Healthcare: AI assists with diagnosis, predictive capabilities and digital imaging.

Manufacturing: AI is transforming factory processes and is expected to enable intelligent manufacturing and the “smart factory” via automation.

Education & Policy: AI’s role is being explored for its impact on education.

Robotics: AI enables the development of intelligent robots.

Digital Imaging: AI assists in digital imaging analysis.

Existing factory processes will increasingly be analyzed for automation (Lee, 2002; Löffler & Tschiesner, 2013; Yang, Chen, Huang, & Li, 2017).

Data and Information:

Big Data’s integration with AI is a focal point, particularly for its potential to provide analytical insights and predictive capabilities.

Big Data Analytics (BDA), with its “volume, velocity, variety, veracity, and value” framework, when combined with AI, can transform areas like health and manufacturing (Abarca-Alvarez et al., 2018; Shukla et al., 2018; Spanaki et al., 2018; Wang and Wang, 2016).

AI is being applied to solve Big Data problems offering significant value (Rubik & Jabs, 2018).

Explainability and AI Systems:

There is a growing need for explainable AI (XAI), where the reasoning behind AI decisions can be understood, particularly in automated decision-making systems.

The EU’s GDPR and similar regulations demand transparency in automated decision-making.

The authors emphasize the importance of explanations for gaining trust and developing applications in areas like safety-critical systems.

Research should be conducted on the design of explanations for AI systems that are tailored to different stakeholders (developers, end-users, etc.) (Preece, 2018).

“Can explanations from a single central approach be tailored to different classes of explainee?” is a central research question.

Advances in data visualization are cited as a potential approach to explain the “reasoning” behind AI (rather than relying on solely written explanations).
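
The idea of one central explanation being tailored to different classes of explainee can be sketched in a few lines. The following is purely illustrative (the feature names, weights, and audience labels are invented, and a simple linear scoring model stands in for a real AI system): the per-feature contributions act as the shared “central” explanation, rendered differently for a developer and an end-user.

```python
# Illustrative sketch: one central explanation (per-feature contributions of
# a linear model) rendered differently for different explainees.
# All feature names and weights here are hypothetical.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def contributions(applicant):
    """Central explanation: each feature's signed contribution to the score."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

def explain(applicant, audience):
    contrib = contributions(applicant)
    if audience == "developer":
        # Full numeric breakdown, sorted by absolute impact.
        return sorted(contrib.items(), key=lambda kv: -abs(kv[1]))
    if audience == "end_user":
        # Plain-language summary of the single most influential feature.
        top = max(contrib, key=lambda f: abs(contrib[f]))
        direction = "raised" if contrib[top] > 0 else "lowered"
        return f"Your {top.replace('_', ' ')} {direction} the decision score most."
    raise ValueError(f"unknown audience: {audience}")

applicant = {"income": 3.0, "debt_ratio": 4.0, "years_employed": 5.0}
print(explain(applicant, "developer"))
print(explain(applicant, "end_user"))
```

The design point is that the underlying explanation is computed once; only the rendering varies per stakeholder class, which is one possible answer to the research question above.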

Integration Challenges & Viewpoints:

AI components developed independently might have differing viewpoints based on data and ecosystem conventions.

“How can the inferences delivered by different AI components be integrated coherently when they may be based on different data, and subject to different ecosystem conventions… ?” is a key question.

The text also alludes to the fact that human beings have different “viewpoints” due to different perception and inference capabilities (Mercier & Sperber, 2017). The question is whether similar gaps are possible for implementations of AI within organizations.

AI Fitness:

Three types of fitness are described as important considerations for implementing AI systems: narrow fitness, broad fitness, and adaptiveness.

There is a tension between these types of fitness: focusing on one can diminish the others.

AI in Decision Making:

Early AI developments aimed to mimic human decision-making (such as chess).

AI systems can be used to support or replace human decision-makers.

There’s a call to understand the factors affecting AI’s success in decision-making, particularly within the context of big data.

AI in Digital Marketing:

AI offers great opportunities to enhance campaign creation, targeting, and evaluation.

Three key stakeholders are identified: Brands, advertisers/marketing agencies, and consumers.

Ethical considerations around consumer privacy and the collection of personal data are highlighted as the “personalization-privacy paradox” (Gutierrez et al., 2019).

The metrics used to evaluate the effectiveness of AI should also be explored.

AI in the Developing World (Case of India):

The development and adoption of AI in developing countries are tied to complementary assets (organizational, managerial, and social) and affordable technology.

The national IT infrastructure of India and complementary assets available for the AI domain are assessed as lagging in a global context.

A research agenda should explore possibilities of AI-based affordable technologies for this area.

People-Centred Design & Social Change:

AI’s impact on people’s lives should be the focus of innovation.

Inclusive Design should be employed to meet the needs of a wide range of people with different backgrounds.

The document categorizes challenges and opportunities for AI goods and services in relation to social change and consumer behavior into: (i) taste, (ii) fear and (iii) cultural proximity.

“Taste” is identified as a double-edged sword with the potential for initial rejection and enthusiasm-driven “bubbles” followed by “busts” (Garber, 2000; Breuninger & Berg, 2001; Emmett, 2000).

A multidisciplinary Culture-Based Development (CBD) ‘toolkit’ that combines moral philosophy, consumer behavior, behavioral economics, and regional economics should be employed (Smith, 1759; Becker, 1996; Scitovsky, 1976; Kahneman & Tversky, 1979; Torre, 2008; Tubadji & Nijkamp, 2016).

Research is called for to measure the level of fear of AI and regional variation in preferences and choices (Kahneman & Tversky, 1979; Tubadji et al., 2016).

AI in Fundamental Sciences:

AI, especially machine learning, is increasingly used in fundamental science to analyze complex data.

In physics, AI is used to study the cosmos and elementary particles.

ML can be used to learn material properties and phases of matter from measured data without labelled datasets (Carrasquilla & Melko, 2017; Giannetti, Lucini, & Vadacchino, 2018; Melko, 2017).

The convergence of Machine Learning (ML) and High-Performance Computing (HPC), or High-Performance Data Analytics (HPDA), is a particularly promising trend.

The need to “unbox” AI algorithms (i.e., make them more explainable) is discussed as a way to engender wider acceptance.
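
The idea of learning phases of matter without labelled data can be illustrated with a toy sketch. Everything here is invented for illustration: synthetic 1-D “measurements” drawn from two well-separated distributions stand in for experimental data, and a minimal k-means clustering stands in for the unsupervised methods the cited works actually use.

```python
# Toy sketch of unsupervised "phase identification": cluster synthetic
# measurements into two groups without labels, using a minimal 1-D k-means.
# The data and setting are invented purely for illustration.
import random

def kmeans_1d(points, k=2, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two synthetic "phases": values near 0 (disordered) and near 1 (ordered).
rng = random.Random(1)
data = [rng.gauss(0.0, 0.05) for _ in range(100)] + \
       [rng.gauss(1.0, 0.05) for _ in range(100)]
low, high = kmeans_1d(data)
print(round(low, 2), round(high, 2))  # centers near 0 and 1
```

The recovered cluster centers correspond to the two phases even though no point was ever labelled, which is the essence of the unsupervised approach described above.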

AI Training and Skills:

There is a high demand for data scientists and the development of data skills is critical for the field.

Fundamental sciences provide opportunities to train data scientists and develop transferrable skills.

The document references funding from UK Research & Innovation (UKRI) for Centers for Doctoral Training (CDTs) focused on AI.

AI Governance and Public Policy:

Government guidelines for automated decision-making systems are now being introduced, especially around data privacy (GDPR).

AI is becoming pervasive and included in diverse components from smart assistants to enterprise products, cloud libraries and bespoke data science applications.

A key concept is that AI inferences must be integrated cohesively.

AI systems may help governments counteract public distrust, for instance by reducing opportunities for corruption among public servants.

AI systems have the potential to create trust, especially through micro-targeted policies and real-time flexible actions by street-level bureaucrats.

AI in the public sector presents both opportunities and challenges.

The TAM-DEF framework (Transparency & audit, Accountability & Legal issues, Misuse protection, Ethics, Fairness & Equity, Digital Divide & Data Deficit) is introduced to address AI’s public policy challenges.

AI Ethics & Misuse:

Ethical considerations around data privacy, human values, and the potential for misuse are significant.

AI systems must be sensitized to human values such as respect, kindness, compassion and should have a preferential duty towards children and vulnerable people.

Data protection laws are crucial.

The potential for reinforcement learning to have “unpredictable consequences” is identified as a concern (OpenAI’s GPT-2 language generator is referenced as an example).

A global alliance for AI standardization is proposed along with a transparent DEEP-MAX scorecard for rating AI systems.

The DEEP-MAX scorecard assesses AI on Diversity, Equity, Ethics, Privacy, Data protection, Misuse protection, Audit and Transparency, and Digital Divide.

A major challenge is to design AI to protect it against misuse (such as a facial recognition system being misused in undemocratic regimes).
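
The document describes the DEEP-MAX scorecard only at the level of its dimensions. A minimal data-structure sketch might look like the following; the dimension names follow the document, but the 0-to-5 scale and the simple averaging rule are assumptions made here for illustration, not part of the original proposal.

```python
# Minimal sketch of a DEEP-MAX-style scorecard. The dimensions follow the
# document; the 0-5 scale and averaging rule are illustrative assumptions.
from dataclasses import dataclass, field

DIMENSIONS = [
    "diversity", "equity", "ethics", "privacy",
    "misuse_protection", "audit_and_transparency", "digital_divide",
]

@dataclass
class DeepMaxScorecard:
    system_name: str
    scores: dict = field(default_factory=dict)  # dimension -> score in 0..5

    def rate(self, dimension, score):
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if not 0 <= score <= 5:
            raise ValueError("score must be in 0..5")
        self.scores[dimension] = score

    def overall(self):
        """Average across all dimensions; unrated dimensions count as 0."""
        return sum(self.scores.get(d, 0) for d in DIMENSIONS) / len(DIMENSIONS)

card = DeepMaxScorecard("example-model")  # hypothetical system name
card.rate("privacy", 4)
card.rate("ethics", 5)
print(card.overall())
```

A real scorecard would of course need agreed rating criteria per dimension; the point of the sketch is only that a transparent, machine-readable rating record is straightforward to define.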

Blockchain for AI Safety:

Blockchain technology is suggested as a mechanism to ensure the safety, auditability and trustworthiness of AI systems.

Blockchain can be used to certify training data, provide tamperproof DEEP-MAX scores, and support an activation-atlas-based AI rating system.

Blockchain may help for public policy practitioners to implement misuse protection (by creating tamperproof records of changes and who ordered the changes in a system).

“Training data certification: Blockchain can provide a trusted mechanism to certify the quality of training data for an AI system module.”
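
The training-data certification idea can be sketched as a simple hash chain: each block commits to a dataset checksum and to the previous block, so any later tampering is detectable. This is a minimal illustration, not a real distributed ledger; the function names and record fields are invented.

```python
# Illustrative hash-chain sketch of "training data certification": each block
# commits to a dataset checksum and the previous block, so later tampering is
# detectable. A real deployment would use an actual distributed ledger.
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(prev_hash: str, dataset_bytes: bytes, note: str) -> dict:
    block = {
        "prev_hash": prev_hash,
        "dataset_checksum": sha256(dataset_bytes),
        "note": note,
    }
    # Hash the block body (block_hash itself is not yet present here).
    block["block_hash"] = sha256(json.dumps(block, sort_keys=True).encode())
    return block

def verify_chain(chain) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "block_hash"}
        if sha256(json.dumps(body, sort_keys=True).encode()) != block["block_hash"]:
            return False  # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["block_hash"]:
            return False  # chain linkage was broken
    return True

chain = [make_block("0" * 64, b"dataset-v1", "initial training set certified")]
chain.append(make_block(chain[-1]["block_hash"], b"dataset-v2", "augmented set"))
print(verify_chain(chain))     # True
chain[0]["note"] = "tampered"  # any edit breaks verification
print(verify_chain(chain))     # False
```

The same mechanism extends naturally to the tamperproof DEEP-MAX scores and change records mentioned above: anything committed to an earlier block cannot be silently rewritten.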

AI Governance and Unintended Consequences:

The lack of AI governance is highlighted and the potential for unintended consequences is noted (Janssen & Kuk, 2016b).

Society norms and values should be embedded in AI.

“Algorithms may develop biases due to measurements problems, their training data, reinforce historical discrimination, favour a political orientation, reinforce undesired practices or result into unanticipated outcomes due to hidden complexities (Janssen & Kuk, 2016b).”

“Compliance-by-design” is a suggestion for the best way to ensure values and regulations are embedded in AI systems.

Future of AI (AGI):

There is a differentiation between task-specific AI and human-centric cognitive AI capable of multiple intelligent tasks (Artificial General Intelligence, AGI).

The focus is currently on domain-specific AI rather than AGI.

The document also includes a discussion of AI in the context of the UN’s Sustainable Development Goals.

Research Agenda & Recommendations:

The authors provide many research agendas and recommendations, which can be summarised as follows:

  • Explainability and Transparency: Investigate methods to make AI decision-making processes more transparent and understandable, particularly for different stakeholders.
  • Data Integration: Examine how inferences derived from different AI components can be integrated coherently, especially when they rely on diverse data and ecosystem conventions.
  • Ethical Implications: Conduct thorough research on the ethical implications of AI, especially in relation to privacy, bias, misuse, human values, and digital divides.
  • Cultural Context: Investigate the influence of culture, taste, and fear on the adoption and perception of AI technologies in different regions.
  • AI Governance: Develop comprehensive public policy frameworks that can effectively regulate AI development and deployment, including the use of the TAM-DEF framework.
  • Blockchain Integration: Explore the potential of blockchain to enhance AI safety, transparency, and trustworthiness.
  • AI in the Developing World: Research should be conducted on making AI more accessible to developing countries (including the development of “affordable tech”).
  • Human Centric Design: New ways of designing AI that focuses on “softer” goals (such as enhancing well-being and solving real-world problems).

Conclusion:

This document highlights the significant opportunities presented by AI across various sectors but also emphasizes the challenges related to ethics, governance, and social implications. The authors emphasize that a multidisciplinary approach, combining insights from technology, business, the humanities, and science, is essential for navigating the complexities of AI development and deployment. The focus must remain on human-centric design and ensuring the benefits of AI are realized responsibly and inclusively. It also underscores the importance of a diverse and holistic research agenda to address these complex issues.

This document provides a broad overview of the key topics discussed in the provided source text. It also includes direct quotes from the source text which should help provide a quick understanding of the document’s most important ideas.

FAQs: AI Applications, Ethics, and Societal Impact

What are some of the key application domains where AI is currently being researched and implemented? AI is being applied across numerous sectors, including but not limited to: robotics, healthcare and informatics, digital imaging, education and policy, and manufacturing. In healthcare, AI is being used for diagnosis, predictive capabilities, and analysis of medical data. Within manufacturing, AI is utilized for intelligent automation of factory processes, real-time monitoring, and quality control. Other sectors are exploring AI’s potential for data analysis, pattern recognition, and optimization.

What are the major challenges associated with using AI for decision-making, and what is the current research agenda to address these? A significant challenge is explainability – the ability to understand the reasoning behind AI-driven decisions. Current research is focused on developing methods that can provide clear explanations of AI decision-making processes, particularly for critical applications like medical diagnoses and safety systems. Researchers are exploring if explanations can be tailored for various user types (developers, end-users, domain experts) and investigating whether data visualization techniques can offer an alternative to verbal explanations. The integration of AI components and their inferences, especially when based on diverse datasets and conventions, also needs further study. Furthermore, understanding the factors affecting AI’s success in decision-making, including the impact of data quality and potential bias, is crucial.

How are ethical considerations and data privacy being addressed in the context of AI, especially in areas like digital marketing? The ethical issues of AI use, especially data privacy, are being addressed through frameworks like the TAM-DEF framework, which emphasizes Transparency, Accountability, Misuse protection, Ethics, Fairness, and addressing the Digital Divide and Data Deficit. The framework proposes a DEEP-MAX scorecard system which includes ratings for privacy, ethics, diversity, equity, auditability, consistency across geographies, and misuse protection. Specifically in digital marketing, the personalization-privacy paradox is a significant concern, as the use of personal data for targeted advertising raises privacy concerns. A better understanding of how stakeholders navigate these concerns is needed. Also the asymmetry of information between companies and consumers needs to be addressed. There is growing awareness of the need for strong data protection laws and incorporating human values into AI system design.

What are “complementary assets” and “affordable tech” in the context of AI adoption, particularly in developing countries, like India? Complementary assets are organizational, managerial, and social factors necessary to derive value from IT investments, including AI. These encompass organizational culture and processes, management support, national IT infrastructure, education, and regulatory frameworks. For AI to succeed, these complementary assets must be in place. In the context of developing countries, “affordable tech” refers to the development and implementation of cost-effective AI technologies tailored to the local market’s needs and infrastructure, since these assets are often underdeveloped in countries like India. The interplay of these elements is crucial for the success of AI adoption.

How is AI being used in fundamental scientific research, and what are the opportunities and challenges within this space? In the physical sciences, AI and machine learning are used to analyze vast datasets related to the cosmos, particle physics, and complex quantum systems. Machine learning algorithms are being used for phase identification in quantum materials, which could lead to new technological advances like topological superconductors. One key opportunity is the convergence of machine learning and high-performance computing. However, challenges exist, including ensuring software compatibility with the rapid evolution of AI. Also, training a new generation of data scientists who understand both AI and fundamental science is needed.

What impact can AI have on the public sector and how can it improve government service delivery? AI has the potential to transform government operations by enabling personalized services tailored to individual citizen needs. AI can help address challenges like resource shortages, scale of operations, and cumbersome, standardized processes by utilizing smart systems, intelligent adaptive forms, and predictive service delivery. AI can potentially reduce public servant corruption and improve citizen trust by ensuring fair and efficient service provisions. The development of value-aware systems that conform to human values and ensure transparency are of utmost importance. Governance of AI algorithms and systems is needed to ensure that the benefits are gained and risks avoided. A multi-pronged public policy framework should test the AI systems prior to use, particularly when it comes to their fairness, safety, and social desirability.

What is the significance of “people-centered design” when developing AI, and what factors affect how individuals embrace or reject AI goods and services? People-centered design emphasizes empathy and iterative solutions to solve the real needs of people, focusing on inclusiveness and meeting the needs of a wide variety of users, regardless of their background. This design approach is crucial in the development of AI. The acceptance of AI products and services is affected by three primary factors: (i) Taste – where people’s slow adaptation to technology can impact adoption rates, as can over-enthusiasm leading to bubbles, (ii) Fear – where fear of new technologies or loss of jobs impacts public attitudes, and (iii) Cultural proximity – where cultural differences influence acceptance and the degree of perceived distance between humans and AI. Understanding these factors through a multidisciplinary approach combining moral philosophy, consumer behavior, behavioral economics and regional economics is essential for managing how people respond to AI.

How can Blockchain technology help address safety, transparency, and misuse concerns related to AI? Blockchain can provide a tamper-proof mechanism for certifying AI training data, ensuring the AI systems are trained using diverse datasets. Blockchain can also store and verify DEEP-MAX scores, enhancing the transparency and auditability of AI models. It helps protect against misuse of systems by establishing an immutable record of changes, particularly with systems that are critical to safety. The data surrounding the AI activation atlas can also be stored on the blockchain which can reveal internal neural net features, improving transparency. Blockchain, therefore, provides a way to build trust and ensure the safe implementation of AI.

Glossary of Key Terms

  • Artificial Intelligence (AI): A broad term for computer systems able to perform tasks normally requiring human intelligence, such as learning, problem-solving, and decision-making.
  • Machine Learning (ML): A subset of AI that allows computer systems to learn from data without explicit programming, improving their performance over time.
  • Deep Learning: A subfield of machine learning using artificial neural networks with multiple layers to extract complex patterns from data.
  • Artificial Neural Networks (ANN): Computing systems inspired by biological neural networks, used for pattern recognition and classification.
  • Big Data: Large, complex data sets that are difficult to process using traditional data processing applications. Often described using the “V”s: volume, velocity, variety, veracity, and value.
  • Algorithm: A set of rules or instructions that a computer follows to perform a task or solve a problem.
  • Explainability: The ability of an AI system to provide a clear and understandable explanation of how it arrives at a decision or prediction.
  • Transparency: The state of being open and honest about how AI systems function, including the data they use and the logic they apply.
  • Bias: Systemic error in a data set or algorithm that can lead to unfair or discriminatory outcomes.
  • Data Visualization: The graphical representation of information and data. By using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data.
  • Digital Discretion: The concept that government officials and bureaucrats apply their own judgment when making decisions. This can be influenced by ICT (information communication technology) in the modern era.
  • Personalization-Privacy Paradox (PPP): The ethical tension between the personalization of advertising and the protection of consumer privacy.
  • Privacy Calculus Theory (PCT): A theory used to analyze the trade-off between perceived benefits and perceived risks of using technology, especially regarding privacy.
  • Cultural Proximity: The degree of cultural similarity or difference between groups, affecting how ideas and technologies are adopted.
  • TAM-DEF Framework: A framework that highlights six key AI public policy challenges: Transparency & audit, Accountability & Legal issues, Misuse protection, Ethics, Fairness & Equity, Digital Divide & Data Deficit.
  • DEEP-MAX Scorecard: A transparent rating system for AI systems, based on Diversity, Equity, Ethics, Privacy, Misuse protection, Audit and Transparency, Digital divide and Data deficit.
  • Blockchain: A distributed, decentralized, and immutable ledger used to record transactions and data securely.
  • Artificial General Intelligence (AGI): A theoretical form of AI that matches or exceeds human intelligence across a wide range of tasks, often referred to as “real” AI.
  • Industry 4.0 (I4.0): The ongoing automation of traditional manufacturing and industrial practices, using modern smart technology.
  • Unsupervised Learning: A type of machine learning where the algorithm learns from unlabeled data without human guidance, often used for pattern recognition.
  • High-Performance Data Analytics (HPDA): The combination of high-performance computing and data analytics to process large datasets.

Reference

Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., … & Williams, M. D. (2019). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management.
