<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Bohr. Iam.</journal-id>
<journal-title>BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Bohr. Iam.</abbrev-journal-title>
<issn pub-type="epub">2583-5521</issn>
<publisher>
<publisher-name>BOHR</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54646/bijiam.2024.20</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Case Study</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Analyzing the guiding principles of AI ethics: A framing theory perspective on the communication of ethical considerations in artificial intelligence (AI)</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Younas</surname> <given-names>Asifa</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff><institution>M. Phil-HRM, Superior University</institution>, <addr-line>Lahore</addr-line>, <country>Pakistan</country></aff>
<author-notes>
<corresp id="c001">&#x002A;Correspondence: Asifa Younas, <email>asifayounas12@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>24</day>
<month>09</month>
<year>2024</year>
</pub-date>
<volume>3</volume>
<issue>1</issue>
<fpage>16</fpage>
<lpage>24</lpage>
<history>
<date date-type="received">
<day>05</day>
<month>02</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>09</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Younas.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Younas</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/"><p>&#x00A9; The Author(s). 2024 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.</p></license>
</permissions>
<abstract>
<p>Various organizations have created AI ethics standards and protocols in an era of rapidly expanding AI, all to ensure ethical AI use for the benefit of society. However, the ethical issues raised by AI&#x2019;s real-world societal applications have generated scholarly debates. Through the prism of framing theory in media and communication, this study examines AI ethics principles from three significant organizations: Microsoft, NIST, and the AI HLEG of the European Commission. In this rapidly changing technical environment, the way institutions frame their AI principles makes close examination of institutional AI ethics communication essential.</p>
</abstract>
<kwd-group>
<kwd>Artificial intelligence</kwd>
<kwd>AI ethics</kwd>
<kwd>AI principles</kwd>
<kwd>framing theory</kwd>
<kwd>TRUST</kwd>
<kwd>AI framings</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="4"/>
<equation-count count="0"/>
<ref-count count="45"/>
<page-count count="9"/>
<word-count count="5981"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>1. Introduction</title>
<p>A new era marked by artificial intelligence&#x2019;s (AI) ubiquitous influence across many sectors has begun with AI technology&#x2019;s rapid growth. Many industries, including healthcare, aerospace, banking, and entertainment, have been affected by this technological transformation, which is sometimes referred to as the &#x201C;fourth industrial revolution.&#x201D; These industries are all trying to increase productivity and efficiency while cutting costs. In this sense, artificial intelligence (AI) describes computer programs that mimic human intelligence processes, matching or even surpassing human performance (<xref ref-type="bibr" rid="B1">1</xref>).</p>
<p>The application of AI technology is challenging, though. Biases from training data are known to be inherited by AI systems, which can have unforeseen repercussions and promote inequality in a variety of domains. Examples of this problem include instances of gender bias in research publishing and racial prejudice in healthcare projections (<xref ref-type="bibr" rid="B2">2</xref>). These biases have sparked questions regarding the reliability of AI systems and their opaque decision-making procedures, especially because sophisticated AI technologies like deep learning are still difficult for people to understand (<xref ref-type="bibr" rid="B3">3</xref>).</p>
<p>To ensure responsible use and shape the development of AI technology, it is imperative to define ethical rules and guidelines in light of these challenges. Notably, leading technology corporations have taken action to regulate their AI endeavors, such as Microsoft with its Responsible AI framework (<xref ref-type="bibr" rid="B4">4</xref>). Recognizing the strategic significance of AI for innovation, equity, and security, the US government has also joined the AI standards and regulatory space through the National Institute of Standards and Technology (NIST) (<xref ref-type="bibr" rid="B5">5</xref>). Furthermore, through its High-Level Expert Group on AI (AI HLEG), the European Union has been actively involved in creating ethical standards for AI, with an emphasis on an approach to AI ethics that is human-centric (<xref ref-type="bibr" rid="B6">6</xref>).</p>
<p>These many pioneering institutions&#x2019; conceptualization of these institutional ethical principles for AI technology provides insights into regulating AI&#x2019;s social and technological advancement (<xref ref-type="bibr" rid="B7">7</xref>). Understanding the guiding concepts behind AI development and deployment is essential to ensure that these technologies remain reliable, open, and consistent with human values as they become increasingly integrated into our daily lives (<xref ref-type="bibr" rid="B4">4</xref>).</p>
<p>The proliferation of ethics guidelines by multiple organizations has fractured the debate on AI ethics, making it hard to grasp the field as a whole and complicating the pursuit of equitable implementation (<xref ref-type="bibr" rid="B8">8</xref>). Many organizations, such as user groups, government agencies, and developers, have published AI ethics principles (<xref ref-type="bibr" rid="B9">9</xref>). As a result, there are many similarities and discrepancies among their efforts to create practical rules for the benefit of society (<xref ref-type="bibr" rid="B10">10</xref>). A broad agreement on normative frameworks and standard norms for AI ethics is still needed (<xref ref-type="bibr" rid="B11">11</xref>). The central question is how to define &#x201C;common good&#x201D; and &#x201C;social benefit&#x201D; in an increasingly globalized and digitalized world (<xref ref-type="bibr" rid="B12">12</xref>). This calls for clear definitions of justice, human rights, and widely acknowledged values, as well as ways to identify potential risks in AI applications that may support or contradict these values in various social and economic contexts (<xref ref-type="bibr" rid="B4">4</xref>).</p>
<p>This research is important because it offers a semi-systematic overview of governance, legislation, and ethics in AI and sheds light on how the area of AI ethics is developing (<xref ref-type="bibr" rid="B13">13</xref>). It tackles ethical issues and conflicts in formulating and disseminating ethical AI principles by classifying AI guidelines and pointing out institutional overlaps and omissions (<xref ref-type="bibr" rid="B14">14</xref>). As AI technology continues to advance in societal use cases, research helps to bring hidden tensions, fresh viewpoints, and tech-business social agendas to the fore (<xref ref-type="bibr" rid="B15">15</xref>). This promotes conflict resolution and progress. By offering insightful information for regulatory strategies and assurance services, this study adds to the continuing conversation on AI ethics (<xref ref-type="bibr" rid="B16">16</xref>). It guarantees stakeholders&#x2019; comprehension of AI technology&#x2019;s performance, risk, and compliance (<xref ref-type="bibr" rid="B17">17</xref>). Additionally, by using framing theory to study institutional AI ethics principles and norms, it highlights the crucial roles that trust and understanding play in sophisticated AI technologies and their communication (<xref ref-type="bibr" rid="B18">18</xref>).</p>
<sec id="S1.SS1">
<title>1.1 Literature review</title>
<sec id="S1.SS1.SSS1">
<title>1.1.1 Framing theory literature: a viewpoint for research and instrument for communicating AI ethics</title>
<p>One of the first academics to define the term &#x201C;frame&#x201D; was (<xref ref-type="bibr" rid="B19">19</xref>), who described frames as &#x201C;schemata of interpretation&#x201D; for understanding what has happened (<xref ref-type="bibr" rid="B20">20</xref>). Frames assist in bringing seemingly unrelated occurrences into coherent wholes. The intricacy of framing was emphasized by pointing out that there might be frames inside frames (<xref ref-type="bibr" rid="B21">21</xref>). According to (<xref ref-type="bibr" rid="B3">3</xref>), framing is the process of choosing which parts of reality to highlight in a communication to support particular problem definitions, causal interpretations, moral assessments, or treatment recommendations (<xref ref-type="bibr" rid="B22">22</xref>).</p>
<p>The conceptualization and communication of climate change in Swedish agriculture were examined by (<xref ref-type="bibr" rid="B11">11</xref>), who emphasized the discrepancy between farmers&#x2019; perceptions and media portrayals of the issue. (<xref ref-type="bibr" rid="B20">20</xref>) used framing analysis to examine how the news media covered the IPCC Fifth Assessment Report on climate change and to identify dominant frames (<xref ref-type="bibr" rid="B23">23</xref>).</p>
<p>Research on framing in political science and sociology looks at the words, pictures, sentences, and ways that news items are presented, as well as the processes that shape them (<xref ref-type="bibr" rid="B24">24</xref>). Diverse theoretical and methodological approaches to framing have been offered by many scholars (<xref ref-type="bibr" rid="B25">25</xref>; Matthes, 2009).</p>
<p>While framing and agenda-setting are similar, framing concentrates on the substance of issues rather than particular subjects (<xref ref-type="bibr" rid="B10">10</xref>). Discourse analysis and the idea of the explanatory theme are connected to framing (<xref ref-type="bibr" rid="B20">20</xref>). Four framing processes were distinguished by (<xref ref-type="bibr" rid="B26">26</xref>):</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>Frame creation</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Frame placement</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>The consequences of frames at the individual level</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>The audience role of journalists</p>
</list-item>
</list>
</sec>
<sec id="S1.SS1.SSS2">
<title>1.1.2 TRUST framings serve as the study&#x2019;s academic framework</title>
<p>Transparent and understandable AI systems are required to solve the &#x201C;black box problem&#x201D; in AI (<xref ref-type="bibr" rid="B4">4</xref>). To reduce dangers and improve confidence in AI decision-making processes, academics and organizations are developing technological and moral regulation strategies (<xref ref-type="bibr" rid="B9">9</xref>).</p>
<p>AI development and application heavily depend on the public dissemination of AI principles and guidelines (<xref ref-type="bibr" rid="B25">25</xref>). These published AI ethics principles do, however, exhibit notable distinctions, similarities, and conflicts (<xref ref-type="bibr" rid="B9">9</xref>). This study&#x2019;s goal is to identify key TRUST framings in texts containing AI principles and guidelines (<xref ref-type="bibr" rid="B27">27</xref>).</p>
<p><bold>Transparent and Comprehensible AI Framing:</bold> Covers AI principles and guidelines texts that address interpretability, transparency, comprehensibility, and explainable AI (The Royal Society, 2019; <xref ref-type="bibr" rid="B28">28</xref>).</p>
<p><bold>Reliable and Safe AI Framing:</bold> Covers safety management procedures, public reporting of issues and future goals, and reliability (<xref ref-type="bibr" rid="B4">4</xref>).</p>
<p><bold>User Control and Autonomy Framing:</bold> Focuses on human augmentation, user control, autonomy, and consent (<xref ref-type="bibr" rid="B4">4</xref>; Endsley, 2018).</p>
<p><bold>Secure and Privacy AI Framing:</bold> Covers data security, privacy, and the requirement for secure AI systems (<xref ref-type="bibr" rid="B29">29</xref>).</p>
<p><bold>The Other Framings:</bold> Cover changing narratives surrounding the complexity, risks, and issues of artificial intelligence, such as ethical conundrums, human resources, employment, rights, accessibility, fairness, non-discrimination, justice, inclusion, diversity, solidarity, accountability, whistleblowers, and AI audits, as well as the hidden costs of AI and responsible research funding (<xref ref-type="bibr" rid="B30">30</xref>). These scholarly framings provide a basis for comprehending the various facets of communication on AI ethics (<xref ref-type="bibr" rid="B31">31</xref>).</p>
</sec>
<sec id="S1.SS1.SSS3">
<title>1.1.3 Research questions</title>
<p>RQ1: What framings are included in the text of the selected organizations&#x2019; AI principles and guidelines?</p>
<p>RQ2: How closely do the framings that these institutions use correspond to or resemble the TRUST framings explained in this study? These framings comprise Transparent and Comprehensible AI, Reliable and Safe AI, User Control and Autonomy, Secure and Privacy AI, and The Other Framings.</p>
</sec>
</sec>
</sec>
<sec id="S2">
<title>2. Methodology</title>
<p>The goal of the study is to examine AI ethics communication in the context of the AI principles and guidelines of leading AI organizations (Microsoft, NIST, and AI-HLEG, in particular) and to distinguish different framings in their communication about AI ethics. These framings are identified using the TRUST framings developed from the AI literature review in the preceding section. The selection of these AI organizations for analysis was done with great care to reduce the possibility of author bias. Other prominent AI organizations were not included in the analysis because of unclear institutional approaches to AI research, innovation, and self-regulation; ongoing ethical disputes recently covered in the media (such as Google&#x2019;s Project Maven); or past ties to the author. The processes for gathering textual data and the researcher&#x2019;s approach to locating frames in the AI messages of the selected institutions are described in the section that follows.</p>
<p>Phase 1: The researcher gathered the text data from the open-access AI principles and standards published on the websites of the chosen three institutions. <xref ref-type="table" rid="T1">Table 1</xref> contains the source links for this text data.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Artificial Intelligence (AI) principles data for textual analysis as downloaded in Dec 2021.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">AI Principles</td>
<td valign="top" align="left">Microsoft</td>
<td valign="top" align="left">AI-HLEG</td>
<td valign="top" align="left">NIST</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Published Document Source Links</td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="https://www.microsoft.com/en-us/ai/principles-and-approach">https://www.microsoft.com/en-us/ai/principles-and-approach</ext-link></td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf">https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf</ext-link></td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="https://www.nist.gov/system/files/documents/2020/08/17/NIST%20Explainable%20AI%20Draft%20NISTIR8312%20%281%29.pdf">https://www.nist.gov/system/files/documents/2020/08/17/NIST%20Explainable%20AI%20Draft%20NISTIR8312%20%281%29.pdf</ext-link></td>
</tr>
<tr>
<td valign="top" align="left">Active Web Links</td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="https://www.microsoft.com/en-us/ai/responsible-ai">https://www.microsoft.com/en-us/ai/responsible-ai</ext-link> <ext-link ext-link-type="uri" xlink:href="https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai?view=azureml-api-2">https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai?view=azureml-api-2</ext-link></td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="https://digital-strategy.ec.europa.eu/en/library/communication-building-trust-human-centric-artificial-intelligence">https://digital-strategy.ec.europa.eu/en/library/communication-building-trust-human-centric-artificial-intelligence</ext-link></td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="https://www.nist.gov/artificial-intelligence">https://www.nist.gov/artificial-intelligence</ext-link></td>
</tr>
<tr>
<td valign="top" align="left">Document Length</td>
<td valign="top" align="left">13 full-length webpages with text on AI approach (7 video transcripts and 6 additional AI guideline blog entries) and 1 training module with 9 units</td>
<td valign="top" align="left">36 pages (additionally 1 page mentioning High-Level Expert Group members) of Deliverable 1 (Ethics Guidelines for Trustworthy AI) and web links to Deliverables 2, 3, 4.</td>
<td valign="top" align="left">24 pages (August 2020) Draft NISTIR 8312 and website updates on AI principles.</td>
</tr>
</tbody>
</table></table-wrap>
<sec id="S2.SS1">
<title>2.1 Data sources</title>
<p>Phase 2: As Matthes (2009) noted in a systematic examination of media framing studies published in prestigious communication journals, frame analysis is an essential technique for closely examining the selection and prominence of particular components of a problem (Guenther, 2023). The framings within the textual data were manually identified using the (<xref ref-type="bibr" rid="B3">3</xref>) concept of framing and the academic sources cited in the literature review. The framings included in the AI principles language of the chosen institutions were identified using inductive and deductive methods (<xref ref-type="bibr" rid="B7">7</xref>). Based on the qualitative paradigm of frame analysis, which holds that frames are visible through particular words, this study explores framings using direct quotations taken from the selected AI pioneers&#x2019; recently developed and published AI principles and guidelines, making connections with different aspects of the current scholarly debate on AI ethics (<xref ref-type="bibr" rid="B32">32</xref>). During the textual study of Microsoft, NIST, and AI-HLEG&#x2019;s AI principles and guidelines, the identification of frames was guided by the systematic processes described by (<xref ref-type="bibr" rid="B22">22</xref>) in &#x2018;Frames in Communication&#x2019;.</p>
<p>Describing the process for identifying particular framings is crucial before presenting the analysis and findings (<xref ref-type="bibr" rid="B33">33</xref>). &#x201C;When researchers employ computer programs for analyzing large volumes of text, they must identify the universe of words that signal the presence of a frame,&#x201D; according to guidelines (<xref ref-type="bibr" rid="B34">34</xref>). The academic framing literature review phase of this study identified theme words indicative of the framings in the sample text on AI principles and guidelines. It is important to remember that identifying &#x201C;frames in communication&#x201D; entails being aware of the important points highlighted in a speech act. Although the methodology lacks uniform measurement standards, persuasive communication research adheres to four essential steps: (1) identifying a particular problem, occasion, or person, since these components define communication frames; (2) isolating particular attitudes to understand how frames shape public opinion (<xref ref-type="bibr" rid="B32">32</xref>); (3) inductively determining an issue&#x2019;s initial set of frames to create a coding scheme; and (4) using the identified initial set of frames to select the content sources for analysis.</p>
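<p>The keyword-based approach quoted above (identifying the &#x201C;universe of words&#x201D; that signals a frame) can be sketched in code. The following Python fragment is only an illustrative sketch: the keyword lists are hypothetical examples, not this study&#x2019;s actual coding scheme, which was applied manually.</p>

```python
# Illustrative sketch of keyword-based frame detection: each framing is
# signaled by a "universe of words". Keyword lists are hypothetical examples.
TRUST_FRAMINGS = {
    "Transparent and Comprehensible AI": [
        "transparency", "explainability", "interpretability", "comprehensibility",
    ],
    "Reliable and Safe AI": ["reliability", "safety", "oversight"],
    "User Control and Autonomy": ["autonomy", "consent", "user control"],
    "Secure and Privacy AI": ["security", "privacy", "data protection"],
}

def identify_framings(text: str) -> dict:
    """Return each framing together with the signal words found in the text."""
    lowered = text.lower()
    hits = {}
    for framing, keywords in TRUST_FRAMINGS.items():
        found = [kw for kw in keywords if kw in lowered]
        if found:
            hits[framing] = found
    return hits

sample = ("We believe AI systems should be designed for transparency and "
          "explainability, with human oversight and strong data protection.")
print(identify_framings(sample))
```

<p>A coding scheme built this way would still require the manual validation steps described above, since bare keyword matching cannot distinguish, for example, a principle endorsing transparency from one merely mentioning it.</p>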
<p>All of the above steps for identifying framings were followed, except the second, which examines how frames influence public opinion, given the study&#x2019;s goals and scope (Mhlanga, 2020). Previous sections identified and explained specific topics, pertinent events, examples, AI actors, and the chosen sample institutions. The academic framing literature review identified and elaborated on an initial set of framings corresponding to the concerns covered. Regarding the last step (<xref ref-type="bibr" rid="B32">32</xref>), the study&#x2019;s introductory part detailed the textual selection of AI principles and guidelines taken from three institutional sources for analysis.</p>
</sec>
</sec>
<sec id="S3">
<title>3. Results and findings</title>
<p>As already mentioned, every institution&#x2019;s AI principles should encourage risk reduction and problem-solving related to this new technology. This insight relates to Goffman&#x2019;s person-role formula, which states that an AI actor&#x2019;s social role is closely related to its type. The framings of the AI principles and guidelines are soft (because there is no legal obligation) but strong (as they take into account each position&#x2019;s and societal role&#x2019;s priorities) (<xref ref-type="bibr" rid="B35">35</xref>). The following two research questions are addressed by the AI ethics principles and guidelines text analysis:</p>
<list list-type="simple">
<list-item><p><bold>RQ1</bold>: What framings can be found in the AI principles and guidelines text of the chosen institutions?</p>
</list-item>
</list>
<p>The High-Level Expert Group on Artificial Intelligence (AI HLEG) was established by the European Commission to foster trust across the AI system&#x2019;s entire life cycle (from development to deployment, and from planning and communication to policy and investment recommendations). The group produced a comprehensive guiding document that is currently influencing Europe&#x2019;s overall AI approach to empower, benefit, and safeguard European citizens (<xref ref-type="bibr" rid="B18">18</xref>). In addition to the guidelines, referred to as the &#x201C;Ethics Guidelines for Trustworthy AI,&#x201D; the expert group produced three other deliverables: the Policy and Investment Recommendations for Trustworthy AI, the Assessment List for Trustworthy AI (ALTAI), and the Sectoral Considerations on the Policy and Investment Recommendations. The AI ethics guidelines serve as the cornerstone upon which the more comprehensive texts are constructed: following the foundational chapter on the Ethics Guidelines, each deliverable above receives a full chapter treatment.</p>
<list list-type="simple">
<list-item><p><bold>RQ2:</bold> Which of the institutional framings are the same as or similar to TRUST framings explained in this study? (Where TRUST Framings indicate Transparent and Comprehensible AI Framing, Reliable and Safe AI Framing, User Control and Autonomy Framing, Secure and Privacy AI Framing, and The Other Framings).</p>
</list-item>
</list>
<p>The principles that underpin the guidelines drafted by the AI high-level expert group are rooted in the work of the European Group on Ethics in Science and New Technologies and the Fundamental Rights Agency (<xref ref-type="bibr" rid="B36">36</xref>). Trustworthy AI rests on three components: compliance with legal requirements, adherence to ethical principles, and assurance of &#x201C;robustness&#x201D; (specifically, &#x201C;technical robustness&#x201D; combined with safety measures for humans, animals, and the environment in a variety of settings, as well as fallback plans), all drawn from AI HLEG&#x2019;s EU documents and assessment list for trustworthy AI.</p>
<p>According to (<xref ref-type="bibr" rid="B9">9</xref>), the standards specify essential requirements that are not legally binding. Although the seven requirements impose no new legal duties, they offer developers and stakeholders thorough guidance and persuade them to comply (<xref ref-type="bibr" rid="B6">6</xref>). Developing and implementing AI systems that meet the seven specified requirements of AI HLEG would create reliable AI systems. The guidelines state that AI applications will be considered trustworthy if they respect the following:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Human agency and oversight.</p>
</list-item>
<list-item>
<label>2.</label>
<p>Technical robustness and safety.</p>
</list-item>
<list-item>
<label>3.</label>
<p>Privacy and data governance.</p>
</list-item>
<list-item>
<label>4.</label>
<p>Transparency.</p>
</list-item>
<list-item>
<label>5.</label>
<p>Diversity, non-discrimination, and fairness.</p>
</list-item>
<list-item>
<label>6.</label>
<p>Societal and environmental well-being.</p>
</list-item>
<list-item>
<label>7.</label>
<p>Accountability.</p>
</list-item>
</list>
<p>The guidelines&#x2019; text and their communication to the European Parliament (<xref ref-type="bibr" rid="B18">18</xref>) relate to this study&#x2019;s Transparent and Comprehensible AI framing, Reliable and Safe AI framing, User Control and Autonomy framing, Secure and Privacy AI framing, and The Other Framings (diversity, non-discrimination and fairness, accountability) (<xref ref-type="bibr" rid="B26">26</xref>). <xref ref-type="table" rid="T2">Table 2</xref> provides some sample quotes from the chosen AI principles and guidelines documents linked to the TRUST framings of this study (<xref ref-type="bibr" rid="B37">37</xref>). Refer to Appendix A, <xref ref-type="table" rid="T3">Tables 3</xref>, <xref ref-type="table" rid="T4">4</xref> in the ensuing sections for further AI ethics language framing examples from Microsoft, the EU&#x2019;s AI HLEG, and NIST&#x2019;s AI principles and guidelines (<xref ref-type="bibr" rid="B38">38</xref>).</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Examples of Identified Framings in the Institutional AI Ethics Principles and Guidelines Text data (EU&#x2019;s AI HLEG, Microsoft, NIST).</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Framing</td>
<td valign="top" align="left">Identifying Word/Phrase</td>
<td valign="top" align="left">Examples</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><italic>Transparent and Comprehensible AI Framing</italic></td>
<td valign="top" align="left">Transparency, Explainability, Interpretability, Comprehensibility</td>
<td valign="top" align="left">&#x201C;Per-decision explanations provide a separate 370 explanation for each decision&#x2026;Self-explainable models of machine learning systems themselves can be used as global explanations (since the models explain themselves). Likewise, many global explanations (including self-explainable models) can also be used to generate per-decision explanations.&#x201D; (NISTIR 8312, 2020, p.8)</td>
</tr>
<tr>
<td valign="top" align="left"><italic>Reliable and Safe AI Framing</italic></td>
<td valign="top" align="left">Reliability, Management Practices directed toward Safety, Public reports of Problems/Failures/Misses<break/> /Future plans, Oversight Boards</td>
<td valign="top" align="left">&#x201C;ORA [Office of Responsible AI] puts Microsoft principles into practice by setting the company- wide rules for responsible AI through the implementation of our governance and public policy work. It has four key functions.&#x201D;<break/> &#x201C;Aether [AI, Ethics and Effects in Engineering and</td>
</tr>
</tbody>
</table></table-wrap>
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>Identified Framings in the Institutional AI Ethics Principles and Guidelines Text Data.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Microsoft</td>
<td valign="top" align="left">NIST</td>
<td valign="top" align="left">AI-HLEG</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Transparent and Comprehensible AI Framing</td>
<td valign="top" align="left">Transparent and Comprehensible AI Framing (Explainability)</td>
<td valign="top" align="left">Transparent and Comprehensible AI Framing (Explicability)</td>
</tr>
<tr>
<td valign="top" align="left">Reliable and Safe AI Framing</td>
<td valign="top" align="left">Reliable and Safe AI Framing</td>
<td valign="top" align="left">Reliable and Safe AI Framing</td>
</tr>
<tr>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">User Control and Autonomy Framing</td>
</tr>
<tr>
<td valign="top" align="left">Secure and Privacy AI Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">Secure and Privacy AI Framing</td>
</tr>
<tr>
<td valign="top" align="left">The Other Framings (Fairness, Inclusiveness, Accountability)</td>
<td valign="top" align="left">The Other Framings (Accountability)<break/> Knowledge Limits Principle</td>
<td valign="top" align="left">The Other Framings (Fairness; Accountability in societal and environmental situations; Inclusivity for marginalized or historically underprivileged populations)</td>
</tr>
<tr>
<td valign="top" align="left">Avoid being ableist when creating, refining, or evaluating AI systems.</td>
<td valign="top" align="left">Prejudice, resiliency, and unjust, hurtful, or misleading results are avoided.</td>
<td valign="top" align="left">Promoting well-being, reducing harm, evaluating threats to democracy, the human condition, the rule of law, and distributive justice principles.</td>
</tr>
</tbody>
</table></table-wrap>
<table-wrap position="float" id="T4">
<label>TABLE 4</label>
<caption><p>Examples of identified framings in the institutional AI ethics principles and guidelines text.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">TRUST Framings</td>
<td valign="top" align="left">NIST</td>
<td valign="top" align="left">Microsoft</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Transparent and Comprehensible AI Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Transparency</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">&#x201C;At Microsoft, we&#x2019;ve recognized six principles that we believe should guide AI development and use &#x2014; fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability&#x201D; (microsoft.com/en-us/ai/)</td>
</tr>
<tr>
<td valign="top" align="left">Explainability</td>
<td valign="top" align="left">&#x201D;As the fundamental qualities of explainable AI systems, we provide four key principles for explainable artificial intelligence (AI). These guidelines were developed with the diverse fields of computer science, engineering, and psychology in mind while discussing explainable AI. We realize the need for various explanations to meet the specific needs of different users, realizing that no one explanation fits all circumstances. We also present an overview of explainable AI ideas and identify five explanations&#x201D; (p.i).</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Interpretability</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Comprehensibility</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Reliable and Safe AI Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">&#x201C;It is important to recognize that new intelligent technology has advantages but also unexpected and unintended consequences as it develops and spreads throughout society. Some of these effects are harmful and have significant ethical ramifications. As a result, we must proactively foresee and mitigate these unexpected repercussions resulting from the technology we bring into the world using deliberate actions.&#x201D;</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left"/><td valign="top" align="left">&#x201C;The establishment of guiding principles for responsible AI requires strategic planning and continuous oversight. Aether, ORA, and RAISE lead a concerted project to create responsible AI throughout Microsoft. These three organizations&#x2014;Aether, ORA, and RAISE&#x2014;work closely with our teams to ensure Microsoft&#x2019;s responsible AI concepts are incorporated into their day-to-day operations.&#x201D; (from Microsoft.com, on the company-wide adoption of responsible AI)</td>
</tr>
<tr>
<td valign="top" align="left">Reliability</td>
<td valign="top" align="left">The &#x201C;Knowledge Limits&#x201D; notion, as explained on page 4, suggests that systems can identify circumstances in which they are assigned tasks that they were not designed or permitted to carry out or in which their replies are unreliable.</td>
<td valign="top" align="left">&#x201C;AI systems must operate consistently, safely, and reliably in expected and unexpected circumstances to build confidence.</td>
</tr>
<tr>
<td valign="top" align="left">Management practices directed towards Safety</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Public Reports of Problems/Failures/Misses/Future Plans</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">Within 24 hours of user interactions, Tay, an AI chatbot, turned from benign software into a platform for hate speech. This emphasizes the necessity of designing AI systems with the human aspect in mind and preparing for novel attacks on learning datasets, especially for AI systems with the capacity for autonomous learning.</td>
</tr>
<tr>
<td valign="top" align="left">Oversight Boards</td>
<td valign="top" align="left">&#x201C;The National AI Initiative Office and the President will receive advice on AI-related issues from the inaugural National Artificial Intelligence Advisory Committee (NAIAC) members, which consists of 27 experts. The first public webcast meeting of the NAIAC is set for May 4, 2022.</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">User Control and Autonomy Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">AI systems with autonomous learning capabilities were equipped with sophisticated content filters and human supervisors in reaction to new assaults that affected learning datasets and to stop the Tay problem from happening again.</td>
</tr>
<tr>
<td valign="top" align="left">Autonomy</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">User Control</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Augmentation</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Human Understanding</td>
<td valign="top" align="left">Modeling issues arise from various elements influencing meaningful interactions between AI and humans. Computational and human aspects must be considered by systems that provide meaningful explanations. Additionally, explanations may need to be modified over time as users&#x2019; judgments of meaningfulness shift with experience.</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Secure and Privacy Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">AI will undoubtedly affect decision-making, data security, privacy, and worker skills; therefore, it is important to think about how to make use of its benefits while protecting privacy. (Unit 3 of the &#x201C;Identify guiding principles for responsible AI&#x201D; module, Section: Societal implications of AI)</td>
</tr>
<tr>
<td valign="top" align="left">Security and Safety (w.r.t. data collection, processing, access, sharing, consent, and data subject to AI decision-making)</td>
<td valign="top" align="left">This process of identifying and recognizing knowledge boundaries protects against making decisions that might not be appropriate.</td>
<td valign="top" align="left"/></tr>
<tr>
<td valign="top" align="left">The Other Framings</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Ethical Dilemma and Moral Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Human Resource, Employment, Rights and Accessibility Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">Fairness, Non-discrimination, and Justice Framing</td>
<td valign="top" align="left">&#x201D; The Knowledge Limits Principle can increase trust in a system by preventing misleading, dangerous, or Unjust decisions or outputs.&#x201D; (<xref ref-type="bibr" rid="B43">43</xref>)</td>
<td valign="top" align="left">&#x201C;Microsoft worked with a significant financial lending organization to create a risk assessment system for loan applications. When the system was audited, it turned out that even though it only authorized low-risk loans, all of the accepted borrowers were men. Before the system was implemented, this transparency allowed us to identify and address the historical prejudice among loan officers in favor of male applicants.</td>
</tr>
<tr>
<td valign="top" align="left">Accountability and AI Audits Framing</td>
<td valign="top" align="left">The first step in combating prejudice is for people to become aware of the limitations and repercussions of AI recommendations and forecasts. Ultimately, people must supplement AI results with sound human judgment and take ownership of important decisions affecting others.</td>
<td valign="top" align="left">&#x201C;The first step in combating prejudice is for people to understand the limitations and ramifications of AI recommendations and forecasts. Ultimately, people must supplement AI conclusions with sound human judgment and take accountability for important decisions that affect others.</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left"/><td valign="top" align="left">Microsoft and a well-known financial lending organization worked together to develop a risk assessment system for loan applications. We used the customer&#x2019;s data to train a well-known industry model. We discovered a prejudice during our system audit, indicating a past predilection among loan officers whereby all authorized loans were given to male applicants. Through this analysis, we addressed the bias before system deployment.</td>
</tr>
<tr>
<td valign="top" align="left">Inclusion, Diversity, Solidarity, Protection of Cultural Differences and Whistleblowers Framings</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
<tr>
<td valign="top" align="left">AI Education, Science policy, and Public Awareness Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">&#x201C;As we learn more and work with consumers, other digital businesses, researchers, civic society, and other stakeholders, we anticipate these principles will evolve and change. This module&#x2019;s summary and resources section will provide an overview of these concepts.</td>
</tr>
<tr>
<td valign="top" align="left">Responsible Research funding, Hidden AI Costs, Field Specific Deliberations Framing</td>
<td valign="top" align="left">NA</td>
<td valign="top" align="left">NA</td>
</tr>
</tbody>
</table></table-wrap>
<p>Transparent and Comprehensible AI Framing: Because advanced artificial intelligence (AI) systems in social settings can be complicated, NIST, a federal non-regulatory agency under the U.S. Department of Commerce whose goal is to foster innovation and industrial competitiveness in the country, places a strong emphasis on &#x201C;transparency&#x201D; in its AI principles (<xref ref-type="bibr" rid="B39">39</xref>). Three of the four NIST AI principles rest on the transparency of AI systems and their understandability by the human recipients of the information (<xref ref-type="bibr" rid="B40">40</xref>). NIST&#x2019;s AI principles, which elaborate on the kinds, meanings, and precision of explanations, support The Royal Society&#x2019;s (2019) assertion that there are several explainability approaches, which are covered under the Transparent and Comprehensible AI Framing in this study&#x2019;s literature review. The NIST principles reaffirm that the nature and specifics of an explanation differ based on the application in question and the kind of AI technique created and implemented in a social context (<xref ref-type="bibr" rid="B41">41</xref>). The text under AI principles in Microsoft&#x2019;s published case studies and video transcripts covers three AI framings: Secure and Privacy (words: Privacy and Security); Fairness, Inclusiveness, and Accountability; and Transparent and Comprehensible (words: Transparency and Explainability) (<xref ref-type="bibr" rid="B42">42</xref>). These are discussed in the academic frames section of this study&#x2019;s literature review (for data examples, refer to <xref ref-type="table" rid="T3">Tables 3</xref>, <xref ref-type="table" rid="T4">4</xref>).</p>
<sec id="S3.SS1">
<title>3.1 Safe and dependable AI framing</title>
<p>AI ethical guidelines published by an organization are considered soft law or non-legislative policy tools with persuasive language but no legal force behind them (<xref ref-type="bibr" rid="B9">9</xref>). Through its three offices/committees&#x2014;the Office of Responsible AI (ORA), the Aether Committee (which stands for AI, Ethics, and Effects in Engineering and Research), and the Responsible AI Strategy in Engineering (RAISE)&#x2014;Microsoft operationalizes its AI principles, which it has dubbed &#x201C;Responsible AI.&#x201D; While the Aether Committee advises Microsoft&#x2019;s senior leadership on responsible AI issues, technologies, processes, and best practices, RAISE is an initiative and engineering team designed to facilitate the implementation of Microsoft&#x2019;s responsible AI rules and processes across its engineering groups (<xref ref-type="bibr" rid="B44">44</xref>). In summary, committees that advise Microsoft&#x2019;s leadership, engineering, and all other teams inside the organization provide direction as the company implements its responsible AI principles; the six key AI principles thus come first in Microsoft&#x2019;s text.</p>
</sec>
</sec>
<sec id="S4">
<title>4. Discussions</title>
<p>The debate highlights the significance of word choices and framing within AI principles and standards when examined through the prism of framing theory. The results of this study support the notions put forward by (<xref ref-type="bibr" rid="B19">19</xref>) and (<xref ref-type="bibr" rid="B3">3</xref>) on the existence of frames within frames by showing that these frames might function as &#x201C;signs of priorities&#x201D; within these documents. For instance, Microsoft prioritizes some framings according to partner needs. Still, it withholds the weight given to these framings across different industries, creating a lack of transparency in deciding how certain settings will turn out. By contrast, the approach taken by the European Union, as described in the AI ethics document by (<xref ref-type="bibr" rid="B18">18</xref>), treats all framings equally. The research also emphasizes how persuasive these documents are, despite not having legal force behind them, and how they add to the conversation about global AI ethics, governance, and legislation [(<xref ref-type="bibr" rid="B9">9</xref>), (<xref ref-type="bibr" rid="B18">18</xref>)].</p>
<p>The conversation emphasizes how international AI stakeholders must come together to create a single database of ethical norms and principles unique to AI. According to (<xref ref-type="bibr" rid="B13">13</xref>), this convergence is necessary to handle the difficulties and possible conflicts that may occur when giving priority to particular AI principles, like fairness. As it prepares the way for the creation of formal AI norms and laws for various societal scenarios, convergence in the framing of AI ethics principles is essential for building faith in the technology&#x2019;s transformative potential (<xref ref-type="bibr" rid="B16">16</xref>). This discussion emphasizes the importance of framing theory in understanding how AI ethical discourse impacts our future and the necessity for convergence to protect the common good in the setting of a global digital society.</p>
<p>This study, which focused on pioneering organizations like the European Commission and NIST in developing AI principles and standards, was confined to AI ethics draft texts available until December 2021. However, actors from many sectors&#x2014;including enterprises, academic institutions, national and international organizations, and more&#x2014;are involved in the quickly changing field of artificial intelligence and are working on reports and frameworks related to AI ethics (<xref ref-type="bibr" rid="B45">45</xref>). Future studies should, therefore, take into account the dynamic field of AI ethical principles and delve further into the implications of these frames at the personal level (<xref ref-type="bibr" rid="B45">45</xref>). They should consider the difficulties that come with putting these ideals into reality and the diversity of values that exists among various socioeconomic classes and geographic regions. The three components of this research framework&#x2014;developing AI ethics principles, applying them in particular contexts, and examining their effects on individuals and society as a whole&#x2014;can greatly support moral behavior and just outcomes (<xref ref-type="bibr" rid="B14">14</xref>).</p>
</sec>
<sec id="S5" sec-type="conclusion">
<title>5. Conclusion</title>
<p>To sum up, this research explores the quickly changing field of AI ethics standards and principles, concentrating on trailblazing organizations like NIST and the European Commission. The study&#x2019;s limitations, which only included draft texts accessible through December 2021, draw attention to the necessity for continued research in this rapidly developing sector. The significance of examining developing AI ethics frameworks is highlighted by the spread of AI technology and its interactions with diverse industries and societies. Future ethical studies in AI should consider the varied values found in various social groups and geographic areas, in addition to monitoring modifications to guiding principles and guidelines and investigating their consequences at the individual level.</p>
<p>Furthermore, since these are only the first steps, it is crucial to address the difficulties that come up when putting AI ethics concepts into practice. The present study underscores the significance of a thorough research methodology that encompasses three fundamental domains:</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>Devising ethical guidelines for AI</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Executing them in particular situations or settings</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Examining their influence on individuals and society as a whole</p>
</list-item>
</list>
<p>In an AI environment that is always evolving, such research can substantially contribute to moral behavior and the fair application of AI ethics concepts.</p>
<p>In the end, as AI technology continues to change society, it will be vital for everyone to work together to create, modify, and apply AI ethics principles to make sure that AI upholds ethical standards, advances justice, and respects a variety of values.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rich</surname> <given-names>E.</given-names></name></person-group> <source><italic>Artificial intelligence.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>McGraw-Hill, Inc</publisher-name> (<year>1983</year>).</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Obermeyer</surname> <given-names>Z</given-names></name> <name><surname>Mullainathan</surname> <given-names>S</given-names></name></person-group>. <article-title>Dissecting racial bias in an algorithm that guides health decisions for 70 million people.</article-title> <source><italic>Paper presented at the Proceedings of the conference on fairness, accountability, and transparency.</italic></source> <publisher-loc>New York, NY</publisher-loc> (<year>2019</year>).</citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Entman</surname> <given-names>RM</given-names></name></person-group>. <article-title>Framing: Toward clarification of a fractured paradigm.</article-title> <source><italic>J Commun.</italic></source> (<year>1993</year>) <volume>43</volume>:<fpage>51</fpage>&#x2013;<lpage>8</lpage>.</citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nagar</surname> <given-names>N.</given-names></name></person-group> <source><italic>Framing TRUST in Artificial Intelligence (AI) Ethics Communication: Analysis of AI Ethics Guiding Principles through the Lens of Framing Theory.</italic></source> <publisher-loc>Rochester</publisher-loc>: <publisher-name>Rochester Institute of Technology</publisher-name> (<year>2022</year>).</citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sivan-Sevilla</surname> <given-names>I</given-names></name></person-group>. <article-title>Complementaries and contradictions: National security and privacy risks in US federal policy, 1968&#x2013;2018.</article-title> <source><italic>Policy Internet.</italic></source> (<year>2019</year>) <volume>11</volume>:<fpage>172</fpage>&#x2013;<lpage>214</lpage>.</citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parviala</surname> <given-names>T.</given-names></name></person-group> <source><italic>EU Entering the Era of AI: A qualitative Text analysis on the European Union&#x2019;s Policy on Artificial intelligence.</italic></source> <publisher-loc>Brussels</publisher-loc>: <publisher-name>European Commission</publisher-name> (<year>2019</year>).</citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>de Greeff</surname> <given-names>J</given-names></name> <name><surname>de Boer</surname> <given-names>MH</given-names></name> <name><surname>Hillerstr&#x00F6;m</surname> <given-names>FH</given-names></name> <name><surname>Bomhof</surname> <given-names>F</given-names></name> <name><surname>Jorritsma</surname> <given-names>W</given-names></name> <name><surname>Neerincx</surname> <given-names>MA</given-names></name></person-group>. <article-title>The FATE System: FAir, Transparent and Explainable Decision Making.</article-title> <source><italic>Paper presented at the AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering.</italic></source> <publisher-loc>New York, NY</publisher-loc> (<year>2021</year>).</citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarwar</surname> <given-names>H</given-names></name> <name><surname>Ishaq</surname> <given-names>MI</given-names></name> <name><surname>Amin</surname> <given-names>A</given-names></name> <name><surname>Ahmed</surname> <given-names>R</given-names></name></person-group>. <article-title>Ethical leadership, work engagement, employees&#x2019; well-being, and performance: a cross-cultural comparison.</article-title> <source><italic>J Sustain Tour.</italic></source> (<year>2020</year>) <volume>28</volume>:<fpage>2008</fpage>&#x2013;<lpage>26</lpage>.</citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jobin</surname> <given-names>A</given-names></name> <name><surname>Ienca</surname> <given-names>M</given-names></name> <name><surname>Vayena</surname> <given-names>E</given-names></name></person-group>. <article-title>The global landscape of AI ethics guidelines.</article-title> <source><italic>Nat Mach Intell.</italic></source> (<year>2019</year>) <volume>1</volume>:<fpage>389</fpage>&#x2013;<lpage>99</lpage>.</citation></ref>
<ref id="B10"><label>10.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arowolo</surname> <given-names>SO</given-names></name></person-group>. <article-title>Understanding framing theory.</article-title> <source><italic>Mass Commun Theory.</italic></source> (<year>2017</year>) <volume>3</volume>:<issue>4</issue>.</citation></ref>
<ref id="B11"><label>11.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Asplund</surname> <given-names>T.</given-names></name></person-group> <source><italic>Climate change frames and frame formation: An analysis of climate change communication in the Swedish agricultural sector.</italic></source> <publisher-loc>London</publisher-loc>: <publisher-name>Link&#x00F6;ping University Electronic Press</publisher-name> (<year>2014</year>).</citation></ref>
<ref id="B12"><label>12.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benefo</surname> <given-names>EO</given-names></name> <name><surname>Tingler</surname> <given-names>A</given-names></name> <name><surname>White</surname> <given-names>M</given-names></name> <name><surname>Cover</surname> <given-names>J</given-names></name> <name><surname>Torres</surname> <given-names>L</given-names></name> <name><surname>Broussard</surname> <given-names>C</given-names></name><etal/></person-group> <article-title>Ethical, legal, social, and economic (ELSE) implications of artificial intelligence at a global level: a scientometrics approach.</article-title> <source><italic>AI Ethics.</italic></source> (<year>2022</year>) <volume>2</volume>:<fpage>667</fpage>&#x2013;<lpage>82</lpage>.</citation></ref>
<ref id="B13"><label>13.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Binns</surname> <given-names>R</given-names></name></person-group>. <article-title>Fairness in machine learning: Lessons from political philosophy.</article-title> <source><italic>Paper presented at the Conference on fairness, accountability and transparency.</italic></source> <publisher-loc>New York, NY</publisher-loc> (<year>2018</year>).</citation></ref>
<ref id="B14"><label>14.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holton</surname> <given-names>R</given-names></name> <name><surname>Boyd</surname> <given-names>R</given-names></name></person-group>. <article-title>&#x2018;Where are the people? What are they doing? Why are they doing it?&#x2019;(Mindell) Situating artificial intelligence within a socio-technical framework.</article-title> <source><italic>J Sociol.</italic></source> (<year>2021</year>) <volume>57</volume>:<fpage>179</fpage>&#x2013;<lpage>95</lpage>.</citation></ref>
<ref id="B15"><label>15.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caliskan</surname> <given-names>A</given-names></name></person-group>. <article-title>Beyond Big Data: What Can We Learn from AI Models? Invited Keynote.</article-title> <source><italic>Paper presented at the Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security.</italic></source> <publisher-loc>New York, NY</publisher-loc> (<year>2017</year>).</citation></ref>
<ref id="B16"><label>16.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedler</surname> <given-names>SA</given-names></name> <name><surname>Scheidegger</surname> <given-names>C</given-names></name> <name><surname>Venkatasubramanian</surname> <given-names>S</given-names></name></person-group>. <article-title>The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making.</article-title> <source><italic>Commun ACM.</italic></source> (<year>2021</year>) <volume>64</volume>:<fpage>136</fpage>&#x2013;<lpage>43</lpage>.</citation></ref>
<ref id="B17"><label>17.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caplar</surname> <given-names>N</given-names></name> <name><surname>Tacchella</surname> <given-names>S</given-names></name> <name><surname>Birrer</surname> <given-names>S</given-names></name></person-group>. <article-title>Quantitative evaluation of gender bias in astronomical publications from citation counts.</article-title> <source><italic>Nat Astron.</italic></source> (<year>2017</year>) <volume>1</volume>:<issue>0141</issue>.</citation></ref>
<ref id="B18"><label>18.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hleg</surname> <given-names>A.</given-names></name></person-group> <source><italic>Ethics guidelines for trustworthy AI. B-1049 Brussels.</italic></source> <publisher-loc>Brussels</publisher-loc>: <publisher-name>European Commission</publisher-name> (<year>2019</year>).</citation></ref>
<ref id="B19"><label>19.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goffman</surname> <given-names>E.</given-names></name></person-group> <source><italic>Frame analysis: An essay on the organization of experience.</italic></source> <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Harvard University Press</publisher-name> (<year>1974</year>).</citation></ref>
<ref id="B20"><label>20.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>O&#x2019;Neill</surname> <given-names>S</given-names></name> <name><surname>Williams</surname> <given-names>HT</given-names></name> <name><surname>Kurz</surname> <given-names>T</given-names></name> <name><surname>Wiersma</surname> <given-names>B</given-names></name> <name><surname>Boykoff</surname> <given-names>M</given-names></name></person-group>. <article-title>Dominant frames in legacy and social media coverage of the IPCC Fifth Assessment Report.</article-title> <source><italic>Nat Clim Change.</italic></source> (<year>2015</year>) <volume>5</volume>:<fpage>380</fpage>&#x2013;<lpage>5</lpage>.</citation></ref>
<ref id="B21"><label>21.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carabantes</surname> <given-names>M</given-names></name></person-group>. <article-title>Black-box artificial intelligence: an epistemological and critical analysis.</article-title> <source><italic>AI Soc.</italic></source> (<year>2020</year>) <volume>35</volume>:<fpage>309</fpage>&#x2013;<lpage>17</lpage>.</citation></ref>
<ref id="B22"><label>22.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aftab</surname> <given-names>J</given-names></name> <name><surname>Sarwar</surname> <given-names>H</given-names></name> <name><surname>Kiran</surname> <given-names>A</given-names></name> <name><surname>Qureshi</surname> <given-names>MI</given-names></name> <name><surname>Ishaq</surname> <given-names>MI</given-names></name> <name><surname>Ambreen</surname> <given-names>S</given-names></name><etal/></person-group> <article-title>Ethical leadership, workplace spirituality, and job satisfaction: moderating role of self-efficacy.</article-title> <source><italic>Int J Emerg Mark.</italic></source> (<year>2022</year>) <pub-id pub-id-type="doi">10.1108/IJOEM-07-2021-1121</pub-id> <comment>[Epub ahead of print]</comment>.</citation></ref>
<ref id="B23"><label>23.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chien</surname> <given-names>S</given-names></name> <name><surname>Doyle</surname> <given-names>R</given-names></name> <name><surname>Davies</surname> <given-names>AG</given-names></name> <name><surname>Jonsson</surname> <given-names>A</given-names></name> <name><surname>Lorenz</surname> <given-names>R</given-names></name></person-group>. <article-title>The future of AI in space.</article-title> <source><italic>IEEE Intell Syst.</italic></source> (<year>2006</year>) <volume>21</volume>:<fpage>64</fpage>&#x2013;<lpage>9</lpage>.</citation></ref>
<ref id="B24"><label>24.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chong</surname> <given-names>D</given-names></name> <name><surname>Druckman</surname> <given-names>JN</given-names></name></person-group>. <article-title>Framing theory.</article-title> <source><italic>Annu Rev Polit Sci.</italic></source> (<year>2007</year>) <volume>10</volume>:<fpage>103</fpage>&#x2013;<lpage>26</lpage>.</citation></ref>
<ref id="B25"><label>25.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x2019;angelo</surname> <given-names>P</given-names></name></person-group>. <article-title>News framing as a multiparadigmatic research program: A response to Entman.</article-title> <source><italic>J Commun.</italic></source> (<year>2002</year>) <volume>52</volume>:<fpage>870</fpage>&#x2013;<lpage>88</lpage>.</citation></ref>
<ref id="B26"><label>26.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scheufele</surname> <given-names>DA</given-names></name></person-group>. <article-title>Framing as a theory of media effects.</article-title> <source><italic>J Commun.</italic></source> (<year>1999</year>) <volume>49</volume>:<fpage>103</fpage>&#x2013;<lpage>22</lpage>.</citation></ref>
<ref id="B27"><label>27.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ingram</surname> <given-names>K</given-names></name></person-group>. <article-title>AI and ethics: shedding light on the black box.</article-title> <source><italic>Int Rev Inf Ethics.</italic></source> (<year>2020</year>) <volume>28</volume>.</citation></ref>
<ref id="B28"><label>28.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>W</given-names></name></person-group>. <article-title>Toward human-centered AI: a perspective from human-computer interaction.</article-title> <source><italic>Interactions.</italic></source> (<year>2019</year>) <volume>26</volume>:<fpage>42</fpage>&#x2013;<lpage>6</lpage>.</citation></ref>
<ref id="B29"><label>29.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>J</given-names></name> <name><surname>Anthis</surname> <given-names>JR</given-names></name></person-group>. <article-title>The moral consideration of artificial entities: a literature review.</article-title> <source><italic>Sci Eng Ethics.</italic></source> (<year>2021</year>) <volume>27</volume>:<issue>53</issue>.</citation></ref>
<ref id="B30"><label>30.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hagendorff</surname> <given-names>T</given-names></name></person-group>. <article-title>The ethics of AI ethics: An evaluation of guidelines.</article-title> <source><italic>Minds Mach.</italic></source> (<year>2020</year>) <volume>30</volume>:<fpage>99</fpage>&#x2013;<lpage>120</lpage>.</citation></ref>
<ref id="B31"><label>31.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hern&#x00E1;ndez</surname> <given-names>D</given-names></name> <name><surname>Cano</surname> <given-names>J-C</given-names></name> <name><surname>Silla</surname> <given-names>F</given-names></name> <name><surname>Calafate</surname> <given-names>CT</given-names></name> <name><surname>Cecilia</surname> <given-names>JM</given-names></name></person-group>. <article-title>AI-enabled autonomous drones for fast climate change crisis assessment.</article-title> <source><italic>IEEE Internet Things J.</italic></source> (<year>2021</year>) <volume>9</volume>:<fpage>7286</fpage>&#x2013;<lpage>97</lpage>.</citation></ref>
<ref id="B32"><label>32.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Druckman</surname> <given-names>JN</given-names></name></person-group>. <article-title>The implications of framing effects for citizen competence.</article-title> <source><italic>Polit Behav.</italic></source> (<year>2001</year>) <volume>23</volume>:<fpage>225</fpage>&#x2013;<lpage>56</lpage>.</citation></ref>
<ref id="B33"><label>33.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Markus</surname> <given-names>AF</given-names></name> <name><surname>Kors</surname> <given-names>JA</given-names></name> <name><surname>Rijnbeek</surname> <given-names>PR</given-names></name></person-group>. <article-title>The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies.</article-title> <source><italic>J Biomed Inform.</italic></source> (<year>2021</year>) <volume>113</volume>:<issue>103655</issue>.</citation></ref>
<ref id="B34"><label>34.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Do&#x0161;ilovi&#x0107;</surname> <given-names>FK</given-names></name> <name><surname>Br&#x010D;i&#x0107;</surname> <given-names>M</given-names></name> <name><surname>Hlupi&#x0107;</surname> <given-names>N</given-names></name></person-group>. <article-title>Explainable artificial intelligence: A survey.</article-title> <source><italic>Paper presented at the 2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO).</italic></source> <publisher-loc>New York, NY</publisher-loc> (<year>2018</year>).</citation></ref>
<ref id="B35"><label>35.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Datzov</surname> <given-names>NL</given-names></name></person-group>. <article-title>The role of patent (in)eligibility in promoting artificial intelligence innovation.</article-title> <source><italic>UMKC L Rev.</italic></source> (<year>2023</year>) <volume>92</volume>:<issue>1</issue>.</citation></ref>
<ref id="B36"><label>36.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hugosson</surname> <given-names>B</given-names></name> <name><surname>Dinh</surname> <given-names>D</given-names></name> <name><surname>Esmerson</surname> <given-names>G.</given-names></name></person-group> <source><italic>Why you should care: Ethical AI principles in a business setting: A study investigating the relevancy of the Ethical framework for AI in the context of the IT and telecom industry in Sweden.</italic></source> <publisher-loc>Brussels</publisher-loc>: <publisher-name>European Commission</publisher-name> (<year>2019</year>).</citation></ref>
<ref id="B37"><label>37.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>S&#x00E6;tra</surname> <given-names>HS</given-names></name> <name><surname>Coeckelbergh</surname> <given-names>M</given-names></name> <name><surname>Danaher</surname> <given-names>J</given-names></name></person-group>. <article-title>The AI ethicist&#x2019;s dilemma: fighting Big Tech by supporting Big Tech.</article-title> <source><italic>AI Ethics.</italic></source> (<year>2022</year>) <volume>2</volume>:<fpage>15</fpage>&#x2013;<lpage>27</lpage>.</citation></ref>
<ref id="B38"><label>38.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schnack</surname> <given-names>H.</given-names></name></person-group> <article-title>Bias, noise, and interpretability in machine learning: from measurements to features.</article-title> <source><italic>Machine Learning.</italic></source> <publisher-loc>London</publisher-loc>: <publisher-name>Elsevier</publisher-name> (<year>2020</year>). p. <fpage>307</fpage>&#x2013;<lpage>28</lpage>.</citation></ref>
<ref id="B39"><label>39.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shneiderman</surname> <given-names>B</given-names></name></person-group>. <article-title>Human-centered artificial intelligence: Reliable, safe &#x0026; trustworthy.</article-title> <source><italic>Int J Hum Comput Interact.</italic></source> (<year>2020</year>) <volume>36</volume>:<fpage>495</fpage>&#x2013;<lpage>504</lpage>.</citation></ref>
<ref id="B40"><label>40.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Siau</surname> <given-names>K</given-names></name> <name><surname>Wang</surname> <given-names>W</given-names></name></person-group>. <article-title>Artificial intelligence (AI) ethics: ethics of AI and ethical AI.</article-title> <source><italic>J Database Manage.</italic></source> (<year>2020</year>) <volume>31</volume>:<fpage>74</fpage>&#x2013;<lpage>87</lpage>.</citation></ref>
<ref id="B41"><label>41.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>von Eschenbach</surname> <given-names>WJ</given-names></name></person-group>. <article-title>Transparency and the black box problem: Why we do not trust AI.</article-title> <source><italic>Philos Technol.</italic></source> (<year>2021</year>) <volume>34</volume>:<fpage>1607</fpage>&#x2013;<lpage>22</lpage>.</citation></ref>
<ref id="B42"><label>42.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Warner</surname> <given-names>R</given-names></name> <name><surname>Sloan</surname> <given-names>RH</given-names></name></person-group>. <article-title>Making artificial intelligence transparent: Fairness and the problem of proxy variables.</article-title> <source><italic>Crim Just Ethics.</italic></source> (<year>2021</year>) <volume>40</volume>:<fpage>23</fpage>&#x2013;<lpage>39</lpage>.</citation></ref>
<ref id="B43"><label>43.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Whittlestone</surname> <given-names>J</given-names></name> <name><surname>Nyrup</surname> <given-names>R</given-names></name> <name><surname>Alexandrova</surname> <given-names>A</given-names></name> <name><surname>Cave</surname> <given-names>S</given-names></name></person-group>. <article-title>The role and limits of principles in AI ethics: Towards a focus on tensions.</article-title> <source><italic>Paper presented at the Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society.</italic></source> <publisher-loc>New York, NY</publisher-loc> (<year>2019</year>).</citation></ref>
<ref id="B44"><label>44.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pitney</surname> <given-names>AM</given-names></name> <name><surname>Penrod</surname> <given-names>S</given-names></name> <name><surname>Foraker</surname> <given-names>M</given-names></name> <name><surname>Bhunia</surname> <given-names>S</given-names></name></person-group>. <article-title>A systematic review of the 2021 Microsoft Exchange data breach exploiting multiple vulnerabilities.</article-title> <source><italic>Paper presented at the 2022 7th International Conference on Smart and Sustainable Technologies (SpliTech).</italic></source> <publisher-loc>New York, NY</publisher-loc> (<year>2022</year>).</citation></ref>
<ref id="B45"><label>45.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilson</surname> <given-names>N.</given-names></name></person-group> <source><italic>Understanding the Battle for AI in Warfare through the Practices of Assemblage: A Case Study of Project Maven.</italic></source> <publisher-loc>Brussels</publisher-loc>: <publisher-name>European Commission</publisher-name> (<year>2020</year>).</citation></ref>
</ref-list>
</back>
</article>
