AB: AI Ethics and Regulation: How Investors Can Navigate the Maze

From potentially brand-damaging ethical risks to regulatory uncertainty, AI poses challenges for investors. But there is a path forward.

Published 06-19-24

Submitted by AllianceBernstein

By Saskia Kort-Chick | Director of Social Research and Engagement—Responsibility, and Jonathan Berkow | Director of Data Science—Equities

Artificial intelligence (AI) poses many ethical issues that can translate into risks for consumers, companies and investors. And AI regulation, which is developing unevenly across multiple jurisdictions, adds to the uncertainty. The key for investors, in our view, is to focus on transparency and explainability.

The ethical issues and risks of AI begin with the developers who create the technology. From there, they flow to the developers’ clients—companies that integrate AI into their businesses—and on to consumers and society more broadly. Through their holdings in AI developers and companies that use AI, investors are exposed to both ends of the risk chain.

AI is developing quickly, far ahead of most people’s understanding of it. Among those trying to catch up are global regulators and lawmakers. At first glance, their activity in the AI area has surged in the last few years; many countries have released related strategies and others are close to introducing them (Display).

In reality, the progress has been uneven and is far from complete. There is no uniform approach to AI regulation across jurisdictions, and some countries introduced their regulations before ChatGPT launched in late 2022. As AI proliferates, many regulators will need to update and possibly expand the work they’ve already done.

For investors, the regulatory uncertainty compounds AI’s other risks. To understand and assess how to deal with these risks, it helps to have an overview of the AI business, ethical and regulatory landscape.

Infographic: Global Development of AI Policies and Regulation Activity Is High, but Progress Is Uneven

Data Risks Can Damage Brands

AI involves an array of technologies directed toward performing tasks normally done by humans, and performing them in a human-like way. AI and business intersect through generative AI, which spans various forms of content generation (video, voice, text and music), and large language models (LLMs), a subset of generative AI focused on natural language processing. LLMs serve as foundational models for various AI applications—such as chatbots, automated content creation, and analyzing and summarizing large volumes of information—that companies are increasingly using in their customer engagement.
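To make these LLM applications concrete, here is a minimal sketch of a document-summarization helper of the kind a company might embed in a customer-engagement workflow. It assumes the open-source Hugging Face transformers library and its default pretrained summarization model; the function name and parameters are illustrative, not drawn from the article.

```python
# Minimal sketch: wrapping an off-the-shelf LLM-based summarizer.
# Assumes the Hugging Face `transformers` library; the model choice and
# parameters below are illustrative assumptions, not recommendations.
from transformers import pipeline

# Load a default pretrained summarization model (downloads on first use).
summarizer = pipeline("summarization")

def summarize_document(text: str, max_tokens: int = 60) -> str:
    """Return a short summary of a longer customer-facing document."""
    result = summarizer(text, max_length=max_tokens, min_length=20, do_sample=False)
    return result[0]["summary_text"]

if __name__ == "__main__":
    sample = (
        "Artificial intelligence regulation is developing unevenly across "
        "jurisdictions. Companies that adopt AI face ethical and compliance "
        "risks, and investors increasingly expect clear disclosures about "
        "how those risks are identified, mitigated and reported."
    )
    print(summarize_document(sample))
```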

As many companies have found, however, AI innovations may involve potentially brand-damaging risks. These can arise from biases inherent in the data on which LLMs are trained and have resulted, for example, in banks inadvertently discriminating against minorities in granting home-loan approvals, and in a US health insurance provider facing a class-action lawsuit alleging that its use of an AI algorithm caused extended-care claims for elderly patients to be wrongfully denied.

Bias and discrimination are just two of the risks that regulators target and that should be on investors’ radars; others include intellectual property rights and privacy considerations concerning data. Risk-mitigation measures—such as developer testing of the performance, accuracy and robustness of AI models, and providing companies with transparency and support in implementing AI solutions—should also be scrutinized.
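As an illustration of the bias testing mentioned above, the sketch below computes approval rates by demographic group for a hypothetical loan-approval model and checks a demographic-parity ratio against the widely cited four-fifths (0.8) rule of thumb. The data, column names and threshold are assumptions for illustration only, not figures from the article.

```python
# Minimal sketch: screening a model's decisions for group-level bias.
# The data, column names and 0.8 threshold (the "four-fifths rule") are
# illustrative assumptions, not figures from the article.
import pandas as pd

# Hypothetical model outputs: one row per loan applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Approval rate within each demographic group.
rates = decisions.groupby("group")["approved"].mean()

# Demographic-parity ratio: lowest group approval rate over the highest.
parity_ratio = rates.min() / rates.max()

print(rates)
print(f"Demographic-parity ratio: {parity_ratio:.2f}")

# A ratio well below ~0.8 is a common red flag that warrants deeper review
# of the training data and model before deployment.
if parity_ratio < 0.8:
    print("Potential disparate impact detected; investigate further.")
```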

Dive Deep to Understand AI Regulations

The AI regulatory environment is evolving in different ways and at different speeds across jurisdictions. The most recent developments include the European Union (EU)’s Artificial Intelligence Act, which is expected to come into force around mid-2024, and the UK government’s response to a consultation process triggered last year by the launch of the government’s AI regulation white paper.

Both efforts illustrate how AI regulatory approaches can differ. The UK is adopting a principles-based framework that existing regulators can apply to AI issues within their respective domains. In contrast, the EU act introduces a comprehensive legal framework with risk-graded compliance obligations for developers, companies, and importers and distributors of AI systems.

Investors, in our view, should do more than drill down into the specifics of each jurisdiction’s AI regulations. They should also familiarize themselves with how jurisdictions are managing AI issues using laws that predate and stand outside AI-specific regulations—for example, copyright law to address data infringements and employment legislation in cases where AI has an impact on labor markets.

Infographic (pie chart): Using AI Responsibly: An All-Around View for Investors.

Fundamental Analysis and Engagement Are Key

A good rule of thumb for investors trying to assess AI risk is that companies that proactively make full disclosures about their AI strategies and policies are likely to be well prepared for new regulations. More generally, fundamental analysis and issuer engagement—the basics of responsible investment—are crucial to this area of research.

Fundamental analysis should delve not only into AI risk factors at the company level but also along the business chain and across the regulatory environment, testing insights against core responsible-AI principles (Display).

Engagement conversations can be structured to cover AI issues not only as they affect business operations, but from environmental, social and governance perspectives, too. Questions for investors to ask boards and management include the following:

  • AI integration: How has the company integrated AI into its overall business strategy? What are some specific examples of AI applications within the company?
  • Board oversight and expertise: How does the board ensure it has sufficient expertise to effectively oversee the company’s AI strategy and implementation? Are there any specific training programs or initiatives in place?
  • Public commitment to responsible AI: Has the company published a formal policy or framework on responsible AI? How does this policy align with industry standards, ethical AI considerations, and AI regulation?
  • Proactive transparency: Does the company have any proactive transparency measures in place to withstand future regulatory implications?
  • Risk management and accountability: What risk management processes does the company have in place to identify and mitigate AI-related risks? Is there delegated responsibility for overseeing these risks?
  • Data challenges in LLMs: How does the company address privacy and copyright challenges associated with the input data used to train large language models? What measures are in place to ensure input data is compliant with privacy regulations and copyright laws, and how does the company handle restrictions or requirements related to input data?
  • Bias and fairness challenges in generative AI systems: What steps does the company take to prevent and/or mitigate biased or unfair outcomes from its AI systems? How does the company ensure that the outputs of any generative AI systems it uses are fair and unbiased, and do not perpetuate discrimination or harm to any individual or group?
  • Incident tracking and reporting: How does the company track and report on incidents related to its development or use of AI, and what mechanisms are in place for addressing and learning from these incidents?
  • Metrics and reporting: What metrics does the company use to measure the performance and impact of its AI systems, and how are these metrics reported to external stakeholders? How does the company maintain due diligence in monitoring the regulatory compliance of its AI applications?

Ultimately, the best way for investors to find their way through the maze is to stay grounded and skeptical. AI is a complex and fast-moving technology. Investors should insist on clear answers and not be unduly impressed by elaborate or complicated explanations.

The authors would like to thank Roxanne Low, ESG Analyst with AB’s Responsible Investing team, for her research contributions.

The views expressed herein do not constitute research, investment advice or trade recommendations and do not necessarily represent the views of all AB portfolio-management teams. Views are subject to revision over time.

Learn more about AB’s approach to responsibility here.

AllianceBernstein

AllianceBernstein (AB) is a leading global investment management firm that offers high-quality research and diversified investment services to institutional investors, individuals, and private wealth clients in major world markets. We believe corporate responsibility, responsible investing and stewardship are intertwined. To be effective stewards of our clients’ assets, we strive to invest responsibly—assessing, engaging on and integrating material issues, including environmental, social and governance (ESG), and climate change considerations in most of our actively managed strategies. We also strive to hold ourselves as a firm to practices similar to those we ask of issuers. Our stewardship practices, investment strategy and decision-making are guided by our purpose, mission and values.

Our purpose—pursue insight that unlocks opportunity—inspires our firm to act responsibly. While opportunity means something different to each of our stakeholders, it always means considering each stakeholder’s unique goals. AB’s mission is to help our clients define and achieve their investment goals, explicitly stating what we do to unlock opportunity for our clients. We became a signatory to the Principles for Responsible Investment (PRI) in 2011. This began our journey to formalize our commitment to identifying responsible ways to unlock opportunities for our clients by integrating material ESG factors throughout most of our actively managed equity and fixed-income client accounts, funds and strategies. AB also engages issuers where it believes the engagement is in the best financial interest of its clients.

Because we are an active manager, our differentiated insights drive our ability to deliver alpha and design innovative investment solutions. ESG and climate issues are important elements in forming insights and in presenting potential risks and opportunities that can have an effect on the performance of the companies and issuers that we invest in and the portfolios that we build.

Our values provide a framework for the behaviors and actions that deliver on our purpose and mission. Values align our actions. Each value emerges from the firm’s collective character—yet is also aspirational.

  • Invest in One Another means that we have a strong organizational culture where diversity is celebrated and mentorship is critical to our success. When we invest in one another, we empower our employees to reach their potential, so that they can help our clients realize theirs. This enables us to partner with clients to design and deliver improved investment outcomes.
  • Strive for Distinctive Knowledge means that we collaboratively identify creative solutions to clients’ economic, ESG and climate-related investment challenges through our expertise in a wide range of investment disciplines and close collaboration among our investment experts.
  • Speak with Courage and Conviction informs how we engage our AB colleagues and issuers. We seek to learn from other parts of our business to strengthen our own views. And we engage issuers for insight and action by sharing ideas and best practices.
  • Act with Integrity—Always is the bedrock of our relationships and has specific meaning for our business. Unlike many other asset managers, we’re singularly focused on providing asset management and research to our clients. We don’t engage in activities that could be distracting, or create conflicts—such as investment banking, insurance writing, commercial banking or proprietary trading for our own account. We are unconflicted and fully accountable.

As of September 30, 2023, AB had $669B in assets under management, $458B of which were ESG-integrated. Additional information about AB may be found on our website, www.alliancebernstein.com.
