Submitted by Lenovo
Ada Lopez, Senior Manager, Global Product Diversity Office, Lenovo
The past year has seen artificial intelligence (AI) become a dinner-table topic of conversation around the world, thanks to bots such as ChatGPT, which dazzles users with its ability to compose lifelike text and even computer code. But what happens when AI makes wrong decisions?
It’s a serious issue. Bias – and gender bias in particular – is common in AI systems, leading to a variety of harms, from discrimination and reduced transparency, to security and privacy issues. In the worst cases, wrong AI decisions could damage careers and even cost lives. Without dealing with AI’s bias problem, we risk an imbalanced future – one in which AI will never reach its full potential as a tool for the greater good.
AI is only as good as the data sets it is trained on. Much data is skewed towards men, as is the language used in everything from online news articles to books. Research shows that training AI on Google News data leads to associating men with roles such as ‘captain’ and ‘financier’, whereas women are associated with ‘receptionist’ and ‘homemaker’.
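The skew described above can be measured directly in word embeddings by comparing cosine similarities. The sketch below uses hand-made toy vectors (not the real Google News embeddings, which require a large download) purely to illustrate the kind of check researchers run; the vectors and word list are illustrative assumptions.

```python
# Illustrative sketch with hypothetical toy vectors, not real Google News
# embeddings: models trained on skewed text place occupation words closer
# to one gender's words than the other's, and cosine similarity exposes it.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-d embeddings, hand-made to mimic the reported skew.
emb = {
    "he":           [0.9, 0.1, 0.0],
    "she":          [0.1, 0.9, 0.0],
    "captain":      [0.8, 0.2, 0.1],
    "receptionist": [0.2, 0.8, 0.1],
}

def gender_lean(word):
    """Positive -> closer to 'he'; negative -> closer to 'she'."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

print(gender_lean("captain"))       # positive: leans male in this toy data
print(gender_lean("receptionist"))  # negative: leans female
```

With real pretrained embeddings the same two-line comparison is what reveals the 'captain'/'receptionist' associations the research describes.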
As a result, many AI systems – trained on such biased data and often built by largely male teams – have had significant problems with women, from credit card companies that seem to offer men more generous credit, to screening tools for everything from COVID-19 to liver disease. These are areas where wrong decisions can damage people’s financial or physical health.
This is compounded by the fact that just 22% of professionals in AI and data science are women, according to the World Economic Forum’s research. Gender itself is also becoming a more complex topic, with non-binary and transgender identities creating more potential for bias in many different forms.
AI is a powerful tool that offers us the chance to tackle previously intractable problems, from cancer to climate change – but unless the bias issue is addressed, AI risks being untrustworthy and, ultimately, irrelevant. If the industry cannot confront bias, these tools will not be trusted or used, and artificial intelligence risks another ‘AI winter’ like that of the 1970s, when interest in the technology dried up.
Dealing with data
Going forward, businesses will increasingly rely on AI technology to turn their data into value. According to Lenovo’s Data for Humanity report, 88% of business leaders say that AI technology will be an important factor in helping their organisation unlock the value of its data over the next five years.
So how will business leaders deal with the problem of bias? For the first time in history, we have this powerful technology that is entirely created from our own understanding of the world. AI is a mirror that we hold up to ourselves. We shouldn’t be shocked by what we see in this mirror. Instead, we should use this knowledge to change the way we do things. That starts with ensuring that the way our organisations work is fair in terms of gender representation and inclusion – but also by paying attention to how data is collected and used.
Whenever you collect, process or use data, you risk introducing bias. It can creep in anywhere: if there is more data for one gender than another, for example, or if the questions were written only by men.
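The first of those checks can be automated before any training happens. The sketch below is a hypothetical audit, with made-up field names and a toy dataset, showing how to surface a representation gap so it is visible before it is baked into a model.

```python
# Hypothetical sketch: before training, audit how evenly a labelled
# dataset covers each group. The field name and data are illustrative.
from collections import Counter

def representation_shares(records, field="gender"):
    """Return each group's share of the dataset, so skew is visible
    before it is baked into a model."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy dataset skewed 3:1 towards men, as much real-world data is.
sample = [{"gender": "male"}] * 75 + [{"gender": "female"}] * 25
print(representation_shares(sample))  # {'male': 0.75, 'female': 0.25}
```

A gap like this does not prove the resulting model will be biased, but it flags exactly where to look – and whether to collect more data before training.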
For business leaders, thinking about where data comes from, how it’s used, and how bias can be combatted will become increasingly important.
Technical solutions will also play an important part. Data scientists don’t have the luxury of going through every line of text used in a training model.
There are two solutions: one is to have many more people test the model and spot problems. The better solution, though, is more efficient tools that find bias, either in the data the AI is fed or in the model itself. With ChatGPT, for example, the researchers use a machine learning model to annotate potentially problematic data. The AI community needs to focus here. Tools that provide greater transparency into how AI works will also be important.
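One of the simplest automated bias checks on a model itself is to compare its error rate across groups. The sketch below uses entirely made-up predictions to show the shape of such an audit; it is an illustration, not any specific tool's method.

```python
# Hedged sketch of a basic automated bias check: compare a model's
# error rate across groups. All data here is made up for illustration.
def error_rate_by_group(y_true, y_pred, groups):
    """Fraction of wrong predictions per group."""
    errors, totals = {}, {}
    for truth, pred, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        errors[g] = errors.get(g, 0) + (truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Toy screening-tool output: the model is wrong far more often for
# group "B" - a red flag worth investigating before deployment.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(error_rate_by_group(y_true, y_pred, groups))
```

A large gap between groups is the machine-readable signal that lets a small team audit a model they could never inspect line by line.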
It also helps if we consider the broader context. The tools we use today are already creating bias in the models we will apply in the future. We might think that we have ‘solved’ a bias issue now, but in 50 years, for example, new tools or pieces of evidence might change completely how we look at certain things. This was the case with the history of Rett syndrome diagnosis, where data was primarily collected from girls. The lack of data on boys with the disorder introduced bias into data modelling several years later and led to inaccurate diagnoses and treatment recommendations for boys.
Similarly, in 100 years, humans might work only three days a week, meaning today’s data would be skewed towards a five-day working week. Data scientists and business leaders must take such context into account. Understanding social context is equally important for businesses operating in multiple territories today.
Mastering such issues will be one of the touchstones of responsible AI. For business leaders using AI technology, being conscious of these issues will grow in importance, along with public and regulatory interest. By next year, 60% of AI providers will offer a means to deal with possible harm caused by the technology alongside the tech itself, according to Gartner.
Business leaders must plan thoroughly for responsible AI and create their own definition of what this means for their organisation, by identifying the risks and assessing where bias can creep in. They need to engage with stakeholders to understand potential problems and determine how to move forward with best practices. Using AI responsibly will be a long journey, and one that will require constant attention from leadership.
The rewards of using AI responsibly, and rooting out bias wherever it creeps in, will be considerable, allowing business leaders to improve their reputation for trust, fairness and accountability, while delivering real value to their organisation, to customers and to society as a whole.
Businesses need to deal with this at board level to ensure bias is dealt with and AI is used responsibly across the whole organisation. This could include launching their own Responsible AI board to ensure that all AI applications are evaluated for bias and other problems. Leaders also need to address the broader problem of women in STEM, particularly in data science. Women – especially those in leadership roles – will be central to solving the issue of gender bias in AI.
An AI-driven future
Understanding the problem of gender bias and working towards effective ways of dealing with it will be vitally important to forward-thinking organisations hoping to use AI to unlock the value of their data.
Thinking carefully about how AI is used across an organisation, using tools to detect bias and ensure transparency, will help. But business leaders also need to take a broader view of where their data comes from, how it is used, and what steps are being taken to avoid bias. Doing so will be essential to unlocking the value of their data – and creating an inclusive future where AI can work to its fullest potential.
Lenovo (HKSE: 992) (ADR: LNVGY) is a US$70 billion revenue global technology powerhouse, ranked #171 in the Fortune Global 500, employing 75,000 people around the world, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver smarter technology for all, Lenovo has built on its success as the world’s leading PC player by expanding into new growth areas of infrastructure, mobile, solutions and services. This transformation together with Lenovo’s world-changing innovation is building a more inclusive, trustworthy, and sustainable digital society for everyone, everywhere. To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.