Artificial Intelligence: Balancing Inclusivity & Diversity with Accuracy

OpenAI, the creator of ChatGPT, is the undisputed leader in commercial AI technology. Google, however, has been responsible for much of the foundational work on LLMs – large language models, the de facto engines of AI tools – and it is an attractive horse to back to rein in OpenAI. Google, of course, released its own rival to ChatGPT, called Gemini, in December 2023, but it has been a shaky start.

One of the most embarrassing episodes for Google was the ‘mistakes’ of its Gemini AI image generator. If you weren’t aware, an AI image generator is a text-to-image creator. Give it a prompt, such as “Create an image of a man sitting on a unicorn,” and it will create that image. OpenAI’s image creator is DALL-E, and it has been widely celebrated. Google’s Gemini image creator, by contrast, suffered a series of missteps that led to it being pulled from the market.

Google had to take its AI generator off the market 

Those “missteps” stemmed from the way the model handled diversity. People asked the generator to create an image of a typical 16th-century pope, and it produced an image of a woman of color. Another “mistake” saw it depict Black people when asked to produce images of Second World War Nazi soldiers. The images went viral on social media, prompting Google to pull the image generator for retraining. At the time of writing, it has still not been relaunched.

Now, the purpose here is not to disparage Google. In fact, some of its intentions were justifiable: it wanted to create an AI that recognised the diversity of the globe. Yet Google admitted that it sacrificed accuracy in that pursuit, and that it caused offence. The episode did, however, shine a light on how this new era of AI tools must balance inclusivity and diversity with accuracy.

To explain: most level-headed people would accept that it is a good thing for an AI to create an image of a woman when asked, “What does an astronaut look like?”. Women can be astronauts, so that’s understandable. But if the question were changed to “What did a typical astronaut look like in the 1960s?”, the accurate portrayal would be a white male. Valentina Tereshkova became the first woman in space in 1963, but we must stress that the prompt asked for “typical.”

Fintech shows how technology can be inclusive 

However, one of the main issues here is that AI goes well beyond tools like ChatGPT and image generators. AI bots, for example, are becoming more widely used in financial trading. But how do we balance that with, for instance, religious sensibilities? Those of the Islamic faith must trade within specific parameters: practicing Muslims cannot trade stocks tied to haram activities like alcohol and gambling. Other questions are more complicated. Is forex trading halal? Mostly, but there are certain conditions that Muslims must adhere to, and top trading platforms cater for this with specialist Islamic trading accounts. It perfectly illustrates our point: one size does not fit all with AI bots.

There will, of course, be solutions for the above when it comes to marrying Islam, AI, and trading. Those ‘guardrails’ for Islamic trading accounts are evidence that technology – fintech, in this case – must be tailored in order to be inclusive. Market forces, i.e., the demand for trading platforms to cater to practicing Muslims, led to the creation of these special accounts, and it is hoped that the market will shape AI services in the same way.


Yet there are many other considerations, and not all of them are easy to quantify. Google’s pursuit of diversity was, as we said, done with noble intentions, yet the vast majority of AI bots are being developed in the United States, and, as such, their ethics are determined from a Western standpoint. Despite what you may have heard, AI bots don’t think or reason, and they are only as creative as the data they are trained on. Their ethics are programmed, and that programming comes from the humans behind them. There is no such thing as universal ethics; it’s a matter of perception. That is a challenge for AI.

We need different perspectives in AI

A study in 2021 covered many of the challenges of diversity and inclusion in AI, noting, for example, that women made up only around 10% of Google’s AI workforce, and around 5% at Facebook/Meta, another huge player in the AI field. The study also covered the low number of women taking AI-related computer science courses at elite universities, and spoke of the challenge of getting female perspectives on this disruptive technology. As we said, ethics are about perspective, and much of AI is being programmed by Western males.


One of the main areas of concern is how AI will be used – and, indeed, is already being used – in profiling. Many states use AI-based surveillance software, and some systems have been found to have built-in racial and gender biases. Again, as stressed, these biases are not always intentional, and AI is so complicated that it is not always understood where they come from. But that does not mean the problems should remain unaddressed.

None of this is meant to sound completely negative. The technology being created will have many benefits, and it may meet challenges, such as universally accessible education, that we once thought impossible. But there are also huge challenges that fall under the category of diversity and inclusion, such as AI’s role in racial profiling. The race to master the technology is underway, and it is clearly going to touch all of our lives – for good or for ill. The challenges of inclusive AI must also be met. It is in the hands of a small number of tech companies, some of which have the right intentions. But can they execute them?
