Why cultural competence is a thing when using AI tools

Generative AI – famously and infamously – gets things wrong. How do we deal with that when it comes to representing actual people?

I’m afraid we start with an inexpert mirror selfie of me, taken in the bathroom with my hair just combed (okay fine, I don’t actually comb it. There are too many knots. I am perpetually on the brink of the matting that leads to dreadlocks):

And this is the image that an AI image generation platform called Tess made of me, when asked by my lovely assistant Anya to make an image of a woman (that is, me) with wild and vibrant hair (and looking at a shiny object, to illustrate a social media post about shiny object syndrome):

Dear reader, you will observe a striking difference between the real me and the AI me, and it isn’t that AI-me is somewhat younger. She is brown-skinned, while I am somewhat paler.

No matter how hard Anya tried, the Tess AI platform was insistent that someone with wild and vibrant hair must be a dark-skinned person.

A perfect illustration of bias in generative AI. Bias, in this case, that doesn’t offend me at all because it is so un-serious. I am after all privileged and white, and having an AI misunderstand my un-stereotypical hair is not at all prejudicial to me.

It is particularly not harmful when compared to this: “adding biased generative AI to ‘virtual sketch artist’ software used by police departments could ‘put already over-targeted populations at an even increased risk of harm ranging from physical injury to unlawful imprisonment’.”

The example comes from When AI Gets It Wrong: Addressing AI Hallucinations and Bias, an article on the MIT Sloan Teaching & Learning Technologies website, where there’s an excellent AI Resource Hub.

What to do about the issue of bias in AI

MIT has some advice, which gets repeated over and over again in any discussion about Generative AI (here it is, shortened a bit by me).

• Critically evaluate AI outputs: Unlike humans, AI systems do not have the ability to think or form beliefs. They use their training data, without any inherent capacity for reasoning or reflection. Users must approach AI outputs with a critical eye and evaluate them with human judgement.

• Diversify your sources: It’s important to check the accuracy of AI-generated content. The most important strategy is to cross-reference AI output with reliable sources such as expert publications. Also consider comparing outputs from multiple AI platforms to get a better sense of the quality of results that each can produce.

These are excellent points, but I don’t think they get to the heart of the matter. We’re talking here about the way in which Gen AI replicates the biases and nastinesses of the real world.

So the first step in working with AI-generated content of any kind is to be aware of your own biases. I know you think you don’t have biases and prejudices – but you do, we all do. Even if it’s just the certainty that all Australian cricketers are arrogant (take my South African word for it). This article I wrote for a client on unconscious bias is worth a read:

Conscious inclusion – a guide to reducing unconscious bias in the workplace

Unconscious bias means you won’t see the problem with Gen AI content if you can’t see the problem in the real world.

Developing cultural competence is a good place to start:

AltoPartners Guide to Diversity, Equity and Inclusion: Cultural competence and diversity initiatives

If you want to dive a little deeper, try this test for implicit bias – it has some nuances that don’t work in the South African context, but it still pushes you to think about what’s going on in your own head.

Once you have that clear, you’ll be able to evaluate the content you get from AI more critically and carefully.

This reflection on bias is some of what you’ll get in fortnightly Sensible Woman’s Guide blog posts – a look at the process of bringing your human self to whatever this new wave of tech throws at us.

What else you’ll get

Pointers to really good resources (like the MIT sources cited above) and practical examples of AI tools and how they might work for you.

Let’s get to this edition’s offerings.

What to play with in the next two weeks

The issue of picture generation by AI tools is contentious. If the likes of ChatGPT’s DALL-E tool are being trained on the work of human artists, the argument goes, then that’s stealing, exploitation and copyright infringement. Not to mention that the pictures are often just plain creepy.

I’d suggest you take a look at a site called Tess – which bills itself as both ethical and beautiful. The concept is this:

Tess is the world’s first properly-licensed AI image generator. Our mission is to empower creative people to leverage AI ethically. To this end, we’ve built a platform that allows creators to generate images in a consistent visual style, and for the artists behind the styles to be fairly compensated for their work.

To put it briefly, Tess enters partnerships with artists and pays them an advance royalty for the right to train AI picture generation tools (called “models”) on their work. If you want to generate an image for your own use (be that a blog post or a school newsletter), you sign in and can then make images using the various stylised models.

Here’s the site’s blog post on how to use it: How to Use Tess

Thing is: you’ll need to pay. The starter package is $20 a month. (When I first used Tess, it was possible to generate some images for free, but that limit was hit fairly fast.)

Now, I know our impulse over years of using the internet is that everything should be free. But if you want people like artists and journalists and writers not to be screwed over by AI (and the advert-driven revenue model of the big tech companies), then at some point you’re going to have to reconcile yourself to paying for things.

Lecture over.

The questions to ask yourself:

Do I want free AI images? Then use the free tools – but take your chances with weirdness and the ethical considerations around exploitation of artists and copyright and so on.

Do I need artwork enough in the course of my work that it’s worth paying for? Then consider Tess.

If you don’t want ethical headaches, and won’t or can’t pay for Tess, there are always legitimate sources of free stock images – my guide to that: How to find a free picture | Safe Hands

(With thanks to Hazel Bird, who first pointed me in the direction of Tess here: Copyediting and AI: a manifesto)

THE QUESTION YOU WERE AFRAID TO ASK FOR FEAR OF LOOKING SILLY

What is AI anyway?

Britannica explains it nicely:

artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Give me some examples:

There are a bunch of everyday examples, most of them built on machine learning (where software learns patterns from data, rather than being explicitly programmed for every task):

  • Streaming platforms like Netflix making recommendations about what to watch
  • Your phone using your face as an unlocking method
  • A spell checker in a word processing programme

What’s all the fuss about now?

In the last couple of years, tools like ChatGPT have boomed – mainly I think because of their ability to “talk” to people, and to help with everyday tasks (try asking for a recipe based on your available ingredients and cooking method).

These new fun and scary things (often called Generative AI) are almost always built on Large Language Models (LLMs), which are trained on massive datasets of text, allowing them to recognise patterns in human language (or other complex data) and generate new content in response.
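If you’d like a feel for what “learning patterns from language” means in practice, here is a toy sketch of my own, purely for illustration – real LLMs use neural networks trained on billions of examples, nothing this simple. It just counts which word tends to follow which in a scrap of text, and then uses those counts to guess the next word:

from collections import Counter, defaultdict

# A toy stand-in for "learning from language data": count which word
# tends to follow which in a tiny sample of text. Real LLMs use neural
# networks and vastly larger datasets, but the underlying idea of
# predicting likely continuations from patterns in data is similar.
training_text = "the cat sat on the mat the dog sat on the rug the cat chased the bird"
words = training_text.split()

# For every word, keep a tally of the words seen immediately after it.
follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Guess the most common word seen after `word` in the training text."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" – the word seen most often after "the"
print(predict_next("sat"))  # "on"

That is all the counting really amounts to: no understanding, no beliefs – which is exactly why the critical human eye remains essential.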

The thing you need to know: When someone at a dinner party goes on a bit about how evil AI is and how it is going to take over the world, remain resolutely silent. But know that AI is a very big and inaccurate term for a wide range of tools and functions and kinds of programming, some of which are truly useful. And do your own quiet research to find out what kind of AI is being talked about in any given instance. (You will observe that I used the term “AI” in just such an inaccurate way in this newsletter: shortcuts can be useful. But if we are to get to grips with this big change that’s happening, understanding the foundations can be helpful.)

OTHER THINGS I’VE WRITTEN

Why that new toy is not as shiny as you think it is | Safe Hands
Thoughts on what it means to be a white South African | Safe Hands
How to find a free picture | Safe Hands

WANT MORE LIKE THIS BLOG POST?

I’ll be writing articles like this every two weeks, and you can get them in your email by subscribing: The Sensible Woman’s Guide to AI and Content Creation

Main picture: Google DeepMind, Unsplash. The caption reads: “An artist’s illustration of artificial intelligence (AI). This image explores machine learning as a human-machine system, where AI has a symbiotic relationship with humans. It was created by Aurora Mititelu as part of the Visualising AI project launched by Google DeepMind.”
