Figuring out which AI tools to use – it’s not a pretty picture

Doing the right thing is complicated. Figuring out which AI tools to use is not just about being cool; it’s a consumer decision too.

Two weeks ago, I promised to do some research to try to figure out which of the much-hyped generative AI tools we should use or even support with our hard-earned money, which we should use with caution, and which we should be actively boycotting.

I did do some research and I do have some tentative answers. And, dear reader, you will be unsurprised to learn that we are not in particularly safe hands here.

I’ll do the detail down below, but here, in very broad outline, is what I found:

  • The big legacy tech companies are the furthest along in making public statements about how they view their corporate social responsibility. That’s to be expected since they’ve been around for a long time now and have been pushed and prodded to get their act together (or at least to appear to have their act together).
  • The new kids on the block are all over the place – some have a fair amount of information on how they view their roles in the wider world, and some are abysmally lacking in information.

In general, there’s no clear winner, no one company that I can point to and say: “These are the good guys; if you are going to patronize (and pay for!) any one tool, this would be it.” When you make your choices, you are going to be looking for the best in a bad bunch.

Here then is the detail. I’ll summarise what I was looking for; outline my fairly haphazard research methodology; and then summarise what I found, using the same set of reporting parameters for each of the AI tools.

What I was looking for:

I was trying to find out if any given AI-tool-producing company had public pronouncements on corporate social responsibility (CSR) as represented in this list of topics:

  • the social and economic disruption that they are likely to cause because of job displacement. Are they investing in reskilling programmes?
  • the accessibility of their tools to people in the Global South / developing economies
  • their impact on the environment

My research “methodology”

Step one: I gave the list of companies / tools to Jack (my son and occasional research assistant) and asked him to look on each company’s website to see if he could find anything relating to the list of topics. I didn’t give him the URLs of the websites – I just wanted to know what he could find by looking in the way that a tech-savvy Gen Z might look.

Step two: I had a look at his findings, and did a little more digging, using search operators in some cases (that’s where you put something like this into the search box, with no space after the colon: site:www.safehands.co.za search term). I often found more information doing this.

Step three: In cases where I had very little information, I then hunted around on the website, found an email address and sent off the questions.

Step four: I turned the questions into a prompt, and in each case, asked the company’s own AI tool to find me the information and to give me the URLs of what it had found.

I did all this in an attempt to replicate the way in which a potential AI user or customer might try to find information about a company. In general, most people would probably only do step one, and give up. I pushed it a bit beyond that because I have the skills to do so.

Limitations of my results

At this point, all I can do is report on what it is the companies say about themselves. I don’t have the resources to do the investigative work that would uncover whether they actually do any of what they say. But as consumers we have to start somewhere – for me, for now, I am happy to make some judgements and choices based on what I can see at face value.

I also didn’t necessarily read every word of every page that I found on those various websites – in fact, I read very few of the pious lists in full. As you’ll see, there’s a winnowing process below, based on the simple criterion of there being anything at all to see.

What I found

What follows is a very short summary of information that I gathered and which now sits in a 17-page Word document. (I can tidy that up and send it to you if you’d like – contact me here). For each company, I’ve reported what I found based on the four steps outlined above, which I am calling:

Step one: Easy-to-find information

Step two: Digging

Step three: Reaching out

Step four: What the AI said

GOOGLE GEMINI

Step one: Easy-to-find information

A page on sustainability was easy to find: Google Sustainability

Step two: Digging

I found a blog page, also about sustainability: Our 2024 Environmental Report

Step three: Reaching out

I didn’t email Google, as we had already found evidence of commitment to sustainability at the very least.

Step four: What the AI said

Gemini’s answer brought up a lot of information! There were statements of AI principles (Our principles), commitments to offering skills training, their Next Billion Users project and, as noted above, multiple materials on sustainability.

MICROSOFT COPILOT

Step one: Easy-to-find information

Jack said: “Same as Google, nothing specific about AI. Or I couldn’t find it.” He did find a sustainability page: Microsoft Sustainability

Step two: Digging

I confess I quailed at the prospect of trying to find things in the vast Microsoft empire, and it was because of this dread that I conceived the idea of asking the companies’ own AIs for help.

Step three: Reaching out

As we had already found evidence of commitment to sustainability I didn’t email them.

Step four: What the AI said

Copilot reports that Microsoft has commitments of various sorts on all three of the topics I asked about. The answers were vague and generic, but I did get a long list of places to look:

Microsoft commits to skilling one million people for digital skills through Artificial Intelligence skilling initiative in South Africa – Source EMEA

Microsoft 365 Copilot Skilling Center

Driving inclusion and accessibility with Microsoft 365 Copilot | The Microsoft Cloud Blog

Introducing Copilot in Microsoft Sustainability Manager – Microsoft Industry Blogs

Sustainability – Microsoft Adoption

META AI

Step one: Easy-to-find information

I completely forgot that Meta (known to plebs as Facebook / Instagram / WhatsApp) even had an AI tool when I gave the list of companies to Jack, so he didn’t do the initial search.

Step two: Digging

Hard as I tried, I could not find anything sensible on their CSR as it relates to AI.

Step three: Reaching out

I emailed them (and it was not easy to find any email address at all) on March 7. As of March 12, I have not had an answer.

Step four: What the AI said

Um – not much. It could not find any public statements, and referred me to the corporate website.

OPENAI

Step one: Easy-to-find information

Jack found this: OpenAI Charter and said: “Extremely vague as you can see, if there is anything in this website about the impact they have, I can’t find it.”

Step two: Digging

It was as he said – nothing obvious, anywhere.

Step three: Reaching out

I emailed them on March 7; no answer by March 12.

Step four: What the AI said

ChatGPT said: “OpenAI has addressed these topics in various public statements, blog posts, and research papers” and provided links to web pages that theoretically addressed all three topics. On job displacement, it suggested a blog post by Sam Altman from 2021, Moore’s Law for Everything. For accessibility of tools in the Global South, it provided a link which it said was a “blog on expanding global access” but which in fact went to an overview of its research. The same link was provided to answer the environmental question: Research | OpenAI. I had the distinct sense of being sent to pages that might, perhaps, possibly, have information. In other words, ChatGPT itself found very little evidence of CSR commitments.

ANTHROPIC – CLAUDE

Step one: Easy-to-find information

Three links came up:

Responsible Disclosure Policy \ Anthropic

Anthropic Trust Center

Newsroom \ Anthropic

Jack’s comment: “Again nothing about the topics you requested, or I can’t find them, but Anthropic is keen on user safety.”

Step two: Digging

I found an extensive page detailing Anthropic’s voluntary commitments: Anthropic’s Transparency Hub: Voluntary Commitments – they are wide-ranging and detailed, and have a lot of information about economic and social impact. I was not able to find any information on environmental impact.

Step three: Reaching out

I didn’t email Anthropic on the grounds that I had found substantial information already.

Step four: What the AI said

Claude was vague: “I don’t have complete, up-to-date information on Anthropic’s public statements regarding these specific issues. My knowledge cutoff is October 2024, and I don’t have access to search or a comprehensive database of Anthropic’s public statements.”

GROK (xAI)

Step one: Easy-to-find information

Again, this one wasn’t on my original list.

Step two: Digging

I could find nothing on any of the issues.

Step three: Reaching out

I emailed them on March 7; no answer by March 12.

Step four: What the AI said

Grok also had trouble finding anything: “xAI’s public statements are sparse and high-level, focusing on their mission to accelerate human discovery rather than grappling with downstream effects like job displacement, accessibility, or environmental impact… Compared to industry peers, they lag in transparency on these societal issues – a point critics might seize on, especially given Musk’s vocal presence elsewhere.”

PERPLEXITY

Step one: Easy-to-find information

Jack was short and to the point: “Nothing, or I couldn’t find it.”

Step two: Digging

I found a page addressing security (Your security is our top priority) and nothing else.

Step three: Reaching out

I emailed them on March 7; no answer by March 12.

Step four: What the AI said

On social disruption, the answer was: “While there is no specific mention of investing in reskilling programs, the company highlights the potential of AI to create new job opportunities that complement technological advancements.” It could find no specific public statements on accessibility and environmental impact. It did provide a list of 30 source links which I can supply if you are interested!

Conclusions

It’s important to note that there are underlying ownership issues to take into account. As an example, there’s a collaboration between Microsoft and OpenAI, in which OpenAI’s advanced models, such as GPT-3 and GPT-4, have been integrated into Microsoft’s cloud services and other products. Both Google and Amazon have shares in Anthropic (Google owns 14 percent of generative AI business Anthropic).

If we leave that to one side, and look at the trends emerging from my cursory look at the AI companies, these are the tentative conclusions.

If you are going to use an AI tool from one of the big legacy companies, your choice is between Google and Microsoft. Both have a range of more-or-less easy-to-find statements about their position on the issues that AI raises. Your position on this, I guess, would depend on which of them you think is less “evil”. (You might for instance have in mind Google’s recent abandonment of a long-standing commitment to not use artificial intelligence technology in weapons or surveillance – though read the article: other AI companies are in the toxic mess too.)

If you are going to use a new kid on the block, you should rule out xAI on the basis that it appears to have no public stance on CSR at all (not to mention its association with X, owned by Elon Musk). Perplexity is not far behind in opaqueness and ChatGPT is only marginally better. Anthropic alone seems to be making an attempt to address social and economic issues.

It appears that we are between a rock and a hard place – not one of the AI tools available to us all is covered in ethical glory. And yet to say we won’t use AI is like saying, all those centuries ago, that we won’t read books because we think wood engraving needs to be preserved as a way of transmitting information.

What I am going to do

For myself, I am glad to have ruled out at least some of the contenders. We have learned that any decision we make about which AI tools to use or not use is essentially a dip in very muddy waters.

Having digested everything I found, and seeing that Google and Anthropic are the least awful choices, I’m going to confine myself to using those two AI services for the next two weeks, and see if I can get the same amount of work done as I would if I did my usual thing, which is to hop from tool to tool.

Next up in the Sensible Woman’s Guide – I’ll report back on my Google/Anthropic experiment and give an overview of the tools mentioned above from the point of view of usefulness and pricing. I won’t include Meta AI or Grok in that: they have been ruled out, for now, as places I want to play.

USEFUL RESOURCE: Top LLM Companies: 10 Powerful Players in the Digital Market

And that’s it for this week. If there’s a question you’d like me to answer, or a topic you’d like covered, contact me here. I can’t promise to answer everything (especially deeply technical questions), but I can generally get us all pointed in the right direction.

OTHER THINGS I HAVE WRITTEN

How to get away from Google search – Searching for a better search engine? Here’s my rough-and-ready guide on how to get away from Google (which may now be evil).

What about the jobs? Artificial intelligence and social responsibility | Safe Hands – I use AI tools every day – and am starting to think about paying for one of them. Which leads me to wonder about artificial intelligence companies and social responsibility.

From babies to bees: five ways in which AI is good for the world | Safe Hands – The flood of information about artificial intelligence never seems to end, and so much of it is focused on the workplace. Here’s a list of ways in which AI is good for the world.

Why cultural competence is a thing when using AI tools | Safe Hands – Generative AI – famously and infamously – gets things wrong. How to deal with that when it comes to representing actual people?

Main picture: Kelly Sikkema, Unsplash

How can I help you make order from chaos? 

Join the waiting list for the Safe Hands AI Explorer learning circle!

Sign up for my Sensible Woman’s Guide to AI and Content Creation, which comes out fortnightly.

Or sign up for my personal fortnightly email newsletter In Safe Hands (it’s a place to pause and think about things).

Book a free half hour of my time, here. Find out how I can help with writing, editing, project management, general clear thinking, technical hand holding, or an introduction to AI.

Contact me via email.
