The end-of-year Gen AI post: be afraid, be thoughtful

We are almost done with 2025. Time to reflect on a year of generative AI and the emerging dangers – but also to think about what we can do about them.

The very first edition of the Sensible Woman’s Guide to AI and Content Creation was published on December 5, 2024. That means I’ve been busy for a whole year, writing about my reflections and thoughts and hints and tips on Gen AI.

My AI journey goes further back than that, though. My first inkling that something was afoot came via the late, lamented Twitter, where I remember reading a post about a new tool that could generate content. The writer pointed out how wonderful it was that people whose written English was shaky could now write emails and text messages to their clients that would look and sound professional. He was not wrong, and he was talking about ChatGPT, the first iteration of which was made public in November 2022.

My own writing tells me that I was using the tool fairly extensively as early as February 2023, when I wrote this:

Perhaps it’s the circles I move in… it seems no matter what I do online, there’s going to be an email or an article or a social media post about ChatGPT.

There’s no doubt in my mind that the attention this bit of software is getting is deserved. ChatGPT is the understandable, usable face of something much bigger, the quiet revolution that machine learning and artificial intelligence have been creating while we were all thinking about something else (cat videos, for instance).

And what ChatGPT does is demonstrate that the world of “content” is dead.

That’s if you use the word “content” in the way that marketers and brands and companies and tech giants do: words or pictures or videos which are generated in order to serve the ends of our capitalist economies.

If ChatGPT can magisterially survey all the other content out there, and synthesise 500 words that would pass muster for a corporate blog, then the people who have been doing that up until now are out of a job.

Fear and learning

By early 2024, I began to wonder if I myself might soon be out of a job – if content creation is your game, then a new tech tool that can create content has to be taken seriously. Thinking that I needed to know the enemy, I joined an AI learning circle in March 2024, and still attend meetings religiously.

All the learning and thinking that I did there, and my own explorations, led me to think two things:

1. There was (and still is) an awful lot of hype and nonsense being bandied around about generative AI.

2. My own high-level skill is pattern recognition and synthesis, and my base state is pragmatic scepticism.

I wondered: might I be able to offer something sensible and helpful to other people? And so the Sensible Woman’s Guide was born.

What have I learned on my Gen AI journey?

Over the last year, I’ve offered some “how to” posts and some about the impact of AI on the world as we know it.

But I think it may be more useful to list, as briefly as I can, the more philosophical questions I’ve been pondering. I was pushed in that direction by watching this very long Diary of a CEO podcast: AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! – Tristan Harris, in which Harris, technology ethicist and co-founder of the Center for Humane Technology, cogently lays out the reasons why we need to think long and hard about the direction in which AI is taking us.

Essentially, he says that AI as it is currently being created poses a catastrophic, existential risk to humanity, driven by powerful corporate incentives and a lack of public awareness and democratic consent.

I urge you to ignore the clickbait YouTube title and find 2 hours and 20 minutes and watch the video. (Or, at the very least, this short version.)

Failing that, here’s my very brief outline of some of the arguments and ideas:

Major tech companies are not simply racing to build better chatbots; their mission is to build Artificial General Intelligence (AGI), which is described in the video as the creation of an instance of artificial intelligence that can replace all forms of human cognitive labour. It’s true that this could bring an explosion of scientific and technological development (a cure for cancer, for instance, or a world in which no one has to work).

But Harris and his interviewer Steven Bartlett cite private conversations with top leaders in the AI industry which indicate that in fact this is not the goal. Instead, the people in this race want godlike powers; they see AGI as an “infinite prize” like the Ring from Lord of the Rings. So we get a reckless, winner-takes-all attitude that prioritises speed over safety.

Harris says: “I have heard from one of the co-founders of one of the most powerful of these companies when faced with the idea that what if there’s a 20% chance that everybody dies and gets wiped out by this, but an 80% chance that we get utopia, he said, well, I would clearly accelerate and go for the utopia… It’s crazy. People should feel you do not get to make that choice on behalf of me and my family. We didn’t consent to have six people make that decision on behalf of eight billion people. We have to stop pretending that this is okay or normal. It’s not normal. And the only way that this is happening and they’re getting away with it is because most people just don’t really know what’s going on.”

And that utopia? Harris asks: “What is the incentive for the people who’ve consolidated all that wealth to redistribute it to everybody else? When has a small group of people concentrated all the wealth in the economy and ever consciously redistributed it to everybody else? When has that happened in history?”

(All of which explains the unease I feel whenever I read about AGI. My questions are always: Do we want that? Why do we want it?)

The path forward: Choosing a different future

Harris insists that the outcome is not inevitable – humanity has coordinated on existential threats before, such as the Nuclear Non-Proliferation Treaty and the Montreal Protocol (which put the ozone layer on the path to recovery), proving that global coordination is possible even amid rivalry.

But when asked what people can do in their homes to be part of a change, I felt that Harris was not all that helpful. He suggested that people should vote only for candidates who make AI a Tier 1 political issue, and that sharing the video widely would help (which I am doing here). But beyond that, there was nothing specific to help people navigate this new world we live in.

Nevertheless, I think his analysis of the situation needs to be taken seriously. It certainly had me thinking: should I just stop using these tools? Find a way to go live in an off-the-grid commune somewhere? Neither of those is a realistic option. But when I thought about it, I realised that all the writing and training I have been doing aligns with at least part of Harris’s mission, which is the call to help people understand what is happening.

So – here’s a summary of what I’ve been on about over the last year. I do think these points chart a human and ethical way to use AI, and they reflect the effort I’ve been making to help people understand the ramifications of generative AI.

My six ways to approach Gen AI (as of December 2025)

These were originally generated by NotebookLM, into which I put every single Sensible Woman column that I have written so far. Then I took the whole thing apart and put it together again.

They are phrased as “rules”, things I think people should be doing in their use of generative AI. If they sound bossy, pretend a man is writing them (then they’d just be authoritative):

Reclaim the tool-user mindset: Gen AI is a tool. We are makers who must bring our whole human selves, skills, and thought processes to the interaction. That often means being a leader or a manager (which many people hate) – we are the boss, Gen AI is the worker.

Universal scepticism and caution as a default mindset: Gen AI models are not human. They are prediction engines trained to produce plausible text; they have no built-in understanding of truth or facts, and they are capable of generating convincing falsehoods. We need to abandon automatic trust in the virtual world. We must adopt a journalistic mindset and ask “Really?” when we read AI outputs or look at pictures or videos we see online. And remember that our data security and privacy may be at risk. These tools are essentially “black boxes”, which lack transparency about data collection, processing, and sharing. The sensible approach is to treat AI tools as if they were social media: never share passwords, highly sensitive secrets, or proprietary company information, and generally confine shared information to what is non-personal and non-confidential.

Know the trade-offs you are making: Humans are “cognitive misers” who try to expend as little mental effort as possible, and AI makes it very easy to hand our thinking over to a machine (known as cognitive offloading). AI is super useful for managing information overload and speeding up mundane tasks, but you run the risk of diminishing your critical thinking and creativity. So – make conscious choices. Do the “hard work” yourself when your goal is to learn or develop your own thinking.

Be aware of bias: AI models are trained on massive datasets that are often imbued with patriarchal, colonial, classist, and Western-centred biases. Use iterative and intersectional prompting. This means looking at an AI output and then explicitly instructing it to examine issues through lenses like race, gender, class, and geography.

Ethical AI choices are complex consumer decisions: Try to assess companies based on their corporate social responsibility (CSR) statements regarding job displacement, for example. There’s also the question of the growing global energy demand from data centres. Research suggests, though, that worrying about the climate impact of text-based AI searches is wasted time compared to focusing on systemic energy transitions. Be intentional about when and why you use AI, and cut down on time spent on digital technologies generally (because they all go back to those water- and energy-hungry data centres).

Navigate by “aliveness” and valuing imperfection: A crucial way of thinking about how we want to live is the concept of aliveness – the state of being fully present. We should proactively and thoughtfully choose to integrate technology only if it seems likely to serve our highest values. Life’s meaning is in its inherent limitations, imperfections, and constraints, not in constantly optimising or escaping them through technology.

Finally, a way of thinking about this

NotebookLM came up with an analogy that works for me. The ethical landscape of choosing and using AI tools is like buying groceries from a supermarket chain. You wonder if the chain might have questionable sourcing or labour practices so you do the research to learn as much as you can about that. But you still need to eat. The answer isn’t to abandon the supermarket entirely, but rather to read the labels and choose the best available product from the least ethically opaque company. Consciously decide when to support a small, local farm (decentralised AI) or when to cook a difficult meal entirely from scratch (doing the mental work yourself) to preserve your skills and celebrate that aliveness. 

That’s the end of a long post! This is the last Sensible Woman column for the year. It’ll be back in mid-January when I look forward to reading the labels, cooking the meals from scratch and getting myself out into the non-digital world.

Main picture: Real life is imperfect. Isabella Fischer, Unsplash

Other things I have written

AI and humanity – a mission statement – I’ve been thinking about AI and humanity for a while. Here are three things I think about how we humans can use AI to enhance our lives.

Let’s go back to Gen AI basics – The Sensible Woman’s guide to AI has covered a lot of ground. Here, I make a list of Gen AI basic concepts…

How Gen AI reveals the mad, bad world we live in – The advent of a host of chatty bots has made clearer to us the outlines of the world we now live in. And the picture is not pretty.

ChatGPT has killed content, and that’s a good thing – A February 2024 post about the impact of generative AI on the world of content creation.

The real danger of AI (and it is not new) – There are two words in the phrase “artificial intelligence”, and only one of them poses the real dangers we face… (Written in May 2024.)

How can I help you make order from chaos? 

Join the Safe Hands AI Explorer learning circle!

Sign up for my Sensible Woman’s Guide to AI and Content Creation, which comes out fortnightly.

Or sign up for my personal fortnightly email newsletter In Safe Hands (it’s a place to pause and think about things).

Book a free half hour of my time here. Find out how I can help with writing, editing, project management, general clear thinking, technical hand holding, or an introduction to AI.

Contact me via email.
