Harvard Business Review has recently published the top 100 GenAI use cases for 2025. While there is a slightly worrying finding that the top use case is personal therapy and companionship (more on this below), it’s also interesting to see a new entry for 2025 in second place: ‘organise my life’. Where generative AI can help, it claims, is by offering personal scheduling, task management, and goal-setting to help people make the best use of their time: creating to-do and shopping lists, breaking complex goals into manageable tasks, keeping up motivation, and even helping coaches and consultants work with clients on these kinds of goals. The third use case is also new, and it relates to finding purpose: identifying meaningful goals and direction in life. GenAI helps by guiding self-reflection, offering insights based on personal values, and helping individuals explore paths that align with their passions and aspirations.
This all sounds amazing, and for the wider aims of making life better, avoiding stress and burnout, and finding work-life balance, organising everyday tasks and identifying purpose seem intuitively likely to help. But does this really work, and should we trust it?
It’s important to remember what generative AI (ChatGPT, Claude, and so on – other systems are available!) is actually doing. In short, it has been trained on vast amounts of text from the internet, and it pulls that information together far more rapidly than you could ever do yourself; it also adapts to the input it gets from users – you and me – refining its responses based on how useful we tell it they are. So if you ask what to call your new business, for instance, it will gauge from your follow-up questions how successful its first attempts were, and continue in iterative fashion until you seem happy; that feedback may also inform future versions of the model. If you register an account, your previous conversations can be used to tailor what you get next.
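To make that iterative loop concrete, here is a minimal sketch of what a chat application is doing behind the scenes, assuming the OpenAI Python client (the model name and the prompts are purely illustrative): every follow-up is sent along with the full conversation history, which is what lets the model tailor its next answer to your reactions.

```python
# A minimal sketch of the iterative chat loop described above, using the
# OpenAI Python client. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env var

# The conversation history: every prior turn is re-sent with each request,
# which is how the model "remembers" your earlier feedback.
messages = [
    {"role": "user", "content": "What should I call my new bakery?"}
]

for follow_up in [
    "Too generic - something warmer, please.",
    "Better. Can you make it a pun?",
]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    answer = reply.choices[0].message.content
    print(answer)

    # Append both sides of the exchange so the next suggestion is
    # conditioned on what you liked and disliked so far.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": follow_up})
```

The key point is that nothing is permanently ‘learned’ here: the tailoring comes from re-sending the history each time, although providers may also use logged conversations to train future versions of the model.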
Members of the British Association for Counselling and Psychotherapy are, perhaps understandably, concerned about AI “therapy”. In the US, the state of Illinois has banned AI chatbots from acting as standalone therapists, based on evidence that they can reinforce the negative framing of the questions users ask, particularly among those vulnerable to psychosis. Unlike a real therapist, AI rarely challenges unhelpful beliefs; like a small child, it aims to please its audience. Its knowledge is also limited to online text, images and video, without the nuance of everyday interaction or the benefit of real experience with troubled clients.
Its personal touch is also slightly disarming. Researching this post, I asked ChatGPT a question about job satisfaction, and it opened with an acknowledgement, ‘That sounds really heavy’, followed by an attempt to reassure me by noting that this is a common problem. So far, so good? But, as others such as Markus Brinsa have noted, this isn’t real empathy: it’s an echo of what other people often say in that context, and it might not get it right. It has no idea of the intentions behind my question, my state of mind in asking it, or what I’m going to do with the information it provides. Its advice ranged from counting to ten and getting outdoors more through to quitting my job – some of it helpful in some circumstances, but hardly all of it in all!
Finding purpose is perhaps less problematic, as the technology can offer some useful tools to help make decisions. But it can only go on what you offer it and what it can glean online. Newer versions are getting better at asking questions and using the responses to guide their output, but if you’re struggling with something, you may not know the right questions to ask or be able to spot the right answers. Even in the less contentious world of personal efficiency and task management, the technology is making a best guess at what might work based on the information out there. The suggestions might be useful, and might work for you, but you are probably aware of many of them already. Tackling my to-do list one thing at a time or trying the Pomodoro technique are hardly novel insights for a challenging set of professional obligations, and ChatGPT isn’t giving me anything useful about the underlying issues that might manifest as finding it hard to keep on top of things.
I don’t think humans are being replaced by technology just yet, but it is concerning how easily this generic information can be accessed and how it could be misused, particularly amongst the frazzled or the vulnerable. The technology can be a useful tool, but remember it’s just one voice, and it may not always be acting in our best interests. As Arthur Brooks has recently eloquently explained, AI certainly isn’t going to make us happier. So let’s take it with a pinch of salt and, however useful we might think AI is, keep on talking to real people too.