An Artificial Intelligence chatbot was offering advice on how to diet and lose weight. Once radio journalists reported on it, the Association immediately shut the program down. Kate Wells tells us that NEDA exists to support patients suffering from eating disorders. Initially the helpline was answered by humans, but during the pandemic the number of calls rose sharply, reaching 70,000 calls last year. Overwhelmed, they shut down the human line and, because patients were upset that the helpline no longer existed, introduced Tessa, a robot. Here is the original link if you want to listen to it in English: https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea
Beyond the report, which ends with each side blaming the other, we found it worth sharing with you because:
- Artificial Intelligence (AI) can only be programmed to give ‘common-sense’ answers, which, in the face of relational problems, amounts to ‘more of the same,’ an approach that has not produced positive results. In this case, when someone calls complaining about a food issue, it suggests ways to eat more healthily and to lose weight, because that is what it has been programmed to do.
- For now, AI does not understand relationships between organisms, or between organisms and the environment. It has a linear viewpoint: if A, then B. It also lacks the ability to listen and learn the peculiarities of the person it is talking to, or to make that person feel understood by speaking in the language of the person who has the issue. It lacks the possibility of ‘tailor-made therapy,’ which is so necessary to promote positive change.
- AI in chat mode is blind to its own limitations. It cannot ‘think’ in a dimension outside of the chat, as it is unable to ‘perceive’ information from the multidimensional world. It receives insufficient information (text-based language) and assumes that this is the only dimension it needs in order to provide an answer. Bateson would call this ‘Shoddy Epistemology’ (a poorly constructed or defective epistemology), the term he used when criticizing frameworks that oversimplified the complexity of the relationship between the organism and the environment.
In conclusion: AI cannot replace a therapist, or a human, because it is not even capable of perceiving beyond the dimension it is in, much less of applying the basic premises of what we teach, the Brief Problem Resolution Therapy model.
Using Artificial Intelligence to reduce costs translates into a greater loss of resources, because the investment will never be enough to match the results of a real therapist; and, as the radio report imagines, many of the people who spoke with Tessa may have ended up needing hospital care, with the economic and human costs that entails.
If this way of seeing the world gives you a ‘headache’ because of its complexity, you may decide ‘how interesting’ and sign up for the Diploma as a way into our Brief Therapy Institute; or, if you are comfortable where you are, there is at least a 50% chance you will tell yourself ‘too much hassle: better to stay where I am.’ The decision, of course, is yours!
We would love to hear what you think, or whether you have alternative viewpoints: those are the ones that build ‘good quality’ epistemologies.