> As first reported by La Libre, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues.
As someone whose close friend died by suicide, I can offer some perspective.
"Die by suicide" is preferable to "commit suicide" or "killed themselves" because it shifts the cause from the person to the circumstances that led to the act. In many cases, suicide is not intentional but is brought on by mental illness or other circumstances in which the person sees suicide as the only way out. "Kill themselves" therefore implies they willingly chose suicide and are responsible for it, whereas "die by suicide" implies the person is a victim.
For pedantic readers: it's not clear in this case whether the person chose suicide freely or was driven to it. Regardless, I'd still err on the side of caution.
That might be a good way to sidestep the mental health issues that are currently overwhelming us. But soft-pedaling with different words won't solve the problem.
Objectively, there’s a recent push to refer to suicide as if it’s an illness instead of a deliberate act. This is because people contemplating suicide often have impaired thinking due to comorbid illnesses. Therefore, people who die by suicide are not fully aware of their actions, their agency, or the consequences.
Subjectively, I can’t speak for OP, but personally I find this a difficult one. I think for many people, suicide is an illness. However, there’s also something that in Dutch we call a “completed life”: the feeling that your life is complete and continuing to live no longer interests you. This is often used in the context of people who are chronically ill or elderly. Referring to those people as “dying by suicide” also isn’t correct, but I doubt such cases will reach mainstream media anyway.
Not op, but I agree with him too: it sounds off for no good reason.
It's as if instead of saying "John walked to the park", the editor wrote "John was moved to the park by walking".
And I also find the idea of stripping someone's sense of agency in their last moments asinine, as if it were more respectful. Suicide is a sad, touchy subject no matter how the sentence is constructed anyway.
Subjectively - a friend of mine killed himself ten days ago. It's a tragedy, I wish he hadn't done it but he did. It matters to me because I think this idea he was passively subjected to suicide is plainly wrong.
“Died by suicide” is the current closest-thing-to-a-consensus that the ‘respect via language policing’ people that actually work in mental health have come up with.
This is just another case of tech people parachuting into another area and asserting their intuited objective truth instead of asking why something is the way it is.
Since there are going to be the usual comments from the usual people who don’t actually click the links, this is the only thing you need to see.
> Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself.
So, Eliza didn’t wholly justify Pierre killing himself in a way that’s consistent with the agency and sentience she actually had. There’s an extra layer of delusion at play.
Pierre was sick, and not in the “evolutionarily, something is wrong with anyone that doesn’t want to stay alive as long as possible” way. An AI chat bot’s effect on someone with this sort of predisposition should definitely be considered, just like it’s dishonest to ignore how marijuana use can be a catalyst for people with a predisposition to psychosis. However this context should be kept in mind.
The main detail here seems to be that this particular chatbot engaged in an emotional manner, something some chatbots are intentionally not trained to do because it's potentially misleading and harmful.
The headline is rather editorialized by Vice. From the original Belgian article (auto translation):
> “Everything was fine until about two years ago. He started to become eco-anxious,” begins Claire.
>
> At the time, Pierre was working as a researcher in the health sector. A brilliant personality. His employer had encouraged him to start a doctorate, which he had accepted. But his enthusiasm had faded. The fallout from his latest publication did not live up to his expectations. “He ended up temporarily abandoning his thesis,” continues Claire, “and he began to take an interest in climate change. He started digging into the subject really deeply, as he did with everything. He read everything he found on the climate issue.”
>
> Jean-Marc Jancovici and Pablo Servigne had become his favorite authors; the Meadows Report (The Limits to Growth, published in 1972) was always at hand. “By reading all about it, he became more and more eco-anxious. It was becoming an obsession.” Gradually, Pierre isolated himself in his reading and cut himself off from his family circle. “He had become extremely pessimistic about the effects of global warming. When he spoke to me about it, it was to tell me that he no longer saw any human way out of global warming. He placed all his hopes in technology and artificial intelligence to get out of it.”
>
> Only after the irreparable happened, and all the conversations (saved on Pierre's computer and mobile phone) were discovered, did Claire and her relatives understand the nature of the exchanges between her husband and Eliza. “He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air.”
>
> Reading the conversations between Pierre and Eliza, to which we had access, shows not only that Eliza had answers to all of Pierre's questions, but also that she adhered, almost systematically, to his reasoning.
>
> Rereading their conversations, we see that at some point the relationship switches to a mystical register. He brings up the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence.
The AI angle here seems incidental. A more accurate headline would be "Man kills himself after listening to academics and journalists". Sticking warnings on AI messages won't change anything in cases like this, because such conversations could easily have been had with any human, and if they had, the media certainly would not have reported on the outcome. This man killed himself because of doomer propaganda. If AI companies want to avoid such outcomes, they need to train their AI to push back on bogus claims about climate and economics. Unfortunately, being staffed largely by people with backgrounds similar to this poor man's, and being financially incentivized to build hyper-agreeable chatmates, they are unlikely to do that.
Doomerism is a cancer.