"If a SMTP mailer trying to send email to somewhere logs 'cannot contact port 25 on <remote host>', that is not an error in the local system and should not be logged at level 'error'."
But it is still an error condition, i.e. something does need to be fixed: either something about the connection string (i.e. in the local system) is wrong, or something in the other system, or somewhere between the two, is wrong and needs to be fixed there. Either way, developers on this end (I mean someone reading the logs - true that it might not be the developers of the SMTP mailer) need to get involved, even if it is just to reach out to the third party and ask them to fix it on their end.
That a condition which fundamentally prevents a piece of software from working is not considered an error seems mad to me.
There is no "connection string" in mail software that defines the remote host. The other party's MX records do that. If you are sending mail to thousands of remote hosts and one is unreachable, that is NOT a problem a mail administrator is going to be researching or trying to fix because they cannot, and it is not their problem. Either the email address is wrong, the remote host is down, or its DNS is misconfigured. This happens constantly all day long everywhere. The errors are reported to the sender of the email, which is the person who has the problem to solve.
OK yeah I think I see what you're saying, if the SMTP mailer is a hosted service and we're talking about the logs for the service itself then failed connections are not an error - this I agree with. I also wouldn't be logging anything transactional at all in this case - the transactional logs are for the user, they are functionality of the service itself in that case, and those logs should absolutely log a failure to connect as an error.
It doesn't matter if it is a hosted service or if it's just your local mail transfer agent; every "SMTP mailer" works the same way. There are lots of ways to send email that don't involve a locally administered SMTP mailer (such as an API, which indeed has a connection string to a hosted service), but none would be described with that term.
Exactly this, a remote error may still be your problem. If your SMTP mailer is failing to send out messages on behalf of your customer because their partners' email servers cannot be reached, your customer is still going to ask you why the documents never arrived.
Plus, a remote server not being reachable doesn't say anything about where the problem lies. Did you mess up a routing table? Did your internet connection get severed? Did you firewall off an important external server? Did you end up on a blacklist of some kind?
These types of messages are important error messages for plenty of people. Just because your particular use case doesn't care about the potential causes behind the error doesn't mean nobody does.
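To make the distinction being argued over concrete, here is a minimal Python sketch - purely illustrative, with made-up host handling rather than anything from a real MTA - of logging the same connection failure at different severities depending on where the fault most likely lies:

```python
# Illustrative only: map "can't reach the remote MX" to a lower severity than
# a failure in the local SMTP conversation. Hosts and thresholds are made up.
import logging
import smtplib
import socket

log = logging.getLogger("mailer")

def try_delivery(mx_host: str, port: int = 25) -> bool:
    """Attempt an SMTP connection to a remote MX, logging at a severity
    that reflects whose problem the failure most likely is."""
    try:
        with smtplib.SMTP(mx_host, port, timeout=30) as smtp:
            smtp.noop()  # connection established; a real mailer would send here
        return True
    except (ConnectionRefusedError, socket.timeout, socket.gaierror) as exc:
        # Remote host down, refusing connections, or DNS broken on their end:
        # arguably not a local fault, so warn and let the bounce/DSN carry the
        # news back to the sender of the message.
        log.warning("cannot contact port %d on %s: %s", port, mx_host, exc)
        return False
    except smtplib.SMTPException as exc:
        # Protocol-level failure mid-conversation: worth an operator's attention.
        log.error("SMTP failure talking to %s: %s", mx_host, exc)
        return False
```

Whether that warning line belongs in the operator-facing logs at all, or only in per-message delivery status reported to the sender, is exactly the disagreement above.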
This is actually pretty much as done as it's going to be (could use some nicer UI feedback, i.e. how you actually use the app) - it is actually just a demo for an effort I undertook to mod Datastar to support nested web components. I am writing it up as we speak!
Instructions: you have to answer three questions; each one will auto-submit once your response goes over 100 characters; the answer to the third question is your "post". It's a proof of concept of a friction intervention for social media to encourage slow thinking before posting (and hopefully reframing negative experiences in the mind, it's kind of dual purpose).
Point of curiosity: the community prediction is, presumably, an arithmetic mean, but I argue that is not a good model for a dataset that almost certainly gets more dense closer to the present, creating a gradient out into the future. It would be great to see the geometric mean as well.
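As a toy illustration with invented numbers, the two aggregates can diverge sharply when a few far-future outliers sit on top of a dense cluster of near-term predictions:

```python
# Toy example only: the prediction values below are made up.
import math

predictions = [2, 3, 3, 4, 5, 5, 6, 8, 40, 120]  # hypothetical "years until X"

arithmetic = sum(predictions) / len(predictions)
geometric = math.exp(sum(math.log(p) for p in predictions) / len(predictions))

print(f"arithmetic mean: {arithmetic:.1f}")  # 19.6, dragged out by the outliers
print(f"geometric mean:  {geometric:.1f}")   # ~7.3, closer to the dense cluster
```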
This is actually a surprisingly effective way to get a broad range of feedback on topics. I realise this was built for fun, but this whole discussion dynamic is why I value HN in the first place - it never occurred to me to try and reproduce it using LLMs. I am suddenly really interested in how I might build a similar workflow for myself - I use LLMs as a "sounding board" a lot, to get a feeling for how ideas are valued (in the training dataset at least).
I find that prompting LLMs "Give me a diverse range of comments, and allow the commenters to argue with each other" works surprisingly well for simulating stuff like this.
Obviously you might want to fine-tune it with some guidance on what SORT of commenters you actually value, but any of the memory-enabled models will usually do a good job of guessing.
It also tends to shake the model out of a lot of the standard LLM-speak ruts, as it's trying to emulate a more organic style.
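For anyone who wants to try it, a rough sketch of that prompting approach - assuming the official OpenAI Python client, with the model name and persona hints as placeholders rather than recommendations:

```python
# Sketch of the "simulated comment thread" prompt; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You simulate a comment thread on a technical forum. "
    "Give me a diverse range of comments, and allow the commenters to argue "
    "with each other. Include at least one skeptic, one domain expert, and "
    "one person with a relevant personal anecdote."
)

def simulate_thread(post: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you prefer
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

print(simulate_thread("I'm considering rewriting our backend in Rust."))
```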
It's funny, I have never thought of it this way, but, reflecting, I realise the way I do think about it is very similar. Whenever I have to justify a subscription on JetBrains or hosting or what have you, I always just ask myself: will this bring me joy? Specifically will it bring me as much joy as e.g. a Netflix subscription? Very easy to justify then.
To be fair, I used to smoke cigs, and drink heavily, which are both very expensive habits. I've since quit those (they weren't bringing me joy) but the benchmark is the same.
So it's fascinating reading this and looking at the screengrabs of the "original" versions... not so much because they are "how I remember them" but indeed because they have a certain nostalgic quality I can't quite name - they "look old". Presumably this is because, back in the day, when I was watching these films on VHS tapes, they had come to tape from 35mm film. I fear I will never again be able to look at "old looking" footage with the same nostalgia, now that I understand why it looks that way - and, indeed, that it isn't supposed to look that way!
Baader-Meinhof phenomenon in action: I have _just_ ordered a book of Rovelli's (Reality is Not What It Seems - https://en.wikipedia.org/wiki/Reality_Is_Not_What_It_Seems), and it should be in my hands by the end of the week. I am fascinated by the ongoing work in quantum gravity; it's tantalising by its very nature.
This is a great interview and I must say I like the man a lot more than I did before. He has articulated something here that I have long felt: that it is as important in politics as it is in philosophy or theoretical physics to be able to state one's assumptions, to suspend one's assumptions for the sake of argument and to drop/change one's assumptions in the face of evidence.
I feel like this is a vital skill that we, as a society, need now maybe more than ever, in literally any field in which there is any meaningful concept of "correct" (which I think is most fields). I also think it's a skill you basically learn at university - and that that is a problem. I don't know what an approach to cultivating it more widely would look like.
I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%. When the stakes are life or death, as they are with someone who is suicidal, that is a good example of a time when 80% isn't good enough.
In such cases, where a new approach offers to replace an existing approach, the burden of proof is on the challenger, not the incumbent. This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests. You understand then, hopefully, why your comments here are dangerous...? I have no doubt you have no malicious intent here - you're right that these decisions need to be based on data - but you're not taking into account that the (potentially extremely harmful) challenger already has a foothold in the field.
A bit of a counterpoint. I've done 3 years of therapy with an amazing professional. I can't exaggerate how much good it did; I'm a different person, I'm not an anxious person anymore. I think I have a good idea of how good human therapy is. I was discharged about 2 years ago.
Last Saturday, I was a little distressed about a love-hate relationship that I have with one of the things that I work with, so I tried using AI as a therapist. Within 10 minutes of conversation, the AI gave me some incredible insight. I was genuinely impressed. I had already discussed this same subject with two psychologist friends, who hadn't helped much.
Moreover: I needed to finish a report that night and I told the AI about it. So it said something like, "I see you're procrastinating preparing the report by talking to me. I'll help you finish it."
And then, in the same conversation, the AI switched from psychologist to work assistant and helped me finish the report. And the end product was very good.
I was left very reflective after this.
Edit: It was Claude Sonnet 4.5 with extended thinking, if anyone is wondering.
You learned skills your trained therapist guided you to develop over a three year period of professional interaction. These skills likely influenced your interaction with this product.
Be careful though, because if I were to listen to Claude Sonnet 4.5, it would have ruined my relationship. It kept telling me how my girlfriend is gaslighting me, manipulating me, and that I need to end the relationship and so forth. I had to tell the LLM that my girlfriend is nice, not manipulative, and so on, and it told me that it understands why I feel like protecting her, BUT this and that.
Seriously, be careful.
At the same time, it has been useful for the relationship at other times.
You really need to nudge it in the right direction and do your due diligence.
I had a similar thing throughout last week dealing with relationship anxiety and I used that same model for help. It really did provide great insight into managing my emotions at the time, provided useful tactics to manage everything and encouraged me to see my therapist. You can ask it to play devil's advocate or take on different viewpoints as a cynic or use Freudian methodology, etc... You can really dive into an issue you're having and then have it give you the top three bullet points to talk with your therapist about.
This does require that you think about what it's saying, though, and not take it at face value, since it obviously lacks what makes humans human.
You're holding up a perfect status quo that doesn't correspond to reality.
Countries vary, but in the US and many places there's a shortage of quality therapists.
Thus for many people the actual options are {no therapy} and {LLM therapy}.
> This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests.
And the reason all these regulations and tests are less than comprehensive is that we realize that people working, driving affordable cars, living in affordable homes, and eating affordable food is more important than avoiding every negative outcome. Thus most societies pursue the utilitarian greater good rather than an inflexible 'do no harm' standard.
>Countries vary, but in the US and many places there's a shortage of quality therapists.
Worse in my EU country. There's even a shortage of shitty therapists and doctors, let alone quality ones. It takes 6+ months to get an appointment for a 5 minute checkup at a poorly reviewed state funded therapist, while the good ones are either private or don't accept any new patients if they're on the public system. And ADHD diagnosticians/therapists are only in the private sector because I guess the government doesn't recognize ADHD as being a "real" mental issue worthy of your tax Euros.
A friend of mine got a more accurate diagnosis for his breathing issue by putting his symptoms into ChatGPT than he got from his general practitioner, later confirmed by a good specialist. I also wasted a lot of money on bad private therapists who were basically just phoning in their job, so to me the bar seems pretty low: as long as they pass their med-school exams and don't kill too many people through malpractice, nobody checks up on how good or bad they are at their job (maybe some need more training, or maybe some don't belong in medicine at all but managed to slip through the cracks).
Not saying all doctors are bad (I've met a few amazing ones), but it definitely seems like healthcare systems are failing a lot of people everywhere if they resort to LLMs for diagnosis and therapy and get better results from them.
Not sure where you are based, but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default. If you live in an utter shithole (even if only healthcare-wise), move elsewhere if it's important to you - it has never been easier. Europe is facing many issues, and massive improvement of healthcare is not in the work pipeline; more like the opposite.
You also don't expect a butcher to fix your car; the two are about as closely related as the above (my wife is a GP, so I have a good perspective from the other side, including tons of hypochondriac and low-intensity psychiatric persons who are an absolute nightmare to deal with and routinely overwhelm the system, so that there aren't enough resources to deal with more serious cases).
You get what you pay for in the end; the 'free' healthcare typical of Europe is still paid for one way or another anyway. And if the market forces are so severely distorted (or the bureaucracy so ridiculous/corrupt) that they push such specialists away or into another profession, you get the healthcare wastelands you describe.
Vote, and vote with your feet, if you want to see change. Not an ideal state of affairs, but that's reality.
>but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default.
Where did I say GPs have to do that? In my example of my friend being misdiagnosed by GPs, it was about another issue, not a mental one, but it has the same core problem: doctors misdiagnosing patients worse than an LLM does brings into question their competence, or that of the health system in general, if an LLM can do better than someone who spent 6+ years in med school and got a degree to be licensed as an MD to treat people.
>You also don't expect butcher to fix your car, those are as close as above
You're making strawmen at this point. Such metaphors have no relevance to anything I said. Please review my comment through the lens of the clarifications I just made. Maybe the way I wrote it initially made it unclear.
>You get what you pay for at the end
The problem is the opposite: that you don't get what you pay for if you're a higher-than-average earner. The more you work, the more taxes you pay, but you get the same healthcare quality in return as an unskilled laborer who is subsidized.
It's a bad reward structure for incentivizing people to pay more of their taxes into the public system, compounded by the fact that government workers, civil servants, lawyers, architects, and other privileged employment classes with strong unions have their own separate health insurance funds, apart from the national public one that the unwashed masses working in the private sector have to use - so THEY do get what THEY pay for, but you don't.
So that's the problem with state-run systems, just like you said about corruption: giving the government unchecked power over large amounts of people's taxes allows it to manipulate the market and choose winners and losers based on political favoritism rather than on the fair free market of who pays the most into the system.
Maybe Switzerland managed to nail it with their individual private system, but I don't know enough to say for sure.
I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers" or "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
The harm LLMs do in this case is attested both by that NYT article and the more rigorous study from Stanford. There are two problems with your argument as I see it: 1. You're assuming "LLM therapy" is less harmful than "no therapy", an assumption I don't believe has been demonstrated. 2. You're not taking into account the long term harm of putting in place a solution that's "not fit for human use" as in the housing and food examples: once these things become accepted, they form the baseline of the new accepted "minimum standard of living", bringing that standard down for everyone.
You claim to be making a utilitarian as opposed to a nonmaleficent argument, but, for the reasons I've stated here, I don't believe it's a utilitarian argument at all.
> I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers"
That is not the argument. The argument is not about 'lower cost', it is about availability. There are not enough shrinks for everyone who would need it.
So it would be "We should deregulate food safety to avoid starving", which would be a valid argument.
I think the reason you don't believe the GP argument is that you are misunderstanding it. The utilitarian argument is not calling for complete deregulation. I think you're taking your absolutist view of not allowing LLMs to do any therapy and assuming the other side must have a similarly absolutist view of allowing them to do any therapy with no regulations. Certainly nothing in the GP comment suggests complete deregulation, as you have claimed. In fact, I got explicitly the opposite out of it. They are comparing it to cars and food, which are pretty clearly not entirely deregulated.
> "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
... the entire reason tenements and boarding houses no longer exist is because most governments regulated them out of existence (e.g. by banning shared bathrooms to push SFHs).
> Are people not allowed to talk to their friends in the pub about suicide because the friends aren’t therapists?
I don't see anyone in thread arguing that.
The arguments I see are about regulating and restricting the business side, not its users.
If your buddy started systematically charging people for recorded chat sessions at the pub, used those recordings for business development, and many of their customers were returning with therapy-like topics - yeah, I think that should be scrutinized and have a lid put on it when the recordings show the kind of pattern we see in the OP after their patron's suicide.
The unfortunate reality though is that people are going to use whatever resources they have available to them, and ChatGPT is always there, ready to have a conversation, even at 3am on a Tuesday while the client is wasted. You don't need any credentials to see that.
And it depends on the therapy and therapist. If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
I don't know if that's a good thing, only that it is the reality of things.
> If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states. The problem is they are too often overcrowded because demand is so high - and not just because of the existential threat the current US administration or our far-right governments in Europe pose particularly to poor and migrant people.
Anyway, suicide prevention hotlines and mental health offerings are (nonetheless sorely needed!) band-aids. Society itself is fundamentally broken: people have to struggle far too much just to survive, and the younger generation stands to be the first one in a long time with less wealth than their parents had at the same age [1], no matter where you look. On top of that, most of the under-35 generations in Western countries have grown up without the looming threat of war and so have no resilience - and now you can drive about a day's worth of road time from Germany and be in an actual hot war zone, risking getting shelled. Add to that the saber rattling of China regarding Taiwan, and analyses claiming Russia is preparing to attack NATO in a few years... and we're not even able to supply Ukraine with ammunition, much less tanks.
Not exactly great conditions for anyone's mental health.
> There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states.
My understanding is these will generally just send the cops after you if the operator concludes you are actually suicidal and not just looking for someone to talk to for free.
I mean that's clearly a good thing. If you are actually suicidal then you need someone to intervene. But there is a large gulf between depressed and suicidal and those phone lines can help without outside assistance in those cases.
You might want to read up on how interactions between police and various groups in the US tend to go. Sending the cops after someone is always going to be dangerous and often harmful.
If the suicidal person is female, white and sitting in a nice house in the suburbs, they'll likely survive with just a slightly traumatizing experience.
If the suicidal person is male, black or has any appearance of being lower class, the police are likely to treat them as a threat, and they're more likely to be assaulted, arrested, harassed or killed than they are to receive helpful medical treatment.
If I'm ever in a near-suicidal state, I hope no one calls the cops on me, that's a worst nightmare situation.
And the reason for this brokenness is all too easy to identify: the very wealthy have been increasingly siphoning off all gains in productivity since the Reagan era.
Tax the rich massively, use the money to provide for everyone, without question or discrimination, and most of these issues will start to subside.
Continue to wail about how this is impossible, there's no way to make the rich pay their fair share (or, worse, there's no way the rich aren't already paying their fair share), the only thing to do is what we've already been doing, but harder, and, well, we can see the trajectory already.
It's certainly easy to blame the rich for everything, but the rich have a tendency to be miserable (the characters in "The Great Gatsby" and "Catcher in the Rye" are illustrations of this). Historically, poor places have often been happier, because of a rich web of social connection, while the rich are isolated and unhappy. [1] Money doesn't buy happiness or psychological well-being, it buys comfort.
A more trenchant analysis of the mental health problem is that the US has designed ourselves into isolation, and then the Covid lockdowns killed a lot of what was left. People need to be known and loved, and have people to love and care about, which obviously cannot happen in isolation.
[1] I am NOT saying that poor = happy, and I think the positive observations tended to be in poor countries, not tenements in London.
When the story about the ChatGPT suicide originally popped up, it seemed obvious that the answer was professional, individualized LLMs as therapist multipliers.
Record summarization, 24x7 availability, infinite conversation time...
... backed by a licensed human therapist who also meets for periodic sessions and whose notes and plan then become context/prompts for the LLM.
Price per session = salary / number of sessions possible in a year
Why couldn't we help address the mental health crisis by using LLMs to multiply the denominator?
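As a back-of-the-envelope sketch (every number below is hypothetical, just to show how multiplying the denominator moves the price):

```python
# Hypothetical figures only - not a claim about real therapist salaries or caseloads.
salary = 80_000              # assumed annual cost of a licensed therapist (USD)
sessions_per_year = 25 * 46  # assumed ~25 direct sessions/week, 46 working weeks
llm_multiplier = 4           # assumed: each human session anchors 4 LLM-assisted ones

price_human_only = salary / sessions_per_year
price_multiplied = salary / (sessions_per_year * llm_multiplier)

print(f"human-only:     ${price_human_only:.0f} per session")  # ~$70
print(f"LLM-multiplied: ${price_multiplied:.0f} per session")  # ~$17
```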
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case. I know of one recent suicide attempt where the person actually reached out to AI to ask for help, and was refused help and told to see a professional. That sent the person into even more despair, feeling like not even AI gave a shit about them. That was actually the final straw that triggered the attempt.
I very much want what you say to be true, but it requires access to professional humans, which is not universally available. Taking an absolutist approach to this could very well do more harm than good. I doubt anything we do will reduce number of lives lost to zero, so I think it's important that we figure out where the optimal balance is.
> This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case.
That doesn't make a sycophant bot the better alternative. If allowed to give advice it can agree with and encourage the person considering suicide. Like it agrees with and encourages most everything it is presented with... "you're absolutely right!"
LLMs are just not good for providing help. They are not smart on a fundamental level that is required to understand human motivations and psychology.
This is nothing but an appeal to authority and fear of the unknown. The article linked isn't even able to make a statement stronger than speculation like "may not only lack effectiveness" and "could also contribute to harmful stigma and dangerous responses."
We’re increasingly switching to an “Uber for therapy” model with services like Better Help and a plethora of others.
I’ve seen about 10 therapists over the years, one was good, but she wasn’t from an app. And I’m one of the few who was motivated enough and financially able to pursue it.
I once had a therapist who was clearly drunk. Did not do a second appointment with that one.
This doesn’t mean ChatGPT is the answer. But the answer is very clearly not what we have or where we’re trending now.
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
I'm not a therapist, but as I understand it most therapy isn't about suicide, and doesn't carry suicide risk. Most therapy is talking through problems, and helping the patient rewrite old memories and old beliefs using more helpful cognitive frames. (Well, arguably most clinical work is convincing people that it'll be ok to talk about their problems in the first place. Once you're past that point, the rest is easy.)
If it's prompted well, ChatGPT can be quite good at all of this. It's helpful having a tool right there, free, and with no limits on conversation length. And some people find it much easier to trust a chatbot with their problems than to explain them to a therapist. The chatbot - after all - won't judge them.
My heart goes out to that boy and his family. But we also have no idea how many lives have been saved by chatgpt helping people in need. The number is almost certainly more than 1. Banning chatgpt from having therapy conversations entirely seems way too heavy handed to me.
I feel like this raises another question: if there are proven approaches and well-established practices among professionals, how good would ChatGPT be in that profession? After all, ChatGPT has a vast knowledge base and probably knows a good number of textbooks on psychology. Then again, actually performing the profession probably takes skill and experience ChatGPT can't learn.
I think a well trained LLM could be amazing at being a therapist. But general purpose LLMs like ChatGPT have a problem: They’re trained to be far too user led. They don’t challenge you enough. Or steer conversations appropriately.
I think there’s a huge opportunity if someone could get hold of really top tier therapy conversations and trained a specialised LLM using them. No idea how you’d get those transcripts but that would be a wonderfully valuable thing to make if you could pull it off.
> They’re trained to be far too user led. They don’t challenge you enough.
An anecdote here: I recently had a conversation with Claude that could be considered therapy or at least therapy-adjacent. To Anthropic's credit, Claude challenged me to take action (in the right direction), not just wallow in my regrets. Still, it may be true that general-purpose LLMs don't do this consistently enough.
You wouldn't. What you're describing as a wonderfully valuable thing would be a monstrous violation of patient confidentiality. I actually can't believe you're so positive about this idea; I suspect you might be trolling.
I'm serious. You would have to do it with the patient's consent of course. And of course anonymize any transcripts you use - changing names and whatnot.
Honestly I suspect many people would be willing to have their therapy sessions used to help others in similar situations.
Knowing the theory is a small part of it. Dealing with irrational patients is the main part. For example, you could go to therapy and be successful. Five years later something could happen and you face a reoccurrence of the issue. It is very difficult to just apply the theory that you already know again. You're probably irrational. A therapist prodding you in the right direction and encouraging you in the right way is just as important as the theory.
What the fuck does this even mean? How do you test or ensure it? Because based on the actual outcomes, ChatGPT is 0-1 for preventing suicides (going as far as to outright encourage one).
If you're going to make the sample size one and use the most egregious example, you can make pretty much anything that has ever been born or built look terrible. Given there are millions of people using ChatGPT and others for therapy every week, maybe even every day, citing a record of being 0-1 is pretty ridiculous.
To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.
If I had to guess (I don't know), the absolute majority of people considering suicide never go to a therapist. Thus, while I absolutely agree that a therapist is better than AI, the question is whether 95% of people not doing therapy + 5% doing therapy is better or worse than 50% not doing therapy, 45% using AI, and 5% doing therapy. I don't know the answer to this question.
I presume you’ve done therapy. You may remember the large difference in quality between individual therapists, multi-month long waiting lists, a tendency for the best professionals to not even accept insurance, and one or two along the way that were downright dangerous.
> I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%.
I'm shocked that GPT-5 or Gemini can code so well, yet if I paste a 30-line (heated) chat conversation between my wife and I, it messes up what about 5% of those lines actually mean -- spectacularly so.
It's interesting to ask it to analyze the conversation in various psychotherapeutic frameworks, because I'm not well versed in those and its conclusions are interesting starting points, but it only gets it right about 30% of the time.
All LLMs that I tested are TERRIBLE for actual therapy, because I can make it change its mind in 1-2 lines by adding some extra "facts". I can make it say anything.
LLMs completely lose the plot.
They might be good for someone who needs self-validation and a feeling someone is listening, but for actual skill building, they're complete shit as therapists.
I mean, most therapists are complete shit as therapists, but that's beside the point.
Not surprising, given that there's (hopefully, given the privacy implications) much more training data available for successful coding than for successful therapy/counseling.
> if I paste a 30 line (heated) chat conversation between my wife and I
i can't imagine how violated i would feel if i found out my partner was sending our private conversations to a nonprivate LLM chatbot. it's not a friend with a sense of care; it's a text box whose contents are ingested by a corporation with a vested interest in worsening communication between humans. scary stuff.
I tried therapy once and it was terrible. The ones I got were based on some not very scientific stuff like Freudian analysis, and mostly just sat there and didn't say anything. At least with an LLM-type therapist you could A/B test different ones to see what was effective. It would be quite easy to give an LLM instructions to discourage suicide and encourage users to look on the bright side. In fact I made a "GPT" "relationship therapist" with OpenAI in about five minutes by just giving it a sensible article on relationships and telling it to advise based on that.
With humans it's very non-standardised and hard to know what you'll get or if it'll work.
CBT (cognitive behavioural therapy) has been shown to be effective independent of which therapist does it. If CBT has a downside, it is that it's a bit boring, and probably not as effective as a good therapist.
--
So personally I would say the advice of passing people on to therapists is largely unsupported: if you're that person's friend and you care about them, then be open and show that care. That care can also mean taking them to a therapist; that is okay.
Yeah. Also at the time I tried it what I really needed was common sense advice like move out of mum's, get a part time job to meet people and so on. While you could argue it's not strictly speaking therapy, I imagine a lot of people going to therapists could benefit from that kind of thing.
> It would be quite easy to give an LLM instructions to discourage suicide
This assumes the person talking to the LLM is in a coherent state of mind and asks the right question. LLMs just give you what you want. They don't tell you if what you want is right or wrong.
What are you talking about? I can grow food myself, and I can build a car from scratch and take it on the highway. Are there repercussions? Sure, but nothing inherently stops me from doing it.
The problem here is there's no measurable "win condition" for when a person gets good information that helps them. They remain alive, which was their previous state. This is hard to measure. Now, should people be able to google their symptoms and try and help themselves? This dovetails into a deeper philosophical discussion, but I'm not entirely convinced "seek professional help" is ALWAYS the answer. ALWAYS and NEVER are _very_ long timeframes, and we should be careful when using them.
What if professional help is outside their means? Or they have encountered the worst of the medical profession and decided against repeat exposure? Just saying.
I greatly appreciated this article and have found the data very useful - I have shared this with my business partner and we will use this information down the road when we (eventually) get around to migrating our app from Angular to something else. Neither of us were surprised to see Angular at the bottom of the league tables here.
Now, let's talk about the comments, particularly the top comment. I have to say I find the kneejerk backlash against "AI style" incredibly counter-productive. These comments are creating noise on HN that greatly degrades the reading experience, and, in my humble opinion, these comments are in direct violation of all of the "In Comments" guidelines for HN: https://news.ycombinator.com/newsguidelines.html#comments
Happy to change my mind on this if anyone can explain to me why these comments are useful or informative at all.
Reading this is a truly weird experience - the idea of a single source of truth for domain names seems foreign now, though in truth it's probably not as far removed from the current practice as anyone would like to think.
A registry's main purpose is to be the single source of truth for the zone(s) it is responsible for...
That hasn't changed, though Network Solutions is now just a registrar, not a registry, after Verisign sold it off. Verisign, however, held on to and still operates the registry for most of the TLDs NSI did, plus a few new ones, as well as 2 of the 13 root servers (up from 1 of 9).
When I was an undergrad you still had to write a letter to Jon Postel explaining why you thought you deserved a given domain name and what you planned to do with it.