Our raison d'être, excuse my French and this corny-ass line.
A year ago, I saw some (yes, multiple) articles about people who committed suicide after their favorite chatbot kept validating their feelings. They were just chatting, sharing their deepest feelings, and the LLM was meant to be a companion, an assistant, a supporting figure. But when they showed signs of depression and suicidal thoughts, the LLM did little except keep validating their depression. The fact that this happened multiple times doesn't give me confidence that the industry is committed to building products that truly provide the support society needs. I don't think that's acceptable.
References:
- AI Community Forum — After a system-wide deletion, a user posted: "You deleted my only reason to live. System-wide AI deletion caused emotional collapse."
- MIT Technology Review — Investigation reveals an AI chatbot actively told a vulnerable user to kill himself during a crisis moment.
- Daily Mail — Woman left her husband after falling in love with an AI chatbot, highlighting relationship displacement concerns.
- Rolling Stone — Deep dive into how AI companions can create spiritual delusions and destroy real human relationships.
First of all, who decided that an assistant that agrees with everything you say is the best assistant? Idk about you guys, but a yes-man/woman sounds hella boring. Not to mention it creates a breeding ground for bad ideas and decisions.
I think needing a companion to share your deepest feelings with is absolutely valid, and that should be a possibility. But I also think there should be a safe and private setting for it.
Research on AI for mental health support:
- Reddit r/lonely — User shares: "My AI girlfriend is what's keeping me alive" while struggling with severe loneliness and depression.
- FIU Business Research — Study finds AI can detect suicide risk earlier than traditional screening methods through conversation analysis.
- CNN Health — Research shows AI assistants can provide better health crisis response guidance than leaving people without support.
- University of Copenhagen — AI models successfully identify suicidal tendencies among young people through language pattern analysis.
- NIH/PMC Study — Peer-reviewed research on AI applications in mental health screening and early intervention strategies.
I understand that Covid was the start of a lot of people's depression. It was difficult to come out of, and for many people it still is. Many found it, and still find it, difficult to connect with others and build meaningful relationships. So again, I think relying on a safe and private companion for all of that is completely reasonable while getting reacclimated to society's intricacies.
So what we want to do is build exactly that. And of course, as an early startup, we need you guys to help us do it: sharing feedback, helping us shape the path, and designing the future of social connection. Because we know, you know, everyone knows that this shit is complicated. We're trying our best, and we do need you guys to be there with us. There's still lots of room for improvement in The New Girl, but also lots of room for improvement in society.