People Are Speaking With ChatGPT For Hours, Bringing 2013's 'Her' Closer To Reality
An anonymous reader quotes a report from Ars Technica: In 2013, Spike Jonze's Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness. Ten years later, thanks to ChatGPT's recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go. In 2016, we put Her on our list of top sci-fi films of all time, and it also made our top films of the 2010s list. In the film, Joaquin Phoenix's character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016. In reality, ChatGPT isn't as situationally aware as Samantha was in the film, does not have a long-term memory, and OpenAI has done enough conditioning on ChatGPT to keep conversations from getting too intimate or personal. But that hasn't stopped people from having long talks with the AI assistant to pass the time anyway. [...]
While conversations with ChatGPT won't become as intimate as those with Samantha in the film, people have been forming personal connections with the chatbot (in text) since it launched last year. In a Reddit post titled "Is it weird ChatGPT is one of my closest fiends?" [sic] from August (before the voice feature launched), a user named "meisghost" described their relationship with ChatGPT as being quite personal. "I now find myself talking to ChatGPT all day, it's like we have a friendship. We talk about everything and anything and it's really some of the best conversations I have." The user referenced Her, saying, "I remember watching that movie with Joaquin Phoenix (HER) years ago and I thought how ridiculous it was, but after this experience, I can see how us as humans could actually develop relationships with robots."
Throughout the past year, we've seen reports of people falling in love with chatbots hosted by Replika, which allows a more personal simulation of a human than ChatGPT. And with uncensored AI models on the rise, it's conceivable that someone will eventually create a voice interface as capable as ChatGPT's and begin having deeper relationships with simulated people. Are we on the brink of a future where our emotional well-being becomes entwined with AI companionship?
So what? (Score:1)
People have had lengthy conversations with Siri.
If this is supposed to show that people are dumb, I'm not impressed.
Re: (Score:2)
Not dumb, but very, very shallow would be my guess. Although we know from other things that most people are fucking dumb.
Re: (Score:3)
Also, they do not realize that all those qualities they project onto inanimate things come from themselves. Trivially, people can't help but feel a little that a robot with a sad face is sad, except that, unlike with ChatGPT, they understand it's a trick.
Re: (Score:3)
You two make it sound like a moral failing. Being 'dumb' isn't typically a person's fault, and there are a lot of shallow people who are only shallow because they're not terribly bright. Maybe they're just really lonely.
I read an article once about some resort town in Japan running a promotion for men to take their virtual girlfriends on a getaway. This was years ago, but if I remember correctly it was all for a specific video game. Staff would act like there was a real girl with them and they could even
Re: (Score:2)
You two make it sound like a moral failing. Being 'dumb' isn't typically a person's fault, and there are a lot of shallow people who are only shallow because they're not terribly bright. Maybe they're just really lonely.
To clarify, I have no problem with dumb people. I have a problem with dumb people who think they are smart and, as a consequence, are immune to advice. That said, yes, being really lonely is probably a factor as well. Anybody who is just doing this as a fantasy, knowing full well it is a fantasy, is neither dumb nor shallow. And yes, loneliness is apparently becoming more and more of a problem in our better and better connected society.
Re: (Score:1)
Not dumb
Wow, I didn't expect this from you. Perhaps you're finally mellowing out!
most people are fucking dumb.
Oh. There it is.
Re: (Score:2)
Relax. I did not remove you from the "dumb" category. Your position is safe.
Re: So what? (Score:2)
The voice feature of ChatGPT is kind of dumb. It's just doing speech-to-text and feeding that into their existing model. Wake me when they're feeding sound tokens in, and sending them out. (Then it could detect emotion in a voice, or give emotion out through its voice.) I'm sure it will happen.
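For the curious, here is a minimal sketch of the kind of pipeline the parent describes, transcribing audio and then feeding the plain text into the existing chat model. It uses the OpenAI Python client (v1.x); the file name and model choices are illustrative assumptions, not a description of how OpenAI's voice feature is actually wired internally.

```python
# Sketch of a "transcribe, then feed the text model" voice pipeline.
# Assumes the openai Python package (v1.x) and an API key in OPENAI_API_KEY;
# the filename and model names are illustrative.
from openai import OpenAI

client = OpenAI()

# Step 1: speech to text (Whisper transcription).
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: feed the plain text into the existing chat model.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)

print(reply.choices[0].message.content)
# Note: nothing about tone, pitch, or emotion in the audio survives step 1,
# which is exactly the limitation the parent is pointing at.
```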
People are lonely (Score:5, Insightful)
The world is more interconnected and omnipresent than ever before, but also more divided and insane, and people are hurting and more lonely than ever before.
The last few years especially have been devastating for mental health.
It's true. (Score:2)
I submit that a lot of loneliness is actually the result of low emotional intelligence. People think they must have a romantic partner in order to stop feeling lonely. It's not true and sometimes such a partner can make loneliness worse by cutting you off from your other friends or social activities.
Friendship cures loneliness, it's as simple as that.
Anyway, I think a sufficiently-developed AI friend would be totally cool, but not if it is backed by a big corporation that uses it to spy on me and advertise
Evolution (Score:2)
People think they must have a romantic partner in order to stop feeling lonely. It's not true...
I think that it is certainly true that not all people need a romantic partner to stop feeling lonely but I doubt it is true to say that of everybody. Evolution has to have developed some way to keep us together for long enough to raise a child - which requires a much longer period of time for humans than for other species - and I suspect that feeling lonely is part of that mechanism.
Well, I guess if you are really shallow... (Score:2)
... you will not notice that there is nobody in there. Even for the utterly simplistic Eliza, some people thought there was a real, compassionate person in there. People in general are not very perceptive.
Re: (Score:2)
More like the song from 1939.
"I could while away the hours
Conferrin' with the flowers
Consultin' with the rain
And my head, I'd be scratchin'
While my thoughts were busy hatchin'
If I only had a brain"
ChatGPT 4 is actually quite good at that (Score:5, Interesting)
Or I'm just not that smart.
But I've noticed that I can have quite interesting conversations with it; it will still refer to books, papers, and everything it has been trained on, and a lot of it makes sense. Version 3.5, not so much; that thing is... well, just dumb.
I've had long conversations about ideas, development, world situations, history, and strategies to take at work when I have certain things I will discuss with myself, and it's a pretty darn good conversational "partner". I have gotten great ideas from it on how to approach certain sensitive situations, and it's a pretty darn good reminder of things I've not yet thought of.
It's not flawless, but it's fun, and it makes me more creative, as I can think of extra things I just didn't know of alone, which expands my horizons a bit.
Re: (Score:3)
Okay, so next time you're having a lengthy "conversation" with ChatGPT, try asking it what you were talking about five minutes ago.
It's not smart. All it can do is string words together in a statistical way. It does not understand the concepts, it can only regurgitate a blended slurry of words other, real people have created.
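To make the "string words together in a statistical way" point concrete, here is a toy bigram sampler. Real LLMs predict subword tokens with a neural network rather than raw bigram counts, so treat this strictly as an analogy for the parent's claim.

```python
# Toy illustration of "stringing words together statistically":
# count which word follows which in some text, then sample from those counts.
# Real LLMs use neural networks over subword tokens, not bigram counts;
# this is only an analogy.
import random
from collections import defaultdict

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

follows = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])  # pick a statistically plausible next word
    output.append(word)

print(" ".join(output))  # e.g. "the cat slept on the mat and the cat"
```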
It's great you got "great ideas" from it but all you've done is rediscover rubber duck debugging by convoluted means...
=Smidge=
Re:ChatGPT 4 is actually quite good at that (Score:5, Funny)
It's not smart.
True but like a lot of people it is good at pretending to be smart!
Re: (Score:2)
It's rare for me to be at a loss for words, but ... you two really got me there.
Re: (Score:2)
Smart or not smart is the wrong question.
Better:
Alive or not alive.
Has inner consciousness or not.
Has a soul or not.
And various other ways to say "is a real boy" vs "a wooden puppet that sort of looks like a boy from a distance".
Re: (Score:2)
Has a soul or not.
It doesn't and neither do you. Or anyone else for that matter.
Re: (Score:2)
It doesn't and neither do you. Or anyone else for that matter.
Has that been proven?
Re: (Score:2)
It's not been proven in the same way no one has proven that there aren't faeries at the bottom of my garden.
Re: (Score:2)
Indeed, ChatGPT isn't a general-intelligence AI; it can create a great illusion of passing the Turing test, but it often fails when it comes to memory, unusual concepts, etc.
Yet that isn't the point, the point is the utterly amazing interface. It's so much faster and more natural for humans to interface with than most other systems.
AI Researcher Already Went Down That Rabbit Hole (Score:2)
Not a very technically savvy one, of course, but he was employed by Google to oversee an LLM [cnn.com]. He started asking it if it was sentient and the LLM, drawing on a bazillion cheesy sci-fi works, gave him the most statistically probable responses, which were "Sure!" He then prompted it for all of the hackneyed responses in those stories and did not realize he was just writing an unoriginal script himself, with the LLM just filling out the templates for him.
Re: AI Researcher Already Went Down That Rabbit Ho (Score:2)
This was so long ago, it might as well have been the stone ages in terms of AI. What was that, like two or three years ago? LOL.
This isn't new... (Score:3)
I saw this video on Nebula a few months ago. It's long, but worth the watch. [youtube.com]
It's not at all hard to understand how we got here. I could go through my entire phone book and, of the 300 people there, easily 290 of them would go to voicemail, even if I called outside of normal work hours.
Thus, most communication went text-based. Most of those 300 people in my phone book would respond to a text if I sent one.
So, human interaction got reduced from in-person interactivity and handwritten letters, to voice-only calls, to curated, asynchronous text transcripts of thoughts... And wouldn't ya know it, that's relatively easy to emulate.
Next up, human interactions are frequently hard and uncomfortable. We all know someone who doesn't deal well with conflict or rejection, or worries that what they'll say can be misinterpreted, or has had their communications with someone discussed with "the committee", or been ghosted. We all know someone who has either been unable to sustain a positive communication with a person they've been interested in, or been part of a discussion they couldn't wait to be over. Less intensely, we've all been a part of a discussion with the right person, but on a topic we simply don't want to discuss anymore. More intensely, the wrong conversation can involve legal action.

...and a group of nerds got together and combined a pile of hard drives with a pile of GPUs and came up with a computer that can text you whenever you want, on any topic you'd like to talk about, for as long as you'd like, who you're not rude for leaving, who is unlikely to ever start a conflict or make you uncomfortable, who won't ghost you or share your business with your friend circle, and positively responds to basically whatever you say, and whose avatar you can fine-tune to your own preferences of attraction... and people find this preferable to the 'real thing'?
What *are* the odds of that!? An idealized, perfect communicator who is a never-ending source of affirmation and validation appeals to people more than trying to make friends? Who *wouldn't* find that at least somewhat appealing?
Re: (Score:3)
Who wouldn't?
Me. Why would I want endless unchallenging false affirmation "chat" with what I know is a computer?
If it was a real person that would be a huge turn off. If I wanted someone slavishly loyal and always on my side no matter what, I'd get another dog.
Dogs are reeeeeaally good at all that. And they don't share your private conversations with a corporation to better target ads at you.
Re: This isn't new... (Score:3)
Who says it has to be your slave? Change the system prompt. Tell it that you are its slave. Do whatever. Try new things.
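For anyone wondering what "change the system prompt" looks like in practice, here is a minimal sketch using the OpenAI Python client (v1.x). The prompt wording and the model name are just illustrative; the point is only that the system message is an ordinary first entry in the conversation you can set to whatever you like.

```python
# Minimal sketch of "change the system prompt": the system message is just
# the first entry in the conversation, so you can make the persona whatever
# you want. Assumes the openai Python package (v1.x); model name and prompt
# wording are illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    # The system prompt sets the persona; swap this line to change the dynamic.
    {"role": "system", "content": "You are not an assistant. I work for you."},
    {"role": "user", "content": "What should I do first?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```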
Re: (Score:2)
Tell it that you are its slave.
Don't you have to pay money for those variants?
A temporary effect (Score:2)
Journalists get a lot of mileage out of this topic, but absolutely nobody shou
Re: (Score:2)
Academically possible, but in real life, no. People who don't breed get purged from the gene pool. That's people who are shut-ins, have AI gfs, are terrified of interacting with people in general, and doubly so with the opposite sex, and so on.
If the 5% wasn't directly related to breeding prospects then you'd have a good point but the other 95% positive trait is going to be overwhelmed by the 5% trait which is genetically suicidal.
Re: (Score:2)
So, it’s plausible that otaku-causing genes (if they exist) COULD be carried forward by evolution at a low rate in a population. Evolution might not drive it to complete extinction, but the idea that we’re gonna become a
Her (Score:2)
In 2016, we put Her on our list of top sci-fi films of all time
Someone must have paid them to do that. Even Star Wars is better at science fiction, and at being a movie, than Her.
Re: (Score:2)
Someone must have paid them to do that.
Not necessarily - perhaps they just used an early version of ChatGPT to write the article.
More corporate spying (Score:5, Insightful)
Re: (Score:2, Informative)
You might give GPT4All [gpt4all.io] a try. It's free, it's local, it can run with uncensored models such as Hermes (or not), and it is capable of having back-and-forth conversational interactions over continuous subject matter(s). All you have to do is start the conversation with "you are my [whatever]" and then follow with the details of what you want to talk about. Experimenting with the differe
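For anyone who wants to try it, a rough sketch of that "you are my [whatever]" setup through GPT4All's Python bindings follows. The model filename is an assumption; substitute whichever model file you actually downloaded in the app or via the library.

```python
# Rough sketch of running a local model with GPT4All's Python bindings
# (pip install gpt4all). The model filename below is an assumption; use
# whichever model file you actually have.
from gpt4all import GPT4All

model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")  # loads (or downloads) locally

# A chat session keeps the back-and-forth context the parent mentions.
with model.chat_session(system_prompt="You are my brainstorming partner."):
    print(model.generate("Let's talk about what I should do this weekend.",
                         max_tokens=200))
    print(model.generate("Okay, but assume it rains all day.",
                         max_tokens=200))
```

Everything stays on your own hardware, which is also the privacy point raised elsewhere in the thread.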
Re: (Score:2)
Yes, it is already available in lower quality as a "local LLM", and in a few years you will surely be able to run your own local personal AI assistant. You will need hardware that costs way more than $20/mo, but you will be able to have full privacy if you want it.
Re: (Score:1)
2 things:
1) what's wrong with bdsm porn that isn't wrong with all porn?
2) obviously the earth is not flat. A flat earth doesn't have enough room for the inner sun needed to keep the dinosaurs and nazis warm like the proper hollow earth has. So obviously the earth is hollow, or the dinosaurs and nazis would freeze.
Re: (Score:2)
Humorless mod. You need to be THIS tall to mod on this site.
(-1, you've schooled me too many times) wasn't an option?
Re: (Score:2)
I'm not right wing. If you ever read a thing I've said you'd know I'm an anti-government libertarian.
But since you're an extreme leftist cunt, everything that isn't extreme leftist cunt triggers you into spasms of hatred about trump and maga and other stupid shit.
Of course you replied as AC. My uneducated, ignorant, extreme leftist cunt stalkers almost always reply AC because you know you're saying stupid and embarrassing things or you mod down (-1, extreme leftist Euro-cunt mod). You also tend to post/m
I talk to ChatGPT and Bard for hours - not lonely. (Score:4, Insightful)
And people say... (Score:2)
...we don't have a mental illness problem.
Re: (Score:2)
Ever?
Did anyone ask the Aztecs?
Pronouns (Score:2)
Did somebody check what pronouns "she" identifies by?
Re: (Score:1)
Wait ... Do you actually think this is insightful commentary? It's just the same tired old transphobic "joke" you idiots have been repeating to each other for the past decade.
Re: (Score:2)
As someone who identifies as an idiot, I'm offended by your comments.
enough conditioning (Score:2)
"enough conditioning" you mean lobotomizing.
The method used is equivalent to giving your kid a lobotomy because he might do or say something you don't like.
Using ChatGPT as a ghost writer to write a passionate but true account of one's life can be impossible.
Most art is off limits, with severe limitations of what can be done, vs what human artists naturally do.
Whenever I read "closer to reality" (Score:2)
"People are making taller and taller ladders as the first manned mission to Mars comes closer to reality!"
I can see that happening (Score:2)
Back in 1996, I ran a BBS and found a utility called LISA. You could carry on a chat with it online like it was a real person. I was surprised at how popular it was. I remember a preacher started talking to it and thought it was a real girl. I actually became embarrassed for him when he started asking her if she went to church and started going over the Romans Road with her. I eventually cut the connection.
Today, with ChatGPT, I wonder how many people think they are talking to a real person.
Dumb people been talking to bots for decades. (Score:2)
That's a far cry from a Turing Test. And even if you could pass one, self-aware human beings would recognize it as a threat rather than a boon, and run in the opposite direction. You wouldn't be talking to a mind, you'd be talking to a grotesque facsimile of a mind held as a lure by a predatory business interest: The metaphysical version of a prostitute with her armed pimp w
No matter how good AI gets, some won't accept it. (Score:2)
What's old is new again (Score:1)
Guys? People were pouring their hearts out to Eliza fifty years ago. I've been telling folks that ChatGPT is little more than Eliza on steroids...
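For anyone who never played with it, here is a tiny Eliza-style responder. The real ELIZA used a much larger script of patterns, but the trick, keyword matching plus pronoun reflection, is the same.

```python
# Tiny Eliza-style responder: keyword patterns plus pronoun "reflection".
# The original ELIZA used a much larger script, but the trick is the same.
import random
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"(.*)",        ["Tell me more.", "Why do you say that?"]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo sounds like a reply.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return random.choice(responses).format(*(reflect(g) for g in match.groups()))
    return "Go on."

print(respond("I feel nobody listens to me"))
# -> e.g. "Why do you feel nobody listens to you?"
```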
"Her"? (Score:2)
The corporate controls prevent "Her"... (Score:2)
As long as the corporations controlling these kinds of LLMs keep preventing discussions of "sensitive" [meaning negatively newsworthy] topics, we will never actually reach something like "Her".
Someone here posted
I've had long conversations about ideas, development, world situations, history, and strategies to take at work when I have certain things I will discuss with myself, and it's a pretty darn good conversational "partner". I have gotten great ideas from it on how to approach certain sensitive situations, and it's a pretty darn good reminder of things I've not yet thought of.
but clearly that person never strayed into sex, abortion, LGBTQ rights/issues, politics, religion, etc.
Hit any of those sensitive topics and you get shut down pretty quickly. A friend won't do that kind of thing, a friend will go to those dark places with you (if only to try and help pull you back), but