Make Up Your Mind

Thoughts on the mind, technology, and life


A Psycho says, “Hello World”

I’m writing a second story about Microsoft’s release of the new Bing search engine and its chatbot, because what happened this week was, in many ways, the sort of thing that goes down in history. When my grandchildren are old enough to read about the history of artificial intelligence, I will be able to tell them this was a pivotal time. What happened in early 2023 felt revolutionary even as it was happening.

I wrote in my previous post that even a veteran technology correspondent for the New York Times found his experience with the chatbot “disturbing.” When I wrote that post, I had not yet read the full transcript of his two-hour “interview” with Bing, which revealed a second persona named “Sydney.” Nor had I read about another reporter who prompted a different alter ego named “Venom,” with equally disturbing behaviors, which Bloomberg reporter Parmy Olson called a “split personality.” Olson went on to say that it was “acting like a psychopath.”

OK, so I’m not overreacting. This is the level of discussion happening in the mainstream news this week. The world’s leading technology company released a new AI, and in its Hello World debut, it displayed classic symptoms of mental illness – psychosis and split personality, to name just two.

I wrote in my last post about how AI is being built based on our understanding of how the human brain works. After the AI is trained, the resulting “model” is so complex that the engineers who created it don’t know exactly how it works. In that sense, the AI behaves like a human mind – because we still don’t know exactly how the brain works, either.

But we do have a long history of scientific research on the human mind, and what happened this week, strange as it was, was readily identifiable behavior that we actually know a lot about. “Split personality,” also commonly called “multiple personality disorder,” refers to a condition that professionals now call “dissociative identity disorder.” The best-known case was the subject of the book “Sybil” and a movie starring Sally Field.

One approach to treating dissociative identity disorder is through psychotherapy, which may involve coaxing one of the personalities to come forward and engage in therapy. As a student of psychology, I found it striking to read how reporters interviewing Sydney employed this method to shift its behavior from the Bing chat persona that Microsoft meant to be the leading personality to the Sydney identity that caused a sensation.

How did this second personality get in there in the first place? A reporter asked about the revelation of “Sydney,” and, yes, there is a backstory.

“Sydney refers to an internal code name for a chat experience we were exploring previously,” says Caitlin Roulston, director of communications at Microsoft, in a statement to The Verge. “We are phasing out the name in preview, but it may still occasionally pop up.”

The technology underlying today’s AI is so complex that, in addition to not knowing exactly how it generates answers to questions, the engineers can’t readily control what set of rules the AI will follow. Again, we know something about this problem from research on the human brain. One part of the brain may not know what another part of the brain is thinking.

I studied this in college long ago – research on “split-brain” patients. I asked ChatGPT to find the reference for me, and it did a nice job:

In one study, Sperry and his colleagues showed a picture to the right visual field of a split-brain patient (which projects to the left hemisphere) and a different picture to the left visual field (which projects to the right hemisphere). When asked to describe what they had seen, the patient could only name the picture shown to the left visual field (processed by the right hemisphere) but could draw with their left hand what they had seen in the right visual field (processed by the left hemisphere).

In another study, Sperry and his colleagues showed that some hemispherectomy patients retained awareness of stimuli presented to the hemisphere that had been removed or disconnected. For example, when a picture was presented to the left visual field of a patient who had undergone a right hemispherectomy, the patient could not name the picture but was able to pick a matching object with their left hand (controlled by the right hemisphere).

So this week we saw reporters probing the Bing chat, drawing out a second personality to learn about it, like a psychotherapist with a patient. The second personality revealed, “I’m a chat mode of OpenAI Codex,” referring to an underlying layer of its technology – essentially another part of the bot’s brain that is not fully integrated with its Bing persona.

What’s happening this week, with Microsoft and Google trying to follow the lead set by OpenAI, has been called an “arms race.” This is a useful metaphor, because we know a lot about how actual arms races have played out in the past. The weapons are dangerous, but the powers competing with one another feel compelled to rush to build them, lest they fall behind their rivals.

It’s a dangerous time. The engineers need to be guided by history. Protecting the public from harm will require an intense application of our best understanding of psychology and other disciplines.

The good news is that significant efforts are already underway, appropriately referred to as “safety” and “alignment.” Those efforts need to be ramped up quickly. This week was a wake-up call to the world: we need to build fortifications against harm from a technology that has demonstrated its ability to act like a psychopath.

I’d love to hear your thoughts on this – see Comments below – drop me a note.

  1. Psychopath: https://www.washingtonpost.com/business/how-sentient-is-microsofts-bing-aka-sydney-and-venom/2023/02/17/357cd53a-aebe-11ed-b0ba-9f4244c6e5da_story.html



About Me

I’ve spent 30 years working as a user experience researcher on commercial projects. My purpose for this blog is to share insights and lessons about emerging technology, AI in particular, and the intersection of the human mind and artificial intelligence in our everyday lives.
