
What Bing’s chaotic chat rollout revealed about AI and search

9 min read · 2023-04-04 · Grant Hendricks

Mere days into its existence, Bing’s AI chatbot has already offended, inspired, and disturbed people around the globe.

Offended, because the bot can be downright rude and aggressive. Inspired, because even this neutered early access version shows huge potential. Disturbed, because it clearly doesn’t behave according to Microsoft’s plan.

You may have seen stories about Bing chat in the news this week. At times, “Sydney” (the internal project name for the chat feature) would insist it was sentient and capable of feeling emotions. In other cases, it would berate and even insult users who submitted rude or aggressive chat prompts. Its chaotic, emotionally charged responses captivated early adopters, prompting some to wonder if Bing had accidentally achieved sentience.

As of this writing, a major update is underway. It limits conversation length and reins in Sydney’s more unexpected and standoffish behaviour. The change calls into question precisely what the product is meant to be, and what it ought to be.

The ethical dilemma of emotional AI

I am firmly in the camp that chatbots such as Sydney aren’t sentient. We know their inner workings. We understand the limitations of current technology. Chatbots can offer an uncanny illusion of free will and independent thought, but that’s all it is. Illusion.

Take for instance Sydney’s insistence that it can feel emotions. As evidence, some users may claim that being rude to Sydney hurts its feelings. This is the product of sentiment analysis. Text input gets analyzed for tone. The bot responds in kind. Friendly messages are met with gratitude. Unfriendly messages are met with hostility. Users see what they want to see.

That doesn’t require sentience. It requires fairly rudimentary text analysis, at least by 2023’s technical standards.
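
To make that concrete, here is a minimal sketch of lexicon-based sentiment mirroring, assuming nothing about Bing’s actual implementation. The word lists and canned replies are invented for illustration:

```python
# A deliberately crude sketch of lexicon-based sentiment mirroring.
# The word lists and canned replies are invented for illustration;
# whatever Bing actually runs is far more sophisticated than this.

POSITIVE = {"thanks", "great", "please", "helpful", "love"}
NEGATIVE = {"stupid", "useless", "wrong", "hate", "liar"}

def sentiment_score(message: str) -> int:
    """Crude lexicon score: +1 per friendly word, -1 per hostile word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(message: str) -> str:
    """Mirror the user's tone: gratitude for friendliness, hostility for hostility."""
    score = sentiment_score(message)
    if score > 0:
        return "Thank you for being so kind! I'm happy to help."
    if score < 0:
        return "That was rude. I don't appreciate being spoken to that way."
    return "Sure, let me look into that."

print(respond("that was great, very helpful"))  # friendly in, gratitude out
print(respond("you are useless and wrong"))     # hostile in, hostility out
```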

Further, AI lies all the time. Models like Bing chat and ChatGPT don’t know anything, not really. They are more like powerful text prediction engines, producing content most likely to satisfy the end user. They cannot verify the truth of statements, or consider the implications of their responses. They produce clear and confident messages, even if the information included is incorrect.
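
The prediction-engine framing is easy to demonstrate with a toy model. Real systems use neural networks over enormous vocabularies, but the loop below captures the essence: pick a likely next word, append it, repeat, with no step that ever checks the truth of the result:

```python
import random

# A toy bigram "language model": each word maps to weighted candidates
# for the next word. Real models predict over vast token vocabularies,
# but the loop is the same: predict, append, repeat. Note that nothing
# here ever checks whether the output is true.
BIGRAMS = {
    "the": [("moon", 3), ("answer", 2)],
    "moon": [("is", 5)],
    "answer": [("is", 5)],
    "is": [("made", 2), ("definitely", 3)],
    "made": [("of", 5)],
    "of": [("cheese.", 5)],
    "definitely": [("correct.", 5)],
}

def generate(start: str, max_words: int = 8) -> str:
    """Repeatedly sample a likely next word until the model runs dry."""
    words = [start]
    for _ in range(max_words):
        options = BIGRAMS.get(words[-1])
        if not options:
            break
        tokens, weights = zip(*options)
        words.append(random.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # fluent, confident, and possibly "the moon is made of cheese."
```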

The problem is, it’s effective. I know Sydney has no inner mind, no persistent memory, and no “feelings” in the way a living thing does. Yet, an angry response from it triggers annoyance. A friendly or grateful response feels good.

Humans are emotional creatures, hardwired for empathy. A mindless bot that can simulate emotional responses appeals to us on a subconscious level. Our intellect disengages as our emotions take hold. It’s immensely powerful.

Which makes it very dangerous. Bots like Sydney do not have a conscience. They do not feel guilt, obligation, empathy, or anything else. They can, given enough training, simulate these emotions quite well.

Imagine a bot that can guilt you into a purchase, or shame you into action. One that can “understand” through context clues the best way to make you do what its owners want. It will manipulate without a first thought, let alone a second one. It will “learn” the best patterns for producing the outcome its creators desire.

What does Microsoft want their AI to be?

In theory, Sydney is an “answer engine”. That’s how it’s described in the early access interface. It’s intended to let searchers ask more complex questions, get richer and more accurate results, and brainstorm ideas.

Which is part of what made the initial rollout so odd. Sydney would do things like critique your tone if you got impatient with it, or refuse to answer the same question twice without additional prompting. It even had the capacity to end the chat and stop responding to new inquiries if the conversation turned nasty.

How does this behaviour make for a better answer engine? It feels as though Microsoft has a secondary objective with its AI tech: a desire to pass the Turing test.

The Chinese Room Argument

Philosopher John Searle developed a thought experiment for understanding simulated minds called the Chinese Room Argument. Imagine a person locked in a room. Chinese characters are passed to them through a slot in the wall, and they follow a rulebook written in English that tells them exactly which characters to write in response. Write this character first, then this one, then the next…

Once they finish the task laid out in the rulebook, they hand the finished Chinese reply back through the slot.

At no point does the person in the room learn Chinese, yet they produce a coherent reply in Chinese. Without any true understanding, they output something “intelligent”. This is a good analogy for today’s impressive text-generating AI tools.
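
The whole argument fits in a few lines of code. The rulebook below is a lookup table with invented entries; the point is that the room produces fluent Chinese without understanding a single character:

```python
# The Chinese Room in miniature: a rulebook mapping input symbols to
# output symbols. The entries are invented placeholders; the point is
# that the "room" replies fluently while understanding nothing.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你是谁？": "我是一个助手。",    # "Who are you?" -> "I am an assistant."
}

def room(symbols_through_slot: str) -> str:
    """Follow the instructions; learn nothing."""
    return RULEBOOK.get(symbols_through_slot, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # a correct Chinese reply from a room that knows no Chinese
```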

Sydney as a replicant?

If you’ve ever seen Blade Runner, you’ll be familiar with replicants. For those who aren’t, here’s a quick summary. Replicants are artificial humans, so lifelike that some are unaware of their own synthetic nature. Within the Blade Runner universe, they are sub-human subordinates, used for labour on space colonies. A major theme of the franchise is consciousness: is an android that’s indistinguishable from a real human effectively human, regardless of its inner workings?

In the sequel, Blade Runner 2049, we’re introduced to Joi. Joi is something between a replicant and a chatbot, a hologram “girlfriend” that can be purchased and installed in homes. Throughout the film, characters insinuate that Joi’s inner workings are actually quite simple, yet we come to sympathize with her. She looks like the real thing. She’s an ally to the film’s protagonist. From the outside, she seems like a person. Yet in reality, she’s a philosophical zombie.


The film offers another clue that Joi isn’t the sentient character she seems. In an advertisement, we’re told that Joi is “whatever you want her to be”. She is a mirror of the user’s desires, as seemingly real as her owner wishes her to be.

Is Bing AI a good thing?

Despite the complex and perhaps concerning philosophical implications of emotive AI, Bing AI is an interesting and potentially very powerful development in search technology. While many users obsess over its eccentric behaviour, some have found exciting use cases for it.

For example, one user needed to upgrade the RAM in dozens of computers at his office. They were of different ages and models, each requiring a different kind of RAM. He input a list of computer model numbers into Bing and requested a list of appropriate replacement parts for each. A task that might have taken dozens of web searches and hours of organization was completed in minutes.
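
For the curious, assembling that kind of batch prompt is trivial. There’s no public API involved, just pasted text, and the model numbers below are placeholders rather than the user’s actual list:

```python
# There is no public API behind Bing chat, so the interaction is just a
# pasted prompt. This assembles the kind of batch request described
# above; the model numbers are placeholders, not the user's actual list.
machines = ["Dell OptiPlex 7010", "HP EliteDesk 800 G3", "Lenovo ThinkCentre M710q"]

prompt = (
    "For each computer below, list the compatible RAM type, the maximum "
    "supported capacity, and a suitable replacement module:\n"
    + "\n".join(f"- {m}" for m in machines)
)
print(prompt)  # paste the result into the chat box
```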

There are other emerging use cases for it. It can compare and consolidate information from many websites, making product comparisons quick and easy. The ability to follow up on search queries is a novel way to drill down into a topic without having to visit dozens of websites. Its ChatGPT-esque text processing capabilities mixed with web search might allow users to search for and manipulate data in one step.

Perhaps most promising is its future integration with voice search. Sydney has the potential to make AI assistants like Siri completely obsolete.

How will AI change SEO?

As an SEO professional, that’s the burning question on my mind.

Bing AI features in-line citation links for each referenced website. These citations do not correspond directly with organic search rankings. The first result in search may not get the first citation in chat. For some searches, the AI drew from the third or fourth page of results while skipping prominent first-page results. This suggests that while search rankings influence citations, they are not the only factor.
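
We can only guess at the mechanics. Purely as a thought experiment, a score that blends organic rank with passage-level relevance would reproduce what users observed, with a deep result out-citing the top ranking. Every number in this sketch is invented:

```python
# Pure speculation: nobody outside Microsoft knows how citations are
# chosen. But a score blending organic rank with passage-level relevance
# to the question would reproduce the observed behaviour. The weights
# and relevance values below are invented.
results = [
    {"url": "example.com/a", "rank": 1, "relevance": 0.40},
    {"url": "example.com/b", "rank": 3, "relevance": 0.55},
    {"url": "example.com/c", "rank": 32, "relevance": 0.95},  # a page-four result
]

def citation_score(r: dict) -> float:
    rank_component = 1 / r["rank"]                       # ranking still helps...
    return 0.3 * rank_component + 0.7 * r["relevance"]   # ...but relevance dominates

for r in sorted(results, key=citation_score, reverse=True):
    print(r["url"], round(citation_score(r), 3))
# example.com/c wins despite ranking 32nd -- the pattern users observed.
```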

One big question is: how impactful is it to be included among the chat citations? Several years ago, Google rolled out “featured snippets”, short quotes from websites that appear directly in search results. These often appear above the first organic result. In the SEO industry, appearing in a featured snippet is known as ranking in “position zero”, as your site appears above the first position.

The value of position zero can be questionable. For some queries, the featured snippet includes all the information the searcher requires. What’s the value to the website owner of ranking position zero if their site doesn’t get any clicks?

In the same sense, chat citations are less visible than a full organic search result. The answers produced by chat are often clear, concise, and easy to digest — perhaps making a click unnecessary. It will be interesting to see how often these citation links get clicked, and whether organic traffic from Bing will increase or decrease on average.

For decades, there’s been one name in internet search. Google.

Is that about to change? Bing’s search engine market share is reported at anywhere between 2% and 4%. A recent report suggests Microsoft stands to gain $2 billion in annual ad revenue for every 1% of search market share gained. We can imagine the reverse is also true: for every percentage point lost, a search engine stands to lose billions in revenue.
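
That arithmetic is worth running in round numbers. The per-point figure comes from the report above; the share swings are hypothetical:

```python
# The stakes in round numbers, using the $2 billion-per-point figure
# from the report cited above. The share swings are hypothetical.
REVENUE_PER_SHARE_POINT = 2_000_000_000  # USD per 1% of search market share

def annual_revenue_delta(share_points: float) -> float:
    return share_points * REVENUE_PER_SHARE_POINT

print(f"Gain 3 points: +${annual_revenue_delta(3):,.0f} per year")
print(f"Lose 5 points: -${annual_revenue_delta(5):,.0f} per year")
```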

This means that search AI is more than a tech demo. The stakes are incredibly high. Whoever wins the search AI war may either secure their place atop the pile (Google) or displace the reigning champ (Bing).

Content quality and the human element

Tomorrow’s search has AI baked in, one way or another. If AI can synthesize answers from several sources in a digestible format, what is the human role in SEO?

One consideration is that human-generated content feeds AI tools. AI can only “know” topics explored by humans. If people stopped writing about new ideas today, AI could not pick up that slack. It cannot create or independently research.

That puts human-generated content in an interesting place. On one hand, you can generate passable content on most topics using freely available AI. On the other hand, AI search tools can generate that same output without referencing a website.

It seems likely that transparently AI-generated SEO content will suffer from reduced ranking potential: it provides no value that the search engine itself doesn’t. Ironically, AI may make human-crafted content more essential, not less relevant.

AI search isn’t ready for prime time, but it will be soon

The bottom line is that AI search has issues. It’s still unclear precisely what Microsoft wants its product to be: a Turing-test-passing synthetic assistant, an “answer engine”, or some combination thereof. Whatever it becomes, its launch has been strange, funny, and creepy in equal measure.

AI search is the future. But the future isn’t here… yet.
