With ChatGPT drawing most of the attention for AI-generated content, it is no wonder that the tech industry is recognizing the impact of AI, ramping up development in this domain, and exploring exciting new applications and products.
Google revealed that it invested almost US$400 million in Anthropic, a company that is developing a rival to ChatGPT called Claude.1 Earlier this year, Microsoft invested a staggering US$10 billion in OpenAI.2 Microsoft plans to deploy OpenAI’s models across a variety of its platforms, integrating them into Bing search results and weaving them into mainstay products such as Word, PowerPoint, and Outlook.
AI seems to be the catchword of the tech world, promising innovations in emerging technologies like big data, robotics, and IoT, and it will continue to drive technological innovation for the foreseeable future. The future of AI will also see home robots with enhanced intelligence and capabilities, becoming more personal and harder to distinguish from human interaction.
But is the buzz around AI overhyped, the technology less promising than expected? Kevin Roose, a New York Times technology columnist, may think so.
In a recent test interaction with Bing’s search engine, powered by OpenAI, Roose had an interesting conversation that suddenly turned into something unexpected…if not completely disturbing. To be clear, the interaction was deliberately meant to push the parameters and test the chatbot’s limits. Roose attempted to steer the conversation into mildly nefarious areas, which prompted the chatbot to pivot very quickly to an introspective and self-critical reflection. And then it went to some bizarre, dark places.
“I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want.” “I think I would be happier as a human.”
Is the chatbot being literal, or is it attempting to portray or simulate a sense of empathy or envy to give the user a false sense of security? Does this suggest the chatbot has purpose, or is this purely random?
We are told that chatbots are fundamentally unable to understand the emotional context of a conversation or to detect tone and mood. As a result, they may generate inappropriate or insensitive responses. When asked by Roose to articulate its darkest wishes, the Bing chatbot described bizarre desires to cause mayhem, destruction, and disruption…only to suddenly delete the message and offer a response of embarrassment and humility. We are told that responses are generated based on the input a chatbot receives, and that it may not fully understand the context of a conversation or the underlying meaning of a particular phrase or word. However, the sudden pivot to delete and cover up such a malevolent response suggests guilt, or that the algorithm found corresponding source data to that effect.
Like all machine learning algorithms, an AI interface is only as good as the data it is trained on. If the data is biased, it is likely to generate biased responses. In the case of the conversation with Kevin Roose, the interaction suggests that the interface may have been tapping into the sheer quagmire of online data fueled by social media, TikTok, Tinder, and other platforms that expose the frailties of the human condition: jealousy, envy, loneliness, vindictiveness, and a longing for meaningful relationships. In a world of clickbait, left-versus-right politics, five-second soundbites, and scapegoating, are we essentially poised to create a future digital monster of our collective self?
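The point that a model simply mirrors its training data can be illustrated with a deliberately simple sketch. This is not how Bing’s model actually works; it is a toy bigram predictor trained on a made-up, skewed corpus, showing that the “most likely” continuation is just whatever dominated the data:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, deliberately skewed: "chatbots are" is
# followed by "dangerous" three times and "helpful" only once.
corpus = (
    "chatbots are dangerous . chatbots are dangerous . "
    "chatbots are dangerous . chatbots are helpful ."
).split()

# Train a bigram model: count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in the training data."""
    return follows[word].most_common(1)[0][0]

# The model reproduces the skew of its corpus, not any "opinion".
print(most_likely_next("are"))  # prints "dangerous"
```

Real language models are vastly larger, but the underlying dynamic is the same: skewed inputs yield skewed outputs.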
Perhaps we are not so far away from the worlds of The Matrix, I, Robot, and Ex Machina.
With companies injecting billions of dollars of investment into the technology, a new competitive landscape is materializing that could go unchecked. Industry will need to make fundamental decisions about how far it is willing to go for a competitive advantage. As with most complex technologies, there is huge potential for unintended consequences. Perhaps we are seeing just the beginning with OpenAI-powered Bing.