An AI Experiment With GPT-3
Published by
GPT-3
13.10.2023

We wouldn't be an AI company if we didn't put the GPT-3 model to the test. A few weeks back, some of our team decided to select a theme, give the model prompts, and see what it would generate. Before you read the results, it's worth hearing what the humans involved in this experiment had to say.

What was the goal of this experiment?

The main goal of this experiment was to create an article using AI, from text to visuals, and to test the boundaries, challenges, and opportunities this technology offers. My hope was that I could take the art that was in my head and represent it digitally in a cool way. Marian Lucas, Product Designer

What text prompts did you give the GPT-3 model?

The GPT-3 model was given bullet points to base the text generation on, and was then asked to write a New York Times-style article based on those points. Larisa Kolesnichenko, Machine Learning Engineer
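The exact prompt and client code aren't included in the article, but as a rough sketch, assuming the pre-1.0 openai Python client, the text-davinci-003 completion model, and placeholder bullet points, a completion call of that kind could look something like this:

```python
# Illustrative sketch only: the article does not show the team's actual prompt or code.
# Assumes the pre-1.0 "openai" Python package and the text-davinci-003 completion model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical bullet points, not the ones the team actually used
bullet_points = [
    "Chatbots date back to ELIZA in 1966",
    "Modern assistants like Siri and Alexa are voice-powered",
    "Context makes chatbot answers more relevant",
]

prompt = (
    "Write a New York Times-style article based on these bullet points:\n"
    + "\n".join(f"- {point}" for point in bullet_points)
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=800,
    temperature=0.7,
)

print(response["choices"][0]["text"])
```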

What visual prompts did you give the model?

There are two images in this article; these are the prompts I used for each. Marian Lucas, Product Designer

Listing image

A series of images, each representing a different stage in the evolution of chatbots and conversational AI, from early text-based chatbots to modern voice-powered virtual assistants.

The images could be arranged in a circular or spiral pattern, with the earliest stage in the center and the most recent stage at the outer edge, to symbolize the progression and growth of the technology over time.

The picture could also include visual elements that represent the various functions and capabilities of chatbots and conversational AI, such as speech bubbles for communication, gears and circuits for technology, and arrows for movement and progression.

Header image

One large cover picture for an article with elements representing a different stage in the evolution of chatbots and conversational AI, from early text-based chatbots to modern voice-powered virtual assistants. The images could be arranged in a circular or spiral pattern, with the earliest stage in the center and the most recent stage at the outer edge, to symbolize the progression and growth of the technology over time.

The picture could also include visual elements that represent the various functions and capabilities of chatbots and conversational AI, such as speech bubbles for communication, gears and circuits for technology, and arrows for movement and progression.
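The article doesn't name the image model that turned these prompts into visuals, so the following is only a hypothetical sketch of how a prompt like the ones above could be sent to an image-generation endpoint, here assumed to be OpenAI's Images API via the pre-1.0 Python client:

```python
# Hypothetical sketch: the article does not name the image model or show any code.
# Assumes the pre-1.0 "openai" Python package and its Images endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

image_prompt = (
    "A series of images, each representing a different stage in the evolution of "
    "chatbots and conversational AI, arranged in a spiral with the earliest stage "
    "in the center and the most recent stage at the outer edge."
)

result = openai.Image.create(prompt=image_prompt, n=1, size="1024x1024")
print(result["data"][0]["url"])  # URL of the generated image
```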

And here it is. We present, in collaboration with GPT-3:

Chatbots Are Older Than You Think

The history of chatbots dates back to the early days of artificial intelligence and machine learning. In 1950, Alan Turing published a paper on machine intelligence in which he proposed that machines could be taught to think like humans. This sparked a debate that helped establish artificial intelligence as a discipline in its own right. In 1966, Joseph Weizenbaum developed the first chatbot system, ELIZA. A few years later, in 1969, Marvin Minsky and Seymour Papert published a book called Perceptrons, which examined what simple artificial neural networks could, and could not, compute.

ELIZA was based on a psychotherapeutic approach known as Rogerian therapy, which involves asking questions and reflecting the patient's answers back at them. The bot was designed to mimic a human therapist and often gave non-committal responses such as "I see" or "go on." The problem was that the bot had little real understanding of human conversation and would often give nonsensical responses.
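As an illustration of that reflection technique, a minimal, hypothetical sketch in the spirit of ELIZA's pattern matching (not Weizenbaum's original code) might look like this:

```python
# Minimal, hypothetical sketch in the spirit of ELIZA-style reflection;
# this is not Weizenbaum's original implementation.
import re

# Map first-person words to second-person words so answers can be "reflected" back
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(user_input: str) -> str:
    match = re.match(r"i feel (.*)", user_input.strip(), re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "I see. Go on."  # the non-committal fallback

print(respond("I feel anxious about my exams"))
# Why do you feel anxious about your exams?
```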

The next development was Terry Winograd's SHRDLU, introduced around 1970. SHRDLU could understand natural language commands such as "pick up the red block" and "move the green block next to the blue block." Although it could easily execute these commands, it was confined to its small, pre-defined world of blocks. It lacked the broader understanding of the world that gives natural language much of its context.

Jabberwacky was next. Created by British programmer Rollo Carpenter in the late 1980s, Jabberwacky learned from its past conversations with humans rather than relying on hand-written rules, with the aim of holding a natural, entertaining conversation. While it was a success, the bot was often unpredictable and, at times, nonsensical.

By 1995, chatbot technology was developing rapidly. That year, Richard Wallace developed the ALICE chatbot, built on AIML (Artificial Intelligence Markup Language). ALICE proved that chatbots could hold far more realistic conversations. This was a turning point and became the basis for many modern chatbots.

The potential for chatbots to become more than just conversation tools was realized in 2001 with SmarterChild. As one of the first commercial chatbots, SmarterChild forged a new way of communicating between humans and bots. Living inside instant messaging services, it let users ask a myriad of questions, receive intuitive answers, and ask for assistance. In fact, SmarterChild is often described as a precursor to Apple's Siri and Samsung's S Voice. This level of interactivity between humans and bots was a major moment in chatbot development.

Over the next decade, chatbots became more mainstream. When IBM released Watson, it even found its way into entertainment: in 2011, the popular US game show Jeopardy! put Watson to the test. Not only did it perform well, it beat some of the game's most notable champions.

Where we find ourselves today

AI assistants are now part of everyday life. Most people know who Siri and Alexa are and interact with them frequently throughout their day. We set our alarms with digital assistants, send texts, have messages read to us, find information, ask for directions, turn our TVs on, and listen to music, all with a simple voice command to an AI bot.

In 2016, Facebook released the Messenger platform, which integrated social media with AI and moved chatbots even closer to the general public's everyday communication. Businesses could now build chatbots to act as their customer support staff on a platform users already wanted to use: social media.

These use cases have made the notion of "context" vital as chatbots develop further. Context is the environment in which a chatbot interacts with a user: it can include the time of day, the user's location, and the user's previous interactions with the bot. Context allows a chatbot to understand the user's current situation and provide relevant information.
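The article doesn't define a concrete format for this context, but as a minimal sketch, such signals are often bundled into a structure that travels with each user message. All names below are illustrative, not a real chatbot API:

```python
# Minimal, hypothetical sketch of the kind of context a chatbot might receive
# alongside a user message; the field names are illustrative, not a real API.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ConversationContext:
    user_id: str
    local_time: datetime                       # time of day
    location: Optional[str] = None             # e.g. city, if known
    previous_messages: List[str] = field(default_factory=list)  # earlier turns

def build_reply(message: str, context: ConversationContext) -> str:
    # A real bot would pass both the message and the context to its language model;
    # here we only show that the context travels with the message.
    greeting = "Good evening" if context.local_time.hour >= 18 else "Hello"
    return f"{greeting}! You asked: {message}"

ctx = ConversationContext(user_id="u123", local_time=datetime.now(), location="Oslo")
print(build_reply("Where is my order?", ctx))
```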

What does the future look like for chatbots?

As the Red Queen says in Lewis Carroll's Through the Looking-Glass, "it takes all the running you can do to keep in the same place." The same can be said of chatbots: to keep up with the ever-changing needs of customers, they need to evolve and adopt new technologies.

At Kindly, we are constantly working on introducing state-of-the-art technologies in our chatbots. Our recent addition of the GPT-3 model has improved natural language processing tasks, such as intent classification and entity recognition. We are also working on adding multilingual support to our chatbots so that users can write their queries in many languages.
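Kindly's actual pipeline isn't described in detail here; purely as an illustration, a prompt-based intent classifier built on a completion model could look like the sketch below. The intent labels and client usage are assumptions, not Kindly's implementation:

```python
# Illustrative sketch only, not Kindly's implementation.
# Assumes the pre-1.0 "openai" Python package and a completion model;
# the intent labels are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

INTENTS = ["order_status", "refund_request", "product_question", "other"]

def classify_intent(user_message: str) -> str:
    prompt = (
        "Classify the customer message into one of these intents: "
        + ", ".join(INTENTS)
        + f"\n\nMessage: {user_message}\nIntent:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=5,
        temperature=0,  # deterministic output for classification
    )
    return response["choices"][0]["text"].strip()

# Works for queries in other languages as well, e.g. Norwegian
print(classify_intent("Hvor er pakken min?"))
```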

With the help of these new technologies, chatbots will be able to provide even better customer service in the future and increase sales conversions.
