The ELIZA Effect: Why We Love AI
Summary: Users quickly attribute human-like characteristics to artificial systems that reflect the users' own words back at them. This phenomenon is called the ELIZA effect.
Conversational AI interfaces like OpenAI's ChatGPT, Microsoft's Bing Chat, or Google's Bard are fun to use in part because they feel personable and somewhat human; chatting with AI in this way can feel like talking to another person. Some people have even reported developing feelings of attachment toward certain chatbots. Others have been so convinced of AI's intelligence that they have confidently published their fears of sentience. Companies clearly don't see a problem with making bots seem human: Meta just announced an entire suite of AI personalities that users can interact with and learn from. But just because chatbots feel human doesn't mean they are good AI products.
Do not confuse our inherent tendency to attribute human characteristics to artificial intelligence (AI) models with a true technical breakthrough. A mock virtual psychotherapist named ELIZA offers a valuable lesson for contextualizing the current AI boom.
ELIZA: A Deceptively Simple Chatbot
ELIZA was developed by Joseph Weizenbaum, a professor at M.I.T., in the 1960s. ELIZA took the position of a text-based therapist. It would open by asking, "Is something troubling you?" Then it would identify a keyword in the user's response ("I'm feeling sad") and reflect it back as a question such as "Is it important that you're feeling sad?" or "Why are you feeling sad?" When ELIZA failed to identify a keyword in its simple vocabulary, it would respond with a generic phrase: "Please go on" or "What is the connection, you suppose?"
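To see how little machinery is behind that illusion, here is a minimal sketch of the keyword-reflection approach described above. The keyword list and response templates are illustrative stand-ins, not Weizenbaum's original script, which used a richer pattern-matching system.

```python
import random

# Illustrative keyword -> question templates (not Weizenbaum's original script).
REFLECTIONS = {
    "sad": ["Is it important that you're feeling sad?", "Why are you feeling sad?"],
    "mother": ["Tell me more about your mother."],
    "dream": ["What does that dream suggest to you?"],
}

# Generic fallbacks for when no keyword matches, echoing the phrases quoted above.
FALLBACKS = ["Please go on.", "What is the connection, you suppose?"]


def eliza_reply(user_input: str) -> str:
    """Return a canned question for the first recognized keyword, else a fallback."""
    for word in user_input.lower().split():
        word = word.strip(".,!?'\"")
        if word in REFLECTIONS:
            return random.choice(REFLECTIONS[word])
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print("Is something troubling you?")
    while True:
        try:
            print(eliza_reply(input("> ")))
        except EOFError:
            break
```

A few dozen lines of lookup-and-echo are enough to make a conversation feel attentive, which is precisely the point: the sense of being understood comes from the user, not the program.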