Have you ever had the feeling that your AI-driven navigation system, though touted as the most sophisticated, isn't giving you the fastest route but instead a route it wants you to take?
The companies behind Waze and Google Maps, Spotify and Apple Music, and Alexa and Siri spend millions of dollars collecting vast amounts of data to improve their systems' competence. But new Technion research suggests that a system's "warmth," defined as the perception that it has your best interests at heart, plays a pivotal role in predicting consumers' choice between AI systems, even more than competence.
In a study published in the Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, more than 1,600 participants were asked to choose between two AI systems. In one scenario, potential users chose one of two systems that recommended car insurance plans. The first used an algorithm trained on data from just 1,000 car insurance plans and was designed to "help people like them." The other used a more sophisticated artificial neural network trained on data from one million insurance plans, but it was developed to help insurance agents make better offers.
Users favored the warmer first system even though its algorithm and data were less robust and it was overtly less competent. "Users wish to know who the system is accountable to and who will be prioritized when conflicting requests or needs occur," said Zohar Gilad, a Ph.D. candidate who worked alongside Assistant Professors Ofra Amir and Liat Levontin in the Faculty of Industrial Engineering and Management. "AI designers should therefore communicate to potential users how the AI will act in such instances."
Even though these AI systems are faceless, the findings mirror what we know of human interactions: when judging others, warmth is often more important than competence.