The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with. Ivarsson questions whether AI should have such human-like voices, as they create a sense of intimacy and lead people to form impressions based on the voice alone.
In the case of the would-be fraudster calling the “older man,” the scam takes a long time to be exposed, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is simply due to age. Once an AI has a voice, we infer attributes such as gender, age, and socio-economic background, making it harder to recognize that we are interacting with a computer.
The researchers propose creating AI with well-functioning, eloquent voices that are nonetheless clearly synthetic, thereby increasing transparency.