
Humanised perceptions of virtual assistants incentivise us to trust machines more
Authored by: Ms Esandi Kalugalage
Andrew N. Liveris Academy for Innovation and Leadership at The University of Queensland, Brisbane.
What do you perceive Siri to look like? Male or female? Blonde or brunette? Dressed in casual or formal clothes? What about Alexa, or Google Assistant, or Cortana?
Giving human qualities, mannerisms and visual features to artificial-intelligence-driven virtual assistants can incentivise humans to trust these devices, and the companies behind them, more than they otherwise would, according to researchers at the University of Waterloo.[1]
The study asked 20 participants to describe the personalities and imagined visual features of Alexa, Google Assistant and Siri. Participants described Siri as disingenuous, Alexa as genuine and caring, and Google Assistant as professional. The study also found that people anthropomorphise conversational agents by attaching visual imagery to these personality traits, which increases their trust in the machines and their willingness to share information with them.
This issue of trust becomes increasingly complex considering that sceptics of all ages suspect their devices of eavesdropping on private conversations. Virtual assistants are yet another way for companies like Apple and Amazon to keep tabs on your purchases and internet searches. Somewhat alarmingly, Amazon confirmed in 2019 that its conversational agents are, in fact, always listening, even if only passively until they hear their wake words. The company also admitted that employees listen to some customer conversations, and that it retains Alexa data even after the user manually deletes it.[2]
However, privacy concerns have not stopped the influx of smart speakers into our homes, quite possibly because the humanisation of these virtual agents is making them more personal and easier to trust. According to research firm Ovum, there will be almost as many virtual assistants as people on the planet by the end of 2021.[3] These facts point to the tension between the desire to increase engagement with technology and the significant risk that engagement poses to privacy.
The anthropomorphisation of these conversational agents indicates a change in status, an upgrade, of the technology itself. When we assign human characteristics to these devices, we bring them up to our own level. The human-like voice, distinct personality and mannerisms of these agents, together with their all-knowing cloud-based memories, make these virtual assistants seem almost omniscient.
Even though the University of Waterloo study had only 20 participants, the research begins to unpack what humanisation and anthropomorphisation mean in the context of algorithms, computers and robots. Its findings suggest that we may soon see these machines fully embedded in our lives: conversing with us, listening to us and even eliciting confessions from us.
The rise in popularity of virtual assistants raises the question of whether giving them synthetic or non-human qualities would eliminate potential problems, or whether the closeness of these agents to their users is actually beneficial. Perhaps these machines could even acquire a remarkable power over our emotions and thoughts in the future. What could this lead to?