Should Machines Speak Like Us?
Machines are getting smarter, but that doesn’t mean they should also sound more human.
Nearly every device, from the fridge in your kitchen to the car that takes you to work, is getting a voice interaction makeover. Driven by recent advances in A.I., machines we can talk to, machines that speak like human beings, are finally becoming a reality.
It makes sense. Using voice to interact with machines seems like the obvious next step for human-computer interaction. And designing these interactions to closely mimic human conversations seems like its natural conclusion.
That’s easier said than done, though. Think of Siri, which gets on our nerves by making the same silly joke over and over again. Or the all-knowing Google Assistant, which is mysteriously unclear about what exactly it doesn’t understand. Or Alexa, when it just won’t shut up.
We accept it for now, knowing that as the underlying technology matures the experience will vastly improve over time. Eventually the conversations we have with machines will become indistinguishable from those we have with other humans. Problem solved.
But this raises the question—should machines even try to simulate humans in the first place? Should they make jokes, feign emotions or act moody? Should they use sarcasm, sound dramatic or pretend they don’t know what we are talking about?
After all, even when they do, eventually, reach parity with humans, they still won’t be humans. And so, regardless of how intelligent and proficient at mimicking human behavior they become, ascribing human traits to them will, by definition, always feel inauthentic.
Perhaps it makes more sense, then, to develop a vernacular of their very own? Not one that depends entirely on metaphors borrowed from humans, but one that feels wholly distinct to machines. So that when they do talk back to us, they’ll do so as only machines would.
So, uhmm, Machine English anyone?