Amazon, Apple, and Google are focused on growing the use of their voice assistants (Alexa, Siri, and Google Assistant, respectively). As with more traditional apps, the business model these companies are pursuing is to drive ubiquity of the platform and let developers create the real value by building voice apps (called “skills” on Alexa) or adding voice compatibility to their existing products. The expectation is that, over time, users will become increasingly comfortable with voice technology, and voice will displace touch as the default means of interacting with technology — just as the touch screen displaced the mouse, which displaced the keyboard. The platform owners will benefit by capturing enormous amounts of data that they will find ways to monetize.
Think about it this way: today, when one of your users opens your app, that app is a black box to the platform. The platform knows the user is in there, but it generally can’t tell what the user is doing. With the addition of a voice layer, every time a user gives a command, the device maker gets a peek into the black box and learns a bit more about your user.
To a large extent, both developers and platform companies are learning on the fly. It’s hard to predict what types of voice skills developers will come up with, which ones will resonate with consumers, what problems may arise, and how courts will resolve those disputes. As a result, the voice assistant section of each platform’s developer agreement is likely to be an area of frequent change.