Google Assistant will give Android app developers AI superpowers
With its dual announcements of Google Home hardware and the new Allo messaging app at Google I/O this summer, Google introduced a new way to communicate with its web services and Knowledge Graph: Google Assistant. Accessible through a conversational chat thread in Allo or by voice with the Amazon Echo–like Google Home, Assistant leverages Google’s deep learning capabilities to understand users’ intentions and proactively respond to their queries. And as Google Assistant becomes the headline feature of Google’s new Pixel smartphone running Android 7.1 Nougat, the possibilities third-party access could open up for Android app developers have never been greater.
Unlike Amazon’s Alexa or Siri in iOS 10 with SiriKit, Google Assistant doesn’t offer third-party API access quite yet, but Google promises the functionality is coming. When it does, Android and web developers will have a new suite of tools to reach users wherever they are, from their Pixel devices’ home screens to the comfort of their actual homes. Beyond that, Google Assistant promises to help developers field more complex and ongoing conversations than ever before, leaning on Google’s industry-leading neural network and deep learning technology to make any app smarter.
Once Google opens its Assistant to Android app developers, any app or web service can leverage its deep learning neural network to become much, much smarter.
Because of the myriad ways to interface with Assistant—closed-loop conversations using Android Nougat’s new launcher, open-ended voice queries on Google Home, or text-based conversational dialogues within Allo—the arenas apps can begin playing in are endless. While Siri supports only a handful of app domains in iOS 10, which Apple intends to expand over time, Google Assistant could learn new types of intents and their syntactic variations much more quickly thanks to its enormous data sets. Soon, Assistant could help Android users navigate the deep parts of the apps they love, adeptly reading on-screen content and using context clues to make logical leaps.
Google Assistant could soon help Android users perform complex actions within their favorite apps, using on-screen cues and user data to make reliable logical leaps.
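Google hasn’t published Assistant’s third-party API yet, but Android already hints at what intent-driven integration looks like: an app can declare that it handles Google’s voice search action, and the voice layer hands it the user’s spoken query as an ordinary Intent extra. The Kotlin sketch below is a minimal illustration of that existing mechanism, not the forthcoming Assistant API; SearchActivity and showResultsFor are hypothetical names.

```kotlin
import android.app.Activity
import android.app.SearchManager
import android.os.Bundle

// Hypothetical activity; assumes the manifest declares an intent filter
// for "com.google.android.gms.actions.SEARCH_ACTION" on this activity.
class SearchActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Google's voice layer launches the app with the spoken query
        // attached as the standard SearchManager.QUERY extra.
        if (intent.action == "com.google.android.gms.actions.SEARCH_ACTION") {
            val query = intent.getStringExtra(SearchManager.QUERY)
            showResultsFor(query)
        }
    }

    // Hypothetical: route the spoken query to the app's own search UI.
    private fun showResultsFor(query: String?) {
        // ...
    }
}
```

Whatever shape the eventual Assistant API takes, it will presumably layer conversational context on top of this kind of plumbing, with intents and their parameters as the basic currency.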
What Assistant lacks in third-party app support it makes up for in accuracy and reliability, understanding and responding to a wider range of requests than nearly any competitor’s virtual assistant at this stage. And as it evolves to absorb more and more functionality, it will become smarter at understanding user intentions and performing increasingly intelligent tasks. Google Assistant has already grown beyond the constraints of explicit user requests—a chat bubble or a voice command—and can interpret on-screen content on Google Pixel devices to inform its responses. In the future, users might not even have to summon Assistant at all to benefit from its understanding of their needs.
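That on-screen awareness isn’t magic, either: since Android 6.0, any activity can hand the assist layer structured data about what the user is currently looking at via onProvideAssistContent. Here is a minimal sketch of that existing hook, assuming a hypothetical RecipeActivity and example URL.

```kotlin
import android.app.Activity
import android.app.assist.AssistContent
import android.net.Uri

// Hypothetical activity displaying a single recipe.
class RecipeActivity : Activity() {

    // Called when the user summons the assistant over this screen,
    // letting the app describe its content explicitly instead of
    // leaving the assistant to infer it from the view hierarchy.
    override fun onProvideAssistContent(outContent: AssistContent) {
        super.onProvideAssistContent(outContent)
        // Canonical web URI for the current screen (hypothetical URL).
        outContent.webUri = Uri.parse("https://example.com/recipes/pad-thai")
        // Schema.org JSON-LD describing the content (hypothetical values).
        outContent.structuredData = """
            {
              "@context": "https://schema.org",
              "@type": "Recipe",
              "name": "Weeknight Pad Thai"
            }
        """.trimIndent()
    }
}
```

The more apps volunteer structured context like this, the less Assistant has to guess from pixels alone.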
Google is the company most heavily invested in artificial intelligence products, and seems intent on building the all-knowing supercomputer from Star Trek. Android and Google Search are platforms that facilitate these ends, collecting data about how users think and interact with technology to add to Google’s ever-expanding dataset. Ultimately, Google’s genderless, nameless assistant will grow in sophistication and proactivity to transform or perhaps supplant both Search and Android, becoming a software platform unto itself that fields and responds to any user’s intent. Until then, there is no fate but what we make.