Elaine Dias Batista

Team Leader at SFEIR


Elaine has been working in mobile app development for the past 6 years. Since the launch of the Google Assistant, she has been following developments in that area. She truly believes that interacting with technology using natural language will define the future of computing. Born and raised in Brazil, she’s been living in France since 2004 and loves everything multicultural. She’s a GDE for the Google Assistant.


Building Voice-First Android Apps

Thanks to the latest advancements in Machine Learning, we’re now capable of interacting with machines through natural language. The age of voice assistants is here, with Alexa, the Google Assistant and others. But, as an Android developer, how can I bring conversational features to my existing app?

When we think about developing features that are voice-forward, we think about existing voice assistants such as Alexa and the Google Assistant. But what about the fully-capable computers that we carry with us all the time, our smartphones? Some moments in our day-to-day lives are very well suited to voice interactions: while driving or cooking, for example. Let’s not forget that voice interactions are also extremely accessible, not only physically (for people with dexterity or motor impairments) but also cognitively (we all have a loved one who struggles with technology, and many people in emerging countries have very limited access to computers and are not at ease with them).

In this talk, I’ll explain what integrations can be done on Android:
– 1st-party solutions such as the SpeechRecognizer and TextToSpeech APIs
– Other Google solutions such as ML Kit, TensorFlow and Dialogflow
– 3rd-party solutions such as Porcupine, Snips, Amazon Lex, Snowboy and PocketSphinx
– Integration with the Google Assistant via App Actions
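To give a flavour of the 1st-party route, here is a minimal Kotlin sketch that wires the Android SpeechRecognizer and TextToSpeech APIs together to echo back one spoken utterance. The Activity name and utterance id are illustrative, and the runtime RECORD_AUDIO permission request is omitted for brevity:

```kotlin
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.speech.tts.TextToSpeech
import androidx.appcompat.app.AppCompatActivity
import java.util.Locale

// Sketch: listen for one utterance, then speak it back via TextToSpeech.
class VoiceActivity : AppCompatActivity(), TextToSpeech.OnInitListener {

    private lateinit var tts: TextToSpeech
    private lateinit var recognizer: SpeechRecognizer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        tts = TextToSpeech(this, this)

        recognizer = SpeechRecognizer.createSpeechRecognizer(this)
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle) {
                // Recognition hypotheses are returned best-first.
                val spoken = results
                    .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull() ?: return
                tts.speak(spoken, TextToSpeech.QUEUE_FLUSH, null, "echo")
            }
            // Remaining callbacks left empty for brevity.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onError(error: Int) {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })

        // Requires the RECORD_AUDIO permission (runtime request not shown).
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
        }
        recognizer.startListening(intent)
    }

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) tts.language = Locale.US
    }

    override fun onDestroy() {
        recognizer.destroy()
        tts.shutdown()
        super.onDestroy()
    }
}
```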

Dimitris Konstantinou
Eleftherios Trivizakis