Top 3 AI on Android Updates from Google I/O 2024

Updated: June 04 2024 18:42

Google I/O '24 brought exciting updates for Android developers looking to harness the power of generative AI (Gen AI) in their apps. Here are the top 3 AI on Android updates:


#1: Build AI Apps Leveraging Cloud-Based Gemini Models

Google's Gemini models are powering new generative AI apps both over the cloud and directly on-device. To get started, developers can design prompts for their use cases with Google AI Studio. Once satisfied with the prompts, they can leverage the Gemini API directly in their apps to access Google's latest models, such as Gemini 1.5 Pro and 1.5 Flash, both with one-million-token context windows (a two-million-token window is available via waitlist for Gemini 1.5 Pro).


The Google AI SDK for Android is a great starting point for learning about and experimenting with the Gemini API. For integrating Gemini into production apps, developers should consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024), which offers a streamlined way to build and deploy generative AI features.
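As a rough illustration, calling the Gemini API through the Google AI SDK for Android looks something like the Kotlin sketch below. It assumes the `com.google.ai.client.generativeai` client library is on the classpath and that you have an API key from Google AI Studio; the model name and key shown are placeholders, and the exact API surface may differ by SDK version.

```kotlin
import com.google.ai.client.generativeai.GenerativeModel
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Placeholder model name and API key -- obtain a real key
    // from Google AI Studio before running this in an app.
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",
        apiKey = "YOUR_API_KEY"
    )

    // generateContent is a suspend function, so it must be
    // called from a coroutine (or a suspending context).
    val response = model.generateContent("Write a haiku about Android development.")
    println(response.text)
}
```

For production apps, the equivalent call would go through Vertex AI for Firebase rather than a key embedded in the client, which avoids shipping the API key inside the APK.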

Google is also launching the first Gemini API Developer competition, offering incredible prizes for developers who build apps integrating the Gemini API.

#2: Use Gemini Nano for On-Device Gen AI

While cloud-based models are highly capable, on-device inference offers offline availability, low-latency responses, and the assurance that data never leaves the device. At I/O, Google announced that Gemini Nano will gain multimodal capabilities, enabling devices to understand context beyond text, such as sights, sounds, and spoken language. This will help power experiences like TalkBack, assisting people who are blind or have low vision to interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.


AICore, a system service managing on-device foundation models, enables Gemini Nano to run on-device inference. It provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano.
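Conceptually, an AICore-backed call might look like the hypothetical Kotlin sketch below. The interface and names here are purely illustrative stand-ins (the real API is only available through the Early Access Program and may look quite different); the point is that the model is managed by the system service, so the app ships no weights and keeps all inference local.

```kotlin
import kotlinx.coroutines.runBlocking

// Illustrative stand-in for an AICore-managed Gemini Nano handle.
// This interface is hypothetical, not part of any shipped SDK.
interface OnDeviceModel {
    suspend fun generate(prompt: String): String
}

fun summarizeOnDevice(model: OnDeviceModel, document: String) = runBlocking {
    // Inference runs entirely on-device: it works offline, responds
    // with low latency, and the document text never leaves the device.
    val summary = model.generate("Summarize: $document")
    println(summary)
}
```

Because AICore centralizes the model runtime and delivery, apps with a use case like this would register through the EAP rather than bundling their own foundation model.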

Google is actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for their Early Access Program (EAP), including Patreon, Grammarly, and Adobe. Adobe, for example, is exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents.


#3: Use Gemini in Android Studio to Help You Be More Productive

Gemini has also been integrated into developer tools, such as Android Studio. Gemini in Android Studio is an Android coding companion that brings the power of Gemini to the developer workflow. Since its preview as Studio Bot at last year's Google I/O, the underlying models have evolved, availability has expanded to over 200 countries and territories, and it is now included in stable builds of Android Studio.


In the Android Studio Koala preview release, Google previewed features like natural-language code suggestions and AI-assisted analysis for App Quality Insights. They also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing developers to upload images as part of their AI queries, enabling Gemini to help build fully functional Compose UIs from a wireframe sketch.

By embracing cloud-based Gemini models, on-device Gen AI with Gemini Nano, and the power of Gemini in Android Studio, developers can create a new generation of Android apps that offer unparalleled user experiences and delightful features.
