Supercharging Android Apps with On-Device AI: Gemini Nano & MediaPipe LLM Inference
The mobile AI revolution is increasingly moving on-device, driven by demands for privacy, low latency, and offline capability. In this session, I’ll demonstrate how to leverage cutting-edge on-device AI tools, including Gemini Nano, the Google AI Edge SDK, and the MediaPipe LLM Inference API, to build intelligent Android apps entirely in Kotlin and Jetpack Compose.
Key Takeaways:
- On-Device AI Landscape: Understand the shift from cloud to on-device AI, the privacy benefits, and real-world use cases for features like smart reply, summarization, and image analysis.
- Getting Started with Gemini Nano: Walk through integrating Google’s Gemini Nano generative model into a modern Android app, highlighting both the ML Kit GenAI APIs and the experimental AI Edge SDK for custom scenarios.
- Beyond Gemini: MediaPipe, LiteRT, and Custom Models: Explore the MediaPipe ecosystem for LLM (large language model) inference on-device, and how to bring your own models using LiteRT/TensorFlow Lite for specialized tasks.
- Jetpack Compose + Kotlin: LLM-Driven UI Generation: Discover how reactive UI development with Compose and Kotlin Coroutines enables real-time, AI-powered experiences. We’ll demonstrate our internal LLM-based UI generation framework, which builds dynamic, schema-driven UIs on top of Compose, enabling adaptable interfaces generated entirely from structured prompts.
- Production Considerations: Address model size, device compatibility, privacy, and performance optimization, with lessons learned from deploying AI features at scale in consumer and enterprise Android apps.
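To make the schema-driven UI generation idea concrete, here is a minimal sketch in plain Kotlin. It is not the internal framework mentioned above; all type names are hypothetical, and the "renderer" emits an indented text tree in place of Compose composables so the sketch stays self-contained. The core pattern is the same: the LLM is prompted to return structured output, the app decodes it into a typed node tree, and each node type maps to exactly one UI element.

```kotlin
// Hypothetical sketch of schema-driven UI generation (not the session's
// actual framework). A structured LLM response is decoded into a typed
// node tree; rendering maps each node to one element. Here the renderer
// produces indented text instead of Compose composables, so the example
// runs without any Android dependency.

sealed interface UiNode
data class Column(val children: List<UiNode>) : UiNode
data class Text(val value: String) : UiNode
data class Button(val label: String) : UiNode

// Stand-in for a @Composable dispatcher: one branch per schema node type.
fun render(node: UiNode, indent: String = ""): String = when (node) {
    is Column -> "${indent}Column:\n" +
        node.children.joinToString("\n") { render(it, "$indent  ") }
    is Text -> "${indent}Text(\"${node.value}\")"
    is Button -> "${indent}Button(\"${node.label}\")"
}

fun main() {
    // Pretend this tree was decoded from the model's structured (JSON) output.
    val screen = Column(listOf(Text("Order #1234 shipped"), Button("Track package")))
    println(render(screen))
}
```

Because the schema is a closed set of node types, malformed or unexpected model output fails at decode time rather than producing arbitrary UI, which is one reason structured prompts are preferable to free-form generation here.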
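The device-compatibility point above often reduces to a dispatch decision: run on-device when the hardware qualifies, otherwise fall back to a cloud endpoint. A minimal sketch of that gate, with all names hypothetical (a real app would query AICore/Gemini Nano availability and device memory class instead of the stubbed `DeviceProfile`):

```kotlin
// Hypothetical capability-gated dispatcher: prefer on-device generation,
// fall back to cloud when the device does not qualify. The generators are
// stubs; in production they would wrap an on-device runtime and a cloud API.

interface TextGenerator { fun generate(prompt: String): String }

class OnDeviceGenerator : TextGenerator {
    override fun generate(prompt: String) = "on-device: $prompt"
}

class CloudGenerator : TextGenerator {
    override fun generate(prompt: String) = "cloud: $prompt"
}

// Stand-in for real capability checks (model availability, RAM class, etc.).
data class DeviceProfile(val supportsGenAi: Boolean, val ramGb: Int)

fun pickGenerator(profile: DeviceProfile): TextGenerator =
    if (profile.supportsGenAi && profile.ramGb >= 8) OnDeviceGenerator()
    else CloudGenerator()

fun main() {
    val generator = pickGenerator(DeviceProfile(supportsGenAi = true, ramGb = 12))
    println(generator.generate("Summarize my inbox"))
}
```

Keeping the gate behind a single interface also makes the privacy trade-off explicit in one place: callers never need to know which path served the request, and the cloud branch can be disabled entirely for privacy-sensitive features.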