Android Developers Blog
@android-developers.googleblog.com.web.brid.gy
News and insights on the Android platform, developer tools, and events.

[bridged from https://android-developers.googleblog.com/ on the web: https://fed.brid.gy/web/android-developers.googleblog.com ]
Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture
_Posted by Scott Nien, Software Engineer_

The CameraX team is thrilled to announce the release of version 1.5! This latest update focuses on bringing professional-grade capabilities to your fingertips while making the camera session easier to configure than ever before. For video recording, users can now effortlessly capture stunning slow-motion or high-frame-rate videos. More importantly, the new Feature Group API allows you to confidently enable complex combinations like 10-bit HDR and 60 FPS, ensuring consistent results across supported devices. On the image capture front, you gain maximum flexibility with support for capturing unprocessed, uncompressed DNG (RAW) files. Plus, you can now leverage Ultra HDR output even when using powerful Camera Extensions. Underpinning these features is the new SessionConfig API, which streamlines camera setup and reconfiguration. Now, let's dive into the details of these exciting new features.

## Powerful Video Recording: High-Speed and Feature Combinations

CameraX 1.5 significantly expands its video capabilities, enabling more creative and robust recording experiences.

### Slow Motion & High Frame Rate Video

One of our most anticipated features, slow-motion video, is now available. You can now capture high-speed video (e.g., 120 or 240 fps) and encode it directly into a dramatic slow-motion video. Alternatively, you can record at the same high frame rate to produce exceptionally smooth video. Implementing this is straightforward if you're familiar with the VideoCapture API.

1. Check for High-Speed Support: Use the new Recorder.getHighSpeedVideoCapabilities() method to query whether the device supports this feature.

```kotlin
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val highSpeedCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)

if (highSpeedCapabilities == null) {
    // This camera device does not support high-speed video.
    return
}
```

2. Configure and Bind the Use Case: Use the returned video capabilities (which contain the supported video quality information) to build a HighSpeedVideoSessionConfig. You must then query the supported frame rate ranges via cameraInfo.getSupportedFrameRateRanges() and set the desired range. Set isSlowMotionEnabled to true to record slow-motion videos; otherwise the recording is a high-frame-rate video. The final step is to use the regular Recorder.prepareRecording().start() call to begin recording the video.

```kotlin
val preview = Preview.Builder().build()
val quality = highSpeedCapabilities
    .getSupportedQualities(DynamicRange.SDR).first()
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(quality))
    .build()
val videoCapture = VideoCapture.withOutput(recorder)

val frameRateRange = cameraInfo.getSupportedFrameRateRanges(
    HighSpeedVideoSessionConfig(videoCapture, preview)
).first()

val sessionConfig = HighSpeedVideoSessionConfig(
    videoCapture,
    preview,
    frameRateRange = frameRateRange,
    // Set true for slow-motion playback, or false for high-frame-rate video.
    isSlowMotionEnabled = true
)

cameraProvider.bindToLifecycle(
    lifecycleOwner, cameraSelector, sessionConfig)

// Start recording slow-motion video.
val recording = recorder.prepareRecording(context, outputOption)
    .start(executor, {})
```

Compatibility and limitations: High-speed recording requires specific CameraConstrainedHighSpeedCaptureSession and CamcorderProfile support. Always perform the capability check, and enable high-speed recording only on supported devices to prevent a poor user experience.
Currently, this feature is supported on the rear cameras of almost all Pixel devices and select models from other manufacturers. Check the blog post for more details.

### Combine Features with Confidence: The Feature Group API

CameraX 1.5 introduces the Feature Group API, which eliminates the guesswork of feature compatibility. Built on Android 15's feature combination query API, it lets you confidently enable multiple features together, guaranteeing a stable camera session.

The Feature Group currently supports HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR. For instance, you can enable HDR, 60 fps, and Preview Stabilization simultaneously on the Pixel 10 and Galaxy S25 series. Future enhancements are planned to include 4K recording and ultra-wide zoom.

The Feature Group API enables two essential use cases.

Use Case 1: Prioritizing the Best Quality

If you want to capture using the best possible combination of features, you can provide a prioritized list. CameraX will attempt to enable them in order, selecting the first combination the device fully supports.

```kotlin
val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    preferredFeatureGroup = listOf(
        GroupableFeature.HDR_HLG10,
        GroupableFeature.FPS_60,
        GroupableFeature.PREVIEW_STABILIZATION
    )
).apply {
    // (Optional) Get a callback with the enabled features to update your UI.
    setFeatureSelectionListener { selectedFeatures ->
        updateUiIndicators(selectedFeatures)
    }
}

processCameraProvider.bindToLifecycle(activity, cameraSelector, sessionConfig)
```

In this example, CameraX tries to enable features in this order:

1. HDR + 60 FPS + Preview Stabilization
2. HDR + 60 FPS
3. HDR + Preview Stabilization
4. HDR
5. 60 FPS + Preview Stabilization
6. 60 FPS
7. Preview Stabilization
8. None

Use Case 2: Building a User-Facing Settings UI

You can now accurately reflect which feature combinations are supported in your app's settings UI, disabling toggles for unsupported options. To determine whether to gray out a toggle, use code like the following to check for feature combination support. Initially, query the status of every individual feature. Once a feature is enabled, re-query the remaining features together with the enabled ones to see whether their toggles must now be grayed out due to compatibility constraints.

```kotlin
fun disableFeatureIfNotSupported(
    enabledFeatures: Set<GroupableFeature>,
    featureToCheck: GroupableFeature
) {
    val sessionConfig = SessionConfig(
        useCases = useCases,
        requiredFeatureGroup = enabledFeatures + featureToCheck
    )
    val isSupported = cameraInfo.isFeatureGroupSupported(sessionConfig)
    if (!isSupported) {
        // Disable the toggle for featureToCheck.
    }
}
```

Please refer to the Feature Group blog post for more information.

### More Video Enhancements

* Concurrent Camera Improvements: With CameraX 1.5.1, you can now bind Preview + ImageCapture + VideoCapture use cases concurrently for each SingleCameraConfig in non-composition mode. Additionally, in composition mode (same use cases with CompositionSettings), you can now set the CameraEffect that is applied to the final composition result.
* Dynamic Muting: You can now start a recording in a muted state using PendingRecording.withAudioEnabled(boolean initialMuted) and allow the user to unmute later using Recording.mute(boolean muted); see the sketch after this list.
* Improved Insufficient Storage Handling: CameraX now reliably dispatches the VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE error, allowing your app to gracefully handle low-storage situations and inform the user.
* Low Light Boost: On supported devices (like the Pixel 10 series), you can enable CameraControl.enableLowLightBoostAsync to automatically brighten the preview and video streams in dark environments.
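To make the dynamic muting and storage-error handling concrete, here is a minimal sketch that starts a recording muted and unmutes it later. It assumes the `recorder`, `outputOption`, `context`, and `executor` objects from the earlier high-speed example, and that the record-audio permission has already been granted.

```kotlin
// Sketch only: assumes recorder, outputOption, context and executor are set up as in
// the earlier examples, and that the RECORD_AUDIO permission is already granted.
val recording = recorder.prepareRecording(context, outputOption)
    .withAudioEnabled(/* initialMuted = */ true)   // start with audio muted
    .start(executor) { event ->
        if (event is VideoRecordEvent.Finalize &&
            event.error == VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE
        ) {
            // Storage ran out mid-recording: inform the user and clean up.
        }
    }

// Later, for example when the user taps an unmute button:
recording.mute(false)
```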
## Professional-Grade Image Capture

CameraX 1.5 brings major upgrades to ImageCapture for developers who demand maximum quality and flexibility.

### Unleash Creative Control with DNG (RAW) Capture

For complete control over post-processing, CameraX now supports DNG (RAW) capture. This gives you access to the unprocessed, uncompressed image data directly from the camera sensor, enabling professional-grade editing and color grading. The API supports capturing a DNG file alone, or capturing simultaneous JPEG and DNG outputs. The sample code below shows how to capture JPEG and DNG files simultaneously.

```kotlin
val capabilities = ImageCapture.getImageCaptureCapabilities(cameraInfo)

val imageCapture = ImageCapture.Builder().apply {
    if (capabilities.supportedOutputFormats
            .contains(OUTPUT_FORMAT_RAW_JPEG)) {
        // Capture both RAW and JPEG formats.
        setOutputFormat(OUTPUT_FORMAT_RAW_JPEG)
    }
}.build()

// ... bind imageCapture to lifecycle ...

// Provide separate output options for each format.
val outputOptionRaw = /* ... configure for image/x-adobe-dng ... */
val outputOptionJpeg = /* ... configure for image/jpeg ... */

imageCapture.takePicture(
    outputOptionRaw, outputOptionJpeg, executor,
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(results: OutputFileResults) {
            // This callback is invoked twice: once for the RAW file
            // and once for the JPEG file.
        }
        override fun onError(exception: ImageCaptureException) {}
    }
)
```

### Ultra HDR for Camera Extensions

Get the best of both worlds: the stunning computational photography of Camera Extensions (like Night Mode) combined with the brilliant color and dynamic range of Ultra HDR. This feature is now supported on many recent premium Android phones, such as the Pixel 9/10 series and the Samsung S24/S25 series.

```kotlin
// Enable Ultra HDR output when an Extension is enabled.
val extensionsEnabledCameraSelector = extensionsManager
    .getExtensionEnabledCameraSelector(
        CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.NIGHT)

val imageCapabilities = ImageCapture.getImageCaptureCapabilities(
    cameraProvider.getCameraInfo(extensionsEnabledCameraSelector))

val imageCapture = ImageCapture.Builder().apply {
    if (imageCapabilities.supportedOutputFormats
            .contains(OUTPUT_FORMAT_JPEG_ULTRA_HDR)) {
        setOutputFormat(OUTPUT_FORMAT_JPEG_ULTRA_HDR)
    }
}.build()
```

## Core API and Usability Enhancements

### A New Way to Configure: SessionConfig

As seen in the examples above, SessionConfig is a new concept in CameraX 1.5. It centralizes configuration and simplifies the API in two key ways:

1. No More Manual unbind() Calls: CameraX APIs are lifecycle-aware, so use cases are implicitly unbound when the activity or other LifecycleOwner is destroyed. Previously, however, updating use cases or switching cameras still required you to call unbind() or unbindAll() before rebinding. With CameraX 1.5, when you bind a new SessionConfig, CameraX seamlessly updates the session for you, eliminating the need for unbind calls.

2. Deterministic Frame Rate Control: The new SessionConfig API introduces a deterministic way to manage the frame rate. Unlike the previous setTargetFrameRate, which was only a hint, this new method guarantees the specified frame rate range will be applied upon successful configuration. To ensure accuracy, you must query supported frame rates using CameraInfo.getSupportedFrameRateRanges(SessionConfig). By passing the full SessionConfig, CameraX can accurately determine the supported ranges based on the stream configurations; see the sketch below.
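For illustration, here is a minimal sketch of that deterministic frame rate flow. The `frameRateRange` constructor parameter name is an assumption based on the HighSpeedVideoSessionConfig example above, so check the CameraX 1.5 reference docs for the exact SessionConfig signature.

```kotlin
// Sketch only: assumes preview, videoCapture, cameraInfo, cameraProvider,
// lifecycleOwner and cameraSelector are set up as in the earlier examples.
// The frameRateRange parameter name mirrors HighSpeedVideoSessionConfig and is
// an assumption; consult the CameraX 1.5 reference for the exact signature.
val frameRateRange = cameraInfo
    .getSupportedFrameRateRanges(SessionConfig(listOf(preview, videoCapture)))
    .first()

val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    frameRateRange = frameRateRange   // applied deterministically, not just a hint
)

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, sessionConfig)
```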
### Camera-Compose is Now Stable

We know how much you enjoy Jetpack Compose, and we're excited to announce that the camera-compose library is now stable at version 1.5.1! This release includes critical bug fixes related to CameraXViewfinder usage with Compose features like moveableContentOf and Pager, as well as resolving a preview stretching issue. We will continue to add more features to camera-compose in future releases.

### ImageAnalysis and CameraControl Improvements

* Torch Strength Adjustment: Gain fine-grained control over the device's torch with new APIs. You can query the maximum supported strength using CameraInfo.getMaxTorchStrengthLevel() and then set the desired level with CameraControl.setTorchStrengthLevel().
* NV21 Support in ImageAnalysis: You can now request the NV21 image format directly from ImageAnalysis, simplifying integration with other libraries and APIs. This is enabled by invoking ImageAnalysis.Builder.setOutputImageFormat(OUTPUT_IMAGE_FORMAT_NV21).

## Get Started Today

Update your dependencies to CameraX 1.5 today and explore the exciting new features. We can't wait to see what you build.

To use CameraX 1.5, please add the following dependencies to your libs.versions.toml. (We recommend using 1.5.1, which contains many critical bug fixes and concurrent camera improvements.)

```toml
[versions]
camerax = "1.5.1"

[libraries]
..
androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" }
androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" }
androidx-camera-view = { module = "androidx.camera:camera-view", version.ref = "camerax" }
androidx-camera-lifecycle = { group = "androidx.camera", name = "camera-lifecycle", version.ref = "camerax" }
androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" }
androidx-camera-extensions = { module = "androidx.camera:camera-extensions", version.ref = "camerax" }
```

And then add these to your module build.gradle.kts dependencies:

```kotlin
dependencies {
    ..
    implementation(libs.androidx.camera.core)
    implementation(libs.androidx.camera.lifecycle)
    implementation(libs.androidx.camera.camera2)
    implementation(libs.androidx.camera.view)       // for PreviewView
    implementation(libs.androidx.camera.compose)    // for compose UI
    implementation(libs.androidx.camera.extensions) // for Extensions
}
```

Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report:

* CameraX developers discussion group
* File a bug
android-developers.googleblog.com
November 13, 2025 at 9:10 PM
#WeArePlay: Meet the game creators who entertain, inspire and spark imagination
_Posted by Robbie McLachlan, Developer Marketing_

In our latest #WeArePlay stories, we meet the game creators who entertain, inspire and spark imagination in players around the world on Google Play. From delivering action-packed 3D kart racing to creating a calming, lofi world for plant lovers - here are a few of our favourites:

_Ralf and Matt, co-founders of Vector Unit, San Rafael (CA), U.S._

With over 557 million downloads, Ralf and Matt's game, Beach Buggy Racing, brings the joy of classic, action-packed kart racing to gamers worldwide. After meeting at a California game company back in the late '90s, Matt and Ralf went on to work at major studios. Years later, they reunited to form Vector Unit, a new company where they could finally have full creative freedom. They channeled their passion for classic kart-racers into Beach Buggy Racing, a vibrant 3D title that brought a console-quality feel to phones. The fan reception was immense, with players celebrating by baking cakes and dressing up for in-game events. Today, the team keeps Beach Buggy Racing 2 updated with global collaborations and is already working on a new prototype, all to fulfill their mission: sparking joy.

_Camilla, founder of Clover-Fi Games, Batangas, Philippines_

Camilla's game, Window Garden, lets players slow down by decorating and caring for digital plants. While living with her mother during the pandemic, tech graduate Camilla made the leap from software engineer to self-taught game developer. Her mom's indoor plants sparked an idea: Window Garden. She created the lofi idle game to encourage players to slow down. In the game, players water flowers and fruits, and decorate cozy spaces in their own style. With over 1 million downloads to date, this simple loop has become a calming daily ritual since its launch. The game's success earned it a "Best of 2024" award from Google Play, and Camilla now hopes to expand her studio and collaborate with other creatives.

_Rodrigo, founder of Kolb Apps, Curitiba, Brazil_

Rodrigo's game, Real Drum, puts a complete, realistic-sounding virtual drum set in your pocket, making it easy for anyone to play. Rodrigo started coding at just 12 years old, creating software for his family's businesses. This technical skill later combined with his hobby as an amateur musician. While pursuing a career in programming, he noticed a clear gap: there were no high-quality percussion apps. He united his two passions, technology and rhythm, to create Real Drum. The result is a realistic, easy-to-use virtual set that has amassed over 437 million downloads, letting people around the world play drums and cymbals without the noise. His game has made learning music accessible to many and inspired new artists. Now, Rodrigo's team plans to launch new apps for children to continue nurturing musical creativity.

Discover other inspiring app and game founders featured in #WeArePlay.
android-developers.googleblog.com
November 13, 2025 at 9:11 PM
Android developer verification: Early access starts now as we continue to build with your feedback
_Posted by Matthew Forsythe, Director - Product Management, Android App Safety_

We recently announced new developer verification requirements, which serve as an additional layer of defense in our ongoing effort to keep Android users safe. We know that security works best when it accounts for the diverse ways people use our tools. This is why we announced this change early: to gather input and ensure our solutions are balanced. We appreciate the community's engagement and have heard the early feedback – specifically from students and hobbyists who need an accessible path to learn, and from power users who are more comfortable with security risks. We are making changes to address the needs of both groups. To understand how these updates fit into our broader mission, it is important to first look at the specific threats we are tackling.

Why verification is important

Keeping users safe on Android is our top priority. Combating scams and digital fraud is not new for us — it has been a central focus of our work for years. From Scam Detection in Google Messages to Google Play Protect and real-time alerts for scam calls, we have consistently acted to keep our ecosystem safe. However, online scams and malware campaigns are becoming more aggressive. At the global scale of Android, this translates to real harm for people around the world – especially in rapidly digitizing regions where many are coming online for the first time.

Technical safeguards are critical, but they cannot solve for every scenario where a user is manipulated. Scammers use high-pressure social engineering tactics to trick users into bypassing the very warnings designed to protect them. For example, a common attack we track in Southeast Asia illustrates this threat clearly. A scammer calls a victim claiming their bank account is compromised and uses fear and urgency to direct them to sideload a "verification app" to secure their funds, often coaching them to ignore standard security warnings. Once installed, this app — actually malware — intercepts the victim's notifications. When the user logs into their real banking app, the malware captures their two-factor authentication codes, giving the scammer everything they need to drain the account.

While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity to distribute malware, making attacks significantly harder and more costly to scale. We have already seen how effective this is on Google Play, and we are now applying those lessons to the broader Android ecosystem to ensure there is a real, accountable identity behind the software you install.

Supporting students and hobbyists

We heard from developers who were concerned about the barrier to entry when building apps intended only for a small group, like family or friends. We are using your input to shape a dedicated account type for students and hobbyists. This will allow you to distribute your creations to a limited number of devices without going through the full verification requirements.

Empowering experienced users

While security is crucial, we've also heard from developers and power users who have a higher risk tolerance and want the ability to download unverified apps.
Based on this feedback and our ongoing conversations with the community, we are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified. We are designing this flow specifically to resist coercion, ensuring that users aren't tricked into bypassing these safety checks while under pressure from a scammer. It will also include clear warnings to ensure users fully understand the risks involved, but ultimately, it puts the choice in their hands. We are gathering early feedback on the design of this feature now and will share more details in the coming months.

Getting started with early access

Today, we're excited to start inviting developers who distribute exclusively outside of Play to the early access for developer verification in the Android Developer Console, and we will share invites to the Play Console experience soon for Play developers. We are looking forward to your questions and feedback on streamlining the experience for all developers. Watch our video walkthrough of the new Android Developer Console experience and see our guides for more details and FAQs. We are committed to working with you to keep the ecosystem safe while getting this right.
android-developers.googleblog.com
November 13, 2025 at 9:11 PM
Raising the bar on battery performance: excessive partial wake locks metric is now out of beta
_Posted by Karan Jhavar, Product Manager, Android Frameworks; Dan Brown, Product Manager, Google Play; and Eric Brenner, Software Engineer, Google Play_

A great user experience is built on a foundation of strong technical performance. We are committed to helping you create stable, responsive, and efficient apps that users love. Excessive battery drain is top of mind for your users, and together, we are taking significant steps to help you build more power-efficient apps.

Earlier this year, we introduced a new beta metric in Android vitals, excessive partial wake locks, to help you identify and address sources of battery drain. This initial beta metric was co-developed in close collaboration with Samsung, combining their deep, real-world insights into how battery consumption affects user experience with Android's platform data. We want to thank you for providing invaluable feedback during the beta period. Powered by your input and our continued collaboration with Samsung, we have further refined the algorithm to be even more accurate and representative. We are excited to announce that this refined metric is now generally available to all developers as a new core vitals metric in Android vitals.

We have defined a bad behavior threshold for excessive wake locks. Starting March 1, 2026, if your title does not meet this quality threshold, we may exclude the title from prominent discovery surfaces such as recommendations. In some cases, we may display a warning on your store listing to indicate to users that your app may cause excessive battery drain.

**Google Play's core technical quality metrics:** To maximize visibility on Google Play, keep your app below the bad behavior thresholds for these metrics.

| Metric | Definition |
| --- | --- |
| User-perceived crash rate | The percentage of daily active users who experienced at least one crash that is likely to have been noticeable |
| User-perceived ANR rate | The percentage of daily active users who experienced at least one ANR that is likely to have been noticeable |
| Excessive battery usage | The percentage of watch face sessions where battery usage exceeds 4.44% per hour |
| New: Excessive partial wake locks | The percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours |

_Excessive partial wake locks newly join the technical quality bars that Play expects all titles to maintain for a great user experience._

This is the first in a series of new metrics designed to provide deeper insight into your app's resource utilization, enabling you to improve the experience for your users across the entire Android ecosystem.

### 1. Aligning our definition of excessive wake locks with user expectations

Apps can hold wake locks to prevent the user's device from entering sleep mode, letting them perform background work while the screen is off. We consider a user session excessive if it holds more than 2 cumulative hours of non-exempt wake locks in a 24-hour period. These excessive sessions are a heavy contributor to battery drain. A wake lock is exempt if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback or user-initiated data transfer.

The bad behavior threshold is crossed when 5% of an app's user sessions over the last 28 days are excessive. If your app exceeds this threshold, you will be alerted directly on your Android vitals overview page. You can read more about our definition on the Android Developer pages.
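For reference, here is a minimal sketch of the kind of partial wake lock usage the metric measures, using a descriptive tag (which is what surfaces in the wake lock names table described in the next section), an explicit timeout, and a prompt release. The tag and work function are illustrative placeholders.

```kotlin
// Illustrative sketch: "myapp:periodic-sync" and doSync() are placeholder names.
// The tag is what shows up in the wake lock names table, so make it descriptive.
import android.content.Context
import android.os.PowerManager

fun runSyncWithWakeLock(context: Context) {
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val wakeLock = powerManager.newWakeLock(
        PowerManager.PARTIAL_WAKE_LOCK,
        "myapp:periodic-sync"
    )
    wakeLock.acquire(10 * 60 * 1000L) // always set a timeout as a safety net
    try {
        doSync()                      // the actual background work
    } finally {
        if (wakeLock.isHeld) wakeLock.release() // release as soon as the work completes
    }
}

private fun doSync() { /* placeholder for the app's background work */ }
```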
_Android vitals will alert you to excessive wake lock issues and provide a table of wake lock tags with P90/P99 durations to help you identify the source by wake lock name._

To help you understand your app's partial wake lock usage, we are enhancing the excessive partial wake locks page in Android vitals with a new wake lock names table. This table breaks down wake lock sessions by their specific tag names and durations, allowing you to easily identify long wake locks and then track them down in your local development environment, such as Android Studio, for easier debugging. You should investigate any wake locks with P90 or P99 durations above 60 minutes.

### 2. Excessive wake locks and their impact on Google Play visibility

If your title exceeds the bad behavior threshold for excessive wake locks, it may be ineligible for some discovery surfaces where users find new apps and games. In some cases, we may also show a warning on your store listing to inform users that your app may cause their device's battery to drain faster.

_Users may see a warning on your store listing if your app exceeds the bad behavior threshold. Note: the exact text and design are subject to change._

We know that making technical changes to your app's code and behavior can be time consuming, so we are making the metric available for you to diagnose and fix potential issues now, with time to spare before the Store visibility changes begin on March 1, 2026.

### 3. What to do next

We encourage you to take the following steps to ensure your app delivers a great experience for users:

1. Visit Android vitals: Review your app's performance on the new excessive partial wake locks metric. The metric is now visible to all developers whose apps have wake lock sessions.
2. Discover excessive partial wake locks: Use the new wake lock names table to identify excessive partial wake locks.
3. Consult the documentation: For detailed guidance on best practices and fixing common issues, please check out our technical blog post, technical video and updated developer documentation on wake locks.

Thank you for your continued partnership in building high-quality, performant experiences that users can rely on every day.
android-developers.googleblog.com
November 11, 2025 at 9:11 PM
#WeArePlay: Meet the people making apps & games to improve your health
_Posted by Robbie McLachlan, Developer Marketing_

In our latest #WeArePlay stories, we meet the founders building apps and games that are making health and wellness fun and easy for everyone on Google Play. From getting heavy sleepers jumping into their mornings to turning mental wellness into an immersive adventure game, here are a few of our favorites:

_Jay, founder of Delightroom, Seoul, South Korea_

With over 90 million downloads, Jay's app Alarmy helps heavy sleepers get moving with smart, challenge-based alarms. While studying computer science, Jay's biggest challenge wasn't debugging code, it was waking up for his morning classes. This struggle sparked an idea: what if there were an app that could help anyone get out of bed? Jay built a basic version and showcased it at a tech event, where it quickly drew attention. That prototype evolved into Alarmy, an app that uses creative missions, like solving math problems, doing squats, or snapping a photo, to get people moving so they fully wake up. Now available in over 30 languages and 170+ countries, Jay and his team are expanding beyond alarms, adding sleep tracking and wellness features to help even more people start their day right.

_Ellie and Hazel, co-founders of Mind Monsters Games, Cambridge, UK_

Ellie and Hazel's game, Betwixt, makes mental wellness more fun by using an interactive story to reduce anxiety. While working in London's tech scene and later writing about psychology, Ellie noticed a pattern: many people turned to video games to ease stress but struggled to engage with traditional meditation. That's when she came up with the idea to combine the two. While curating a book on mental health, she met Hazel (a therapist, former world champion boxer, and game lover), and together they created Betwixt, an interactive fantasy adventure that guides players on a journey of self-discovery. By blending storytelling with evidence-based techniques, the game helps reduce anxiety and promote well-being. Now, with three new projects in development, Ellie and Hazel strive to turn play into a mental health tool.

_Kevin and Robin, co-founders of MapMyFitness, Boulder (CO), U.S._

Kevin and Robin's app, MapMyFitness, helps a global community of runners and cyclists map their routes and track their training. Growing up across the Middle East, the Philippines, and Africa, Kevin developed a fascination with maps. In San Diego, while training for his second marathon, he built a simple MapMyRun website to map his routes. When other runners joined, former professional cyclist Robin reached out with a vision to also help cyclists discover and share maps. Together they founded MapMyFitness in 2007 and launched MapMyRide soon after, blending Kevin's technical expertise and Robin's athletic know-how. Today, the MapMy suite powers millions of walkers, runners, and riders with adaptive training plans, guided workouts, live safety tracking, and community challenges, all in support of their mission to "get everybody outside".

Discover more #WeArePlay stories from founders across the globe.
android-developers.googleblog.com
November 6, 2025 at 9:11 PM
Health Connect Jetpack v1.1.0 is now available!
_Posted by Brenda Shaw, Health & Home Partner Engineering Technical Writer_

Health Connect is Android's on-device platform designed to simplify connectivity between health and fitness apps, allowing developers to build richer experiences with secure, centralized data. Today, we're thrilled to announce two major updates that empower you to create more intelligent, connected, and nuanced applications: the stable release of the Health Connect Jetpack library 1.1.0 and expanded device type support.

## Health Connect Jetpack Library 1.1.0 is Now Stable

We are excited to announce that the Health Connect Jetpack library has reached its 1.1.0 stable release. This milestone provides you with the confidence and reliability needed to build production-ready health and fitness experiences at scale. Since its inception, Health Connect has grown into a robust platform supporting over 50 different data types across activity, sleep, nutrition, medical records, and body measurements.

The journey to this stable release has been marked by significant advancements driven by developer feedback. Throughout the alpha and beta phases, we introduced critical features like background reads for continuous data monitoring, historical data sync to provide users with a comprehensive long-term view of their health, and support for critical new data types like Personal Health Records, Exercise Routes, Training Plans, and Skin Temperature. This stable release encapsulates all of these enhancements, offering a powerful and dependable foundation for your applications.

## Expanded Device Type Support

Accurate data representation is key to building trust and delivering precise insights. To that end, we have significantly expanded the list of supported device types in Health Connect. This expansion will be available in 1.2.0-alpha02. When data is written to the platform, specifying the source device is crucial metadata that helps data readers understand its context and quality. The newly supported device types include:

* Consumer Medical Device: for over-the-counter medical hardware like Continuous Glucose Monitors (CGMs) and blood pressure cuffs.
* Glasses: for smart glasses and other head-mounted optical devices.
* Hearables: for earbuds, headphones, and hearing aids with sensing capabilities.
* Fitness Machine: for stationary equipment like treadmills and indoor cycles, as well as outdoor equipment like bicycles.

This expansion ensures data is represented more accurately, allowing you to build more nuanced experiences based on the specific hardware used to record it.

## What's Next?

We encourage all developers to upgrade to the stable 1.1.0 Health Connect Jetpack library to take full advantage of these new features and improvements.

* Learn more in the official documentation and release notes.
* Provide feedback and report issues on our public issue tracker.

We are committed to the continued growth of the Health Connect platform. We can't wait to see the incredible experiences you build!
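As a quick illustration of why the source device matters to data readers, here is a minimal sketch that reads steps for the past day and inspects each record's device type. It assumes the app has already been granted the steps read permission and is calling from a coroutine; the logging tag is arbitrary.

```kotlin
// Minimal sketch: assumes the READ_STEPS permission has already been granted.
// Tag and function name are illustrative.
import android.content.Context
import android.util.Log
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

suspend fun logStepsWithSourceDevice(context: Context) {
    val client = HealthConnectClient.getOrCreate(context)
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                Instant.now().minus(1, ChronoUnit.DAYS),
                Instant.now()
            )
        )
    )
    for (record in response.records) {
        // The device type in the record's metadata indicates what kind of hardware
        // produced the data (e.g. watch vs. phone), which helps you judge its quality.
        val deviceType = record.metadata.device?.type
        Log.d("HealthConnect", "steps=${record.count} deviceType=$deviceType")
    }
}
```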
android-developers.googleblog.com
November 3, 2025 at 8:13 PM
ML Kit’s Prompt API: Unlock Custom On-Device Gemini Nano Experiences
_Posted by Caren Chang, Developer Relations Engineer, Chengji Yan, Software Engineer, and Penny Li, Software Engineer_

AI is making it easier to create personalized app experiences that transform content into the right format for users. We previously enabled developers to integrate with Gemini Nano through ML Kit GenAI APIs tailored for specific use cases like summarization and image description. Today marks a major milestone for Android's on-device generative AI. We're announcing the Alpha release of the ML Kit GenAI Prompt API. This API allows you to send natural language and multimodal requests to Gemini Nano, addressing the demand for more control and flexibility when building with generative models. Partners like Kakao are already building with Prompt API, creating unique experiences with real-world impact. You can experiment with Prompt API's powerful features today with minimal code.

**Move beyond pre-built to custom on-device GenAI**

Prompt API moves beyond pre-built functionality to support custom, app-specific GenAI use cases, allowing you to create unique features with complex data transformation. Prompt API uses Gemini Nano on-device to process data locally, enabling offline capability and improved user privacy.

**Key use cases for Prompt API**

Prompt API allows for highly customized GenAI use cases. Here are some recommended examples:

* Image understanding: Analyzing photos for classification (e.g., creating a draft social media post or identifying tags such as "pets," "food," or "travel").
* Intelligent document scanning: Using a traditional ML model to extract text from a receipt, and then categorizing each item with Prompt API.
* Transforming data for the UI: Analyzing long-form content to create a short, engaging notification title.
* Content prompting: Suggesting topics for new journal entries based on a user's preference for themes.
* Content analysis: Classifying customer reviews into a positive, neutral, or negative category.
* Information extraction: Extracting important details about an upcoming event from an email thread.

**Implementation**

Prompt API lets you create custom prompts and set optional generation parameters with just a few lines of code:

```kotlin
Generation.getClient().generateContent(
    generateContentRequest(
        ImagePart(bitmapImage),
        TextPart(
            "Categorize this image as one of the following: car, motorcycle, " +
            "bike, scooter, other. Return only the category as the response."
        ),
    ) {
        // Optional parameters
        temperature = 0.2f
        topK = 10
        candidateCount = 1
        maxOutputTokens = 10
    },
)
```

For more detailed examples of implementing Prompt API, check out the official documentation and sample on GitHub.

**Gemini Nano, performance, and prototyping**

Prompt API currently performs best on the Pixel 10 device series, which runs the latest version of Gemini Nano (nano-v3). This version of Gemini Nano is built on the same architecture as Gemma 3n, the model we first shared with the open model community at I/O. The shared foundation between Gemma 3n and nano-v3 enables developers to more easily prototype features. For those without a Pixel 10 device, you can start experimenting with prompts today by prototyping with Gemma 3n locally or accessing it online through Google AI Studio. For the full list of devices that support GenAI APIs, refer to our device support documentation.

**Learn more**

Start implementing Prompt API in your Android apps today with guidance from our official documentation and the sample on GitHub.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
Kakao Mobility uses Gemini Nano on-device to reduce costs and boost call conversion by 45%
_Posted by Sa-ryong Kang and Caren Chang, Developer Relations Engineers_

Kakao Mobility is South Korea's leading mobility business, offering a range of transportation and delivery services, including taxi-hailing, navigation, bike and scooter sharing, parking, and parcel delivery, through its Kakao T app. The team at Kakao Mobility used Gemini Nano via ML Kit's GenAI Prompt API to offer parking assistance for its bike-sharing service and an improved address entry experience for its navigation and delivery services.

The Kakao T app serves over 30 million total users, and its bike-sharing service is one of its most popular offerings. But unfortunately, many users were improperly parking the bikes or scooters when not in use. This behavior led to an influx of parking violations and safety concerns, resulting in public complaints, fines, and towing. These issues began to negatively affect public perception of both Kakao Mobility and its bike-sharing services.

"By leveraging the ML Kit GenAI Prompt API and Gemini Nano, we were able to quickly implement features that improve social value without compromising user experience. Kakao Mobility will continue to actively adopt on-device AI to provide safer and more convenient mobility services." — Wisuk Ryu, Head of Client Development Div

To address these concerns, the team initially designed an image recognition model to notify users if their bike or scooter was parked correctly according to local laws and safety standards. Running this model through the cloud would have incurred significant server costs. In addition, the users' uploaded photos contained information about their parking location, so the team wanted to avoid any privacy or security concerns. The team needed to find a more reliable and cost-effective solution.

The team also wanted to improve the entity extraction experience for the parcel delivery service within the Kakao T app. Previously, users were able to easily order parcel delivery in a chat interface, but drivers needed to enter the address into an order form manually to initiate the delivery order, a process which was cumbersome and prone to human error. The team sought to streamline this process, making order forms faster and less frustrating for delivery personnel.

**Enhancing the user experience with ML Kit's GenAI Prompt API**

The team tested and compared cloud-based Gemini models against Gemini Nano, accessed via ML Kit's GenAI Prompt API. "After reviewing privacy, cost, accuracy, and response speed, ML Kit's GenAI Prompt API was clearly the optimal choice," said Jinwoo Park, Android application developer at Kakao Mobility.

To address the issue of improperly parked bikes and scooters, the team used Gemini Nano's multimodal capability via the ML Kit GenAI API SDK to detect when a bike or scooter violates local regulations by being parked on yellow tactile paving. With a carefully crafted prompt, they evaluated more than 200 labeled parking photos while continually refining the inputs. This evaluation, measured with well-known metrics like accuracy, precision, recall, and the F1 score, ensured the feature met production-level quality and reliability standards. Now users can take a photo of their parked bike or scooter, and the app will tell them whether it is parked properly, or provide guidance if it is not. The entire process happens in seconds on the device, protecting the user's location and information.
To create a streamlined entity extraction feature, the team again used ML Kit's GenAI Prompt API to process users' delivery orders written in natural language. Traditional machine learning would have required a large training dataset and specialized machine learning expertise. Instead, they could simply start with a prompt like, "Extract the recipient's name, address, and phone number from the message." The team prepared around 200 high-quality evaluation examples and refined their prompt through many rounds of iteration to get the best results. The most effective technique was few-shot prompting, and the results were carefully analyzed to ensure the output contained minimal hallucinations.

"ML Kit's Prompt API reduces developer overhead while offering strong security and reliability on-device. It enables rapid prototyping, lowers infrastructure dependency, and incurs no additional cost. There is no reason not to recommend it." — Jinwoo Park, Android application developer at Kakao Mobility

**Delivering big results with ML Kit's GenAI Prompt API**

As a result, the entity extraction feature correctly identifies the necessary details of each order, even when multiple names and addresses are entered. To maximize the feature's reach and provide a robust fallback, the team also implemented a cloud-based path using Gemini Flash.

Implementing ML Kit's GenAI Prompt API has yielded significant cost savings for the Kakao Mobility team by shifting to on-device AI. While the bike parking analysis feature has not yet launched, the address entry improvement has already delivered excellent results:

* Order completion time for delivery orders has been reduced by 24%.
* The conversion rate has increased by 45% for new users and 6% for existing users.
* During peak seasons, AI-powered orders increase by over 200%.

"Small business owners in particular have shared very positive feedback, saying the feature has made their work much more efficient and significantly reduced stress," Wisuk added.

After the image recognition feature for bike and scooter parking launches, the Kakao Mobility team is eager to improve it further. Urban parking environments can be challenging, and the team is exploring ways to filter out unnecessary regions from images.

"ML Kit's GenAI Prompt API offers high-quality features without additional overhead," said Jinwoo. "This reduced developer effort, shortened overall development time, and allowed us to focus on prompt tuning for higher-quality results."

**Try ML Kit's GenAI Prompt API for yourself**

Build and deploy on-device AI in your app with ML Kit's GenAI Prompt API to harness the capabilities of Gemini Nano.
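As a starting point, here is a minimal sketch of a few-shot entity-extraction prompt sent through the same Prompt API surface shown in the Prompt API announcement above. The example message, JSON fields, and `userMessage` parameter are illustrative, not Kakao Mobility's production prompt.

```kotlin
// Illustrative sketch only: the few-shot example and JSON fields are made up,
// and the call shape mirrors the Prompt API sample in the announcement post above.
fun extractRecipient(userMessage: String) {
    Generation.getClient().generateContent(
        generateContentRequest(
            TextPart(
                """
                Extract the recipient's name, address, and phone number from the message.
                Return JSON with the fields "name", "address", and "phone".

                Message: "Please send the parcel to Jane Doe, 12 Example St, Springfield, 555-0100."
                Output: {"name": "Jane Doe", "address": "12 Example St, Springfield", "phone": "555-0100"}

                Message: "$userMessage"
                Output:
                """.trimIndent()
            ),
        ) {
            // Keep the output short and deterministic for form filling.
            temperature = 0.1f
            maxOutputTokens = 256
        },
    )
    // Consume the response as shown in the official Prompt API documentation and sample.
}
```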
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
redBus uses Gemini Flash via Firebase AI Logic to boost the length of customer reviews by 57%
_Posted by Thomas Ezan, Developer Relations Engineer_

As the world's largest online bus ticketing platform, redBus serves millions of travelers across India, Southeast Asia, and Latin America. The service is predominantly mobile-first, with over 90% of all bookings occurring through its app. However, this presents a significant challenge in gathering helpful feedback from a user base that speaks dozens of different languages. Typing reviews is inconvenient for many users, and a review written in Tamil, for instance, offers little value to a bus operator who only speaks Hindi.

To improve the quality and volume of user feedback, developers at redBus used Gemini Flash, a low-latency Google AI model, to instantly transcribe and translate user voice recordings. To connect this powerful AI to their app without dealing with complex backend work, they used Firebase AI Logic. This new feature removed language barriers and simplified the review process, leading to a significant increase in user engagement and feedback quality.

**Simplifying user feedback with a voice-first approach**

The previous in-app review experience on redBus was text-based, which presented some key challenges. At redBus's scale, reliable user reviews are critical: they build trust for travelers and give operators actionable insights. While the existing text-based system served the company well, customers often struggled to articulate their full experience, so user feedback lacked the detail and volume needed to deliver maximum value to both travelers and operators. What's more, language barriers limited the usefulness of reviews, as reviews in one language were not helpful for users or bus operators who spoke another. "Our primary motivation was to leverage the expressive power of voice and overcome the language barrier to capture more authentic and detailed user feedback," said Abhi Muktheeswarar, a senior tech lead in mobile engineering at redBus.

The developer team wanted to create a frictionless, voice-first experience, so they designed a new flow where users could simply speak their review in their native language. To encourage adoption, the team implemented a prominent, animated mic button paired with the text: "Your voice matters, share your review in your own language." This message appears in the user's native language, consistent with their app language settings.

Using Gemini Flash, the application processes the user's voice recording. It first transcribes the speech into text, then translates it into English, and finally analyzes the sentiment to automatically generate a star rating and predict relevant tags based on the review content. It then creates a concise summary and autofills the review form fields with the generated content.

Developers chose Firebase AI Logic because it allowed them to build and ship the feature without help from the backend team, dramatically reducing development time and complexity. "The Firebase AI SDK was a key differentiator because it was the only solution that empowered our frontend team to build and ship the feature independently," Abhi explained. This approach enabled the team to go from concept to launch in just 30 days.

During implementation, the engineers used structured output, enabling the Gemini Flash model to return well-formed JSON responses, including the transcription, translation, sentiment analysis, and star rating, making it easy to then populate the UI. This ensured a seamless user experience.
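To make that structured-output pattern concrete, here is a minimal sketch using the Firebase AI Logic Kotlin SDK. The model name, prompt, and JSON fields are illustrative assumptions rather than redBus's actual configuration, and for brevity the sketch passes review text instead of the audio recording the production feature sends.

```kotlin
// Illustrative sketch: model name, prompt and JSON fields are assumptions, not
// redBus's production setup. The real feature sends the voice recording as audio
// input; this sketch uses text for brevity.
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.generationConfig

suspend fun structuredReview(reviewText: String): String? {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        "gemini-2.5-flash",
        generationConfig = generationConfig {
            responseMimeType = "application/json" // ask the model for well-formed JSON
        }
    )
    val response = model.generateContent(
        "Translate this bus review to English, then return JSON with the fields " +
        "\"translation\", \"summary\", \"sentiment\" and \"starRating\" (1-5): $reviewText"
    )
    return response.text // JSON string used to prefill the review form
}
```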
Users are then shown both the original transcribed text in their own language and the translated, summarized version in English. Most importantly, the user is given full control to review and edit all AI-generated text and change the star rating before submitting the review. They can even speak again to add more content.

**Driving engagement and capturing deeper user insights**

The AI-powered voice review feature had a significant positive impact on user engagement. By enabling users to speak in their native language, redBus saw a 57% increase in review length and a notable increase in the overall volume of reviews. The new feature successfully engaged a segment of the user base that was previously hesitant to type a review.

Since implementation, user feedback has been overwhelmingly positive: customers appreciate the accuracy of the transcription and translation, and find the AI-generated summaries to be a concise overview of their longer, more detailed reviews.

Gemini Flash, although hosted in the cloud, delivered a highly responsive user experience. "A common observation from our partners and stakeholders has been that the level of responsiveness from our new AI feature is so fast and seamless that it feels like the AI is running directly on the device," said Abhi. "This is a testament to the low latency of the Gemini Flash model, which has been a key factor in its success."

**An easier way to build with AI**

For the redBus team, the project demonstrated how Firebase AI Logic and Gemini Flash empower mobile developers to build features that would otherwise require backend implementation. This reduces dependency on server-side changes and allows developers to iterate quickly and independently.

Following the success of the voice review feature, the team at redBus is exploring other use cases for on-device generative AI to further enhance their app. They also plan to use Google AI Studio to test and iterate on prompts moving forward. For Abhi, the lesson is clear: "It's no longer about complex backend setups," he said. "It's about crafting the right prompt to build the next innovative feature that directly enhances the user experience."

**Get started**

Learn more about how you can use Gemini and Firebase AI Logic to build generative AI features for your own app.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
New agentic experiences for Android Studio, new AI APIs, the first Android XR device and more, in our Fall episode of The Android Show
_Posted by Matthew McCullough, VP of Product Management, Android Developer_

We're in an important moment where AI changes everything, from how we work to the expectations that users have for your apps, and our goal on Android is to transform this AI evolution into opportunities for you and your users. Today in our Fall episode of The Android Show, we unpacked a set of new updates aimed at delivering the highest return on investment for building on the Android platform. From new agentic experiences for Gemini in Android Studio to a brand new on-device AI API to the first Android XR device, there's a lot to cover - let's dive in!

## Build your own custom GenAI features with the new Prompt API

On Android, we offer AI models on-device or in the cloud. Today, we're excited to give you full flexibility to shape the output of the Gemini Nano model by passing in any prompt you can imagine with the new Prompt API, now in Alpha. For flagship Android devices, Gemini Nano lets you build efficient on-device options where the users' data never leaves their device. At I/O this May, we launched our on-device GenAI APIs using the Gemini Nano model, making common tasks easier with simple APIs for tasks like summarization, proofreading and image description.

Kakao Mobility used the Prompt API to transform their parcel delivery service, replacing a slow, manual process, where users had to copy and paste details into a form, with a simple message requesting a delivery; the API automatically extracts all the necessary information. This single feature reduced order completion time by 24% and boosted new user conversion by an incredible 45%.

## Tap into Nano Banana and Imagen using the Firebase SDK

When you want to add cutting-edge capabilities across the entire fleet of Android devices, our cloud-based AI solutions with Firebase AI Logic are a great fit. The excitement for models like Gemini 2.5 Flash Image (a.k.a. Nano Banana) and Imagen has been incredible. Your users can now generate and edit images using Nano Banana, and for finer control, like selecting and transforming specific parts of an image, they can use the new mask-based editing feature that leverages the Imagen model. See our blog post to learn more.

And beyond image generation, you can also use Gemini's multimodal capabilities to process text, audio and image input. redBus, for example, revolutionized their user reviews using Gemini Flash via Firebase AI Logic to make giving feedback easier, more inclusive, and more reliable. The old problem? Short, low-quality text reviews. The new solution? Users can now leave reviews using voice input in their native languages. From the audio, Gemini Flash generates a structured text response, enabling longer, richer and more reliable user reviews. It's a win for everyone: travelers, operators, and developers!

## Helping you be more productive, with agentic experiences in Android Studio

Helping you be more productive is our goal with Gemini in Android Studio, and why we're infusing AI across our tooling. Developers like Pocket FM have seen an impressive development time savings of 50%. With the recent launch of Agent Mode, you can describe a complex goal in natural language and (with your permission) the agent plans and executes changes on multiple files across your project. The agent's answers are now grounded in the most modern development practices, and can even cross-reference our latest documentation in real time.
We demoed new agentic experiences, such as updates to Agent Mode, the ability to upgrade APIs on your behalf, and the new project assistant, and we announced that you'll be able to bring any LLM of your choice to power the AI functionality inside Android Studio, giving you more flexibility and choice in how you incorporate AI into your workflow. And for the newest stable features, such as Back Up and Sync, make sure to download the latest stable version of Android Studio.

## Elevating AI-assisted Android development, and improving LLMs with an Android benchmark

Our goal is to make it easier for Android developers to build great experiences. With more code being written by AI, developers have been asking for models that know more about Android development. We want to help developers be more productive, and that's why we're building a new task set for LLMs against a range of common Android development areas. The goal is to provide LLM makers with a benchmark, a north star of high-quality Android development, so Android developers have a range of helpful models to choose from for AI assistance.

To reflect the challenges of Android development, the benchmark is composed of real-world problems sourced from public GitHub Android repositories. Each evaluation attempts to have an LLM recreate a pull request, which is then verified using human-authored tests. This allows us to measure a model's ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day. We're finalizing the task set we'll be testing against LLMs, and will share the results publicly in the coming months. We're looking forward to seeing how this shapes AI-assisted Android development, and the additional flexibility and choice it gives you to build on Android.

## The first Android XR device: Samsung Galaxy XR

Last week saw the launch of the first in a new wave of Android XR devices: the Galaxy XR, in partnership with Samsung. Android XR devices are built entirely in the Gemini era, creating a major new platform opportunity for your app. And because Android XR is built on top of familiar Android frameworks, when you build adaptively, you're already building for XR. To unlock the full potential of Android XR features, you can use the Jetpack XR SDK.

The Calm team provides a perfect example of this in action. They successfully transformed their mobile app into an immersive spatial experience, building their first functional XR menus on the first day and a core XR experience in just two weeks by leveraging their existing Android codebase and the Jetpack XR SDK. You can read more about Android XR from our Spotlight Week last week.

## Jetpack Navigation 3 is in Beta

The new Jetpack Navigation 3 library is now in beta! Instead of embedding behavior into the library itself, we're providing how-to recipes with good defaults (see the Nav3 recipes on GitHub). Out of the box, it's fully customizable, has animation support and is adaptive. Nav3 was built from the ground up with Compose State as a fundamental building block. This means it fully buys into the declarative programming model: you change the state you own and Nav3 reacts to that new state.

On the Compose front, we've been working on making it faster and easier for you to build UI, covering the features you told us you needed from Views, while at the same time ensuring that Compose is performant.
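Returning to Nav3's state-driven model: here is a minimal sketch based on the public Nav3 recipes. The keys and screen contents are illustrative, and exact signatures may differ slightly between beta releases.

```kotlin
// Minimal sketch based on the public Nav3 recipes; exact APIs may differ between
// beta releases. Home, Detail and the screen contents are illustrative.
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.navigation3.runtime.NavKey
import androidx.navigation3.runtime.entry
import androidx.navigation3.runtime.entryProvider
import androidx.navigation3.runtime.rememberNavBackStack
import androidx.navigation3.ui.NavDisplay
import kotlinx.serialization.Serializable

@Serializable data object Home : NavKey
@Serializable data class Detail(val id: String) : NavKey

@Composable
fun AppNav() {
    // The back stack is plain Compose state that you own...
    val backStack = rememberNavBackStack(Home)
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() }, // ...and popping is just mutating it
        entryProvider = entryProvider {
            entry<Home> {
                Button(onClick = { backStack.add(Detail(id = "42")) }) {
                    Text("Open detail")
                }
            }
            entry<Detail> { key -> Text("Detail ${key.id}") }
        }
    )
}
```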
## Accelerate your business success on Google Play

With AI speeding up app development, Google Play is streamlining your workflow in Play Console so that your business growth can keep up with your code. The reimagined, goal-oriented app dashboard puts actionable metrics front and center. Plus, new capabilities are making your day-to-day operations faster, smarter, and more efficient: from pre-release testing with deep link validation to AI-powered analytics summaries and app string localization. These updates are just the beginning. Check out the full list of announcements to get the latest from Play.

## Watch the Fall episode of The Android Show

Thank you for tuning in to our Fall episode of The Android Show. We're excited to continue building great things together, and this show is an important part of our conversation with you. We'd love to hear your ideas for our next episode, so please reach out on X or LinkedIn. A special thanks to my co-hosts, Rebecca Gutteridge and Adetunji Dahunsi, for helping us share the latest updates.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
New tools and programs to accelerate your success on Google Play
_Posted by Paul Feng, VP of Product Management, Google Play_ Last month, we shared new updates showcasing our evolving vision for Google Play: a place where people can discover the content and experiences they love and where you can build and grow sustainable businesses. Our commitment to your success is at the heart of our continued investments. Today, we're excited to introduce a new bundle of tools and programs designed to enhance your productivity and accelerate your growth. From simplifying technical integration and localization to offering deeper insights and creating powerful new ways to engage your audience, these features will help streamline your development lifecycle. Watch our latest updates in The Android Show segment below or continue reading. You can also catch up on our latest Android developments by watching the full show. **Streamline your development and operations with new tools** We're launching new tools to remove friction from tedious development tasks by helping you validate deep links and scale to new markets with Gemini-powered AI. **Simplify deep link validation with a built-in emulator** Troubleshooting deep links can be complex and time-consuming, so we’re excited to launch a new, streamlined experience that allows you to instantly validate your deep links directly within Play Console. This means you can use a built-in emulator to test a deep link and immediately see the expected user experience on the spot, just as if someone clicked the URL on a real device. _Instantly validate your deep links using the new built-in emulator_ **Reach a global audience with Gemini-powered localization** We’re making it easier to bring your app or game to a global audience by simplifying localization. With our latest translation service, we've integrated the power of Gemini into Play Console to offer high-quality translations for your app strings, at no cost. This service automatically translates new app bundles into your selected languages, accelerating your title to new markets. Most importantly, you always remain in full control, with the ability to preview the translated app with a built-in emulator and easily edit or disable translations. **Drive growth and engagement with AI-powered insights and the You tab** We're launching new ways to help you reach and retain users, including AI-powered insights and the new You tab for re-engagement. **Get faster insights with automated chart summaries** To help you spend less time interpreting data and more time acting on key insights, a new Gemini-powered feature on the Statistics page automatically generates descriptions of your charts. These summaries help you quickly understand key trends and events that might be affecting your metrics. For developers who use a screen reader, this feature also provides access to reporting in a way you haven't had before. _Get faster insights with new Gemini-powered chart summaries_ **Access objective-related metrics and actionable advice for audience growth** Earlier this year, we launched objective-based overview pages in Play Console to consolidate your key metrics, app performance, and actionable steps across essential workflows. With dedicated pages for Test & Release, Monitor & Improve, and Monetize with Play already live, we're excited to announce the full completion of this toolkit. The new Grow users overview page is now available, giving you a comprehensive, tailored view to help you acquire new users and expand your reach.
_ _Track your key audience growth metrics on the new "Grow users" overview page_ _ **Boost re-engagement with the You tab** Last month, we launched You tab, a brand new, personalized destination on the Play Store. This is where users can discover and re-engage with content from their favorite apps and games with curated rewards, subscriptions, recommendations, and updates all in one place. App developers can take advantage of this personalized destination by integrating with Engage SDK. This integration allows you to help people pick up right where they left off—like resuming a movie or playlist— or get personalized recommendations, all while seamlessly guiding them back into your app. Game developers can use this surface to showcase timely in-game events, content updates, and special offers, making it easy for players to jump right back into the action. Promotional content, YouTube video listings, and Play Points coupons are now open to all game developers for creating a rich presence on the You tab. The availability of these powerful re-engagement tools is part of our broader commitment to game quality through the new Google Play Games Level Up program. Learn more about the program's guidelines here. _Showcase in-game events and offers on the new You tab_ **Optimize your monetization strategy and track performance ** We're launching powerful new ways to configure your one-time products and track the full impact of your Play Points promotions with a new, consolidated reporting page. **Simplify catalog management for one-time products** Earlier this year, we introduced more flexible ways to configure one-time purchases. You can now offer your in-app products as limited-time rentals, and sign up for our early access program to get started with pre-orders. We've also launched a new taxonomy, building on our existing subscription model, to help you manage your catalog more efficiently. This new model unlocks significant flexibility to help you reach a wider audience and cater to different user preferences by letting you offer the same item in multiple ways. For example, you can sell an item in one country and rent it in another—helping Play better surface relevant offerings to users. Explore these new capabilities today in Play Console. _Manage your catalog more efficiently with new ways to configure one-time products_ ** ** Understand the impact and performance of Play Points promotions With Play Points recently opened to all eligible titles, you can now better understand the impact of your promotions. The new Play Points page in Play Console lets you see the total revenue, buyers and acquisitions that all Play Points promotions have generated. This reporting covers both your developer-created offers, as well as new reporting for Google-funded Play Points promotions, which includes direct and post-promotion performance metrics. New reporting for Play Points promotions The features announced today are more than just updates; they are the building blocks of a powerful growth engine for your business. We hope you start exploring these new capabilities today and continue sharing feedback so we can build the tools you need to build a thriving, sustainable business on Google Play.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
How Calm Reimagined Mindfulness for Android XR
_Posted by Stevan Silva , Sr. Product Manager, Android XR_ Calm is a leading mental health and wellness company with over 180 million downloads. When they started their development for Android XR, their core engineering team was able to build their first functional XR orbiter menus on Day 1 and a core experience in just two weeks. This demonstrates that building for XR can be an extension of existing Android development work, not something that has to be started from scratch. As a company dedicated to helping users sleep better, stress less, and live more mindfully, their extensive library has made Calm a trusted source for well-being content on Android. With the introduction of the Android XR platform, the Calm team saw an opportunity to not just optimize their existing Android app, but to truly create the next generation of immersive experiences. We sat down with Kristen Coke, Lead Product Manager, and Jamie Martini, Sr. Manager of Engineering at Calm, to dive into their journey building for Android XR and learn how other developers can follow their lead. Q: What was the vision for the Calm experience on Android XR, and how does it advance your mission? A (Kristen Coke, Lead Product Manager): Our mission is to support everyone on every step of their mental health journey. XR allows us to expand how people engage with our mindfulness content, creating an experience that wasn’t just transportive but transformative. If I had to describe it in one sentence, Calm on Android XR reimagines mindfulness for the world around you, turning any room into a fully immersive, multisensory meditation experience. We wanted to create a version of Calm that couldn’t exist anywhere else, a serene and emotionally intelligent sanctuary that users don't just want to visit, but will return to again and again. Q: For developers who might think building for XR is a massive undertaking, what was your initial approach to bringing your existing Android app over? A (Jamie Martini, Sr. Manager of Engineering): Our main goal was to adapt our Android app for XR and honestly, the process felt easy and seamless. We already use Jetpack Compose extensively for our mobile app, so expanding that expertise into XR was the natural choice. It felt like extending our Android development, not starting from scratch. We were able to reuse a lot of our existing codebase, including our backend, media playback, and other core components, which dramatically cut down on the initial work. The Android XR design guides provided valuable context throughout the process, helping both our design and development teams shape Calm’s mobile-first UX into something natural and intuitive for a spatial experience. Q: You noted the process felt seamless. How quickly was your team able to start building and iterating on the core XR experience? A (Jamie Martini, Sr. Manager of Engineering): We were productive right away, building our first orbiter menus on day one and a core XR Calm experience in about two weeks. The ability to apply our existing Android and Jetpack experience directly to a spatial environment gave us a massive head start, making the time-to-first-feature incredibly fast. Q: Could you tell us about what you built to translate the Calm experience into this new spatial environment? A (Jamie Martini, Sr. Manager of Engineering): We wanted to take full advantage of the immersive canvas to rethink how users engage with our content. Two of the key features we evolved were the Immersive Breathe Bubble and the Immersive Scene Experiences. 
The Breathe Bubble is our beloved breathwork experience, but brought into 3D. It’s a softly pulsing orb that anchors users to their breath with full environmental immersion. And with our Immersive Scene Experiences, users can choose from a curated selection of ambient environments designed to gently wrap around them and fade into their physical environment. This was a fantastic way to take a proven 2D concept (the mobile app’s customizable background scenes) and transform it for the spatial environment. We didn't build new experiences from scratch; we simply evolved core, proven features to take advantage of the immersive canvas. ** ** Q: What were the keys to building a visually compelling experience that feels native to the Android XR platform? ** ** A (Kristen Coke, Lead Product Manager): Building for a human-scale, spatial environment required us to update our creative workflow. ** ** We started with concept art to establish our direction, which we then translated into 3D models using a human-scale reference to ensure natural proportions and comfort for the user. ** ** Then, we consistently tested the assets directly in a headset to fine-tune scale, lighting, and atmosphere. For developers who may not have a physical device, the Android XR emulator is a helpful alternative for testing and debugging. ** ** We quickly realized that in a multisensory environment, restraint was incredibly powerful. We let the existing content (the narration, the audio) amplify the environment, rather than letting the novelty of the 3D space distract from the mindfulness core. ** ** Q: How would you describe the learning curve for other developers interested in building for XR? Do you have any advice? ** ** A (Jamie Martini, Sr. Manager of Engineering) : This project was the first step into immersive platforms for our Android engineering team, and we were pleasantly surprised. The APIs were very easy to learn and use and felt consistent with other Jetpack libraries. ** ** My advice to other developers? Begin by integrating the Jetpack XR APIs into your existing Android app and reusing as much of your existing code as possible. That is the quickest way to get a functional prototype. A (Kristen Coke, Lead Product Manager): Think as big as possible. Android XR gave us a whole new world to build our app within. Teams should ask themselves: What is the biggest, boldest version of your experience that you could possibly build? This is your opportunity to finally put into action what you’ve always wanted to do, because now, you have the platform that can make it real. Building the next generation of spatial experiences The work the Calm team has done showcases how building on the Android XR platform can be a natural extension of your existing Android expertise. By leveraging the Jetpack XR SDKs, Calm quickly evolved their core mobile features into a stunning spatial experience. If you’re ready to get started, you can find all the resources you need at developer.android.com/xr. Head over there to download the latest SDK, explore our documentation, and start building today.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
Introducing Cahier: A new Android GitHub sample for large screen productivity and creativity
_Posted by Chris Assigbe, Android Developer Relations Engineer_ Ink API is now in beta and is ready to be integrated into your app. This milestone was made possible by valuable developer feedback, leading to continuous improvements in the API's performance, stability, and visual quality. Google apps such as Google Docs, Pixel Studio, Google Photos, Chrome PDF, and YouTube Effect Maker, as well as unique Android features such as Circle to Search, all use the latest APIs. To mark this milestone, we're excited to announce the launch of Cahier, a comprehensive note-taking app sample optimized for Android devices of all sizes, particularly tablets and foldable phones. ## What is Cahier? Cahier ("notebook" in French) is a sample app designed to demonstrate how you can build an application that enables users to capture and organize their thoughts by combining text, drawings, and images. The sample can serve as the go-to reference for enhancing user productivity and creativity on large screens. It showcases best practices for building such experiences, accelerating developer understanding and adoption of related powerful APIs and techniques. This post walks you through the core features of Cahier, key APIs, and the architectural decisions that make the sample a great reference for your own apps. Key features demonstrated in the sample include: * Versatile note creation: Shows how to implement a flexible content creation system that supports multiple formats within a single note, including text, freeform drawings, and image attachments. * Creative inking tools: Implements a high-performance, low-latency drawing experience using the Ink API. The sample provides a practical example of integrating various brushes, a color picker, undo/redo functionality, and an eraser tool. * Fluid content integration with drag and drop: Demonstrates how to handle both incoming and outgoing content using drag and drop. This includes accepting images dropped from other apps and enabling users to drag content out of your app for seamless sharing. * Note organization: Mark notes as favorites for quick access. Filter the view to stay organized. * Offline-first architecture: Built with an offline-first architecture using Room, ensuring all data is saved locally and the app remains fully functional without an internet connection. * Powerful multi-window and multi-instance support: Showcases how to support multi-instance, allowing your app to be launched in multiple windows so users can work on different notes side by side, enhancing productivity and creativity on large screens. * Adaptive UI for all screens: The user interface seamlessly adapts to different screen sizes and orientations using ListDetailPaneScaffold and NavigationSuiteScaffold to provide an optimized user experience on phones, tablets, and foldables. * Deep system integration: Provides a guide on how to make your app the default note-taking app on Android 14 and higher by responding to system-wide Notes intents, enabling quick content capture from various system entry points. ## Built for productivity and creativity on large screens For the initial launch, we're centering the announcement on a few core features that make Cahier a key learning resource for both productivity and creativity use cases. #### A foundation of adaptivity Cahier is built to be adaptive from the ground up.
The sample utilizes the material3-adaptive library, specifically ListDetailPaneScaffold and NavigationSuiteScaffold, to seamlessly adapt the app layout to various screen sizes and orientations. This is a crucial element for a modern Android app, and Cahier provides a clear example of how to implement it effectively. _Cahier's adaptive UI, built with the Material 3 Adaptive library._ ### Showcasing key APIs and integrations The sample is focused on showcasing powerful productivity APIs that you can leverage in your own applications, including: * **Ink API** * **Notes role** * **Multi-instance, multi-window, and desktop windowing** * **Drag and drop** ## A closer look at key APIs Let's dive deeper into two of the cornerstone APIs that Cahier integrates to deliver a first-class note-taking experience. ### Creating natural inking experiences with the Ink API Stylus input transforms large screen devices into digital notebooks and sketchbooks. To help you build fluid and natural inking experiences, we’ve made the Ink API a cornerstone of the sample. Ink API makes it easy to create, render, and manipulate beautiful ink strokes with best-in-class low latency. Ink API offers a modular architecture, so you can tailor it to your app's specific stack and needs. The API modules include: * Authoring modules (Compose and Views): Handle real-time inking input to create smooth strokes with the lowest latency a device can provide. In DrawingSurface, Cahier uses the newly introduced InProgressStrokes composable to handle real-time stylus or touch input. This module is responsible for capturing pointer events and rendering wet ink strokes with the lowest possible latency. * Strokes module: Represents the ink input and its visual representation. When a user finishes drawing a line, the onStrokesFinished callback provides a finalized (dry) Stroke object to the app. This immutable object, representing the completed ink stroke, is then managed in DrawingCanvasViewModel. * Rendering module: Efficiently displays ink strokes, allowing them to be combined with Jetpack Compose or Android views. To display both existing and newly dried strokes, Cahier uses CanvasStrokeRenderer in DrawingSurface for active drawing and in DrawingDetailPanePreview for showing a static preview of the note. This module efficiently draws the Stroke objects onto a Canvas. * Brush modules (Compose and Views): Provide a declarative way to define the visual style of strokes. Recent updates (since the alpha03 release) include a new dashed-line brush, particularly useful for features like lasso selection. DrawingCanvasViewModel holds the state for the currentBrush. A toolbox in DrawingCanvas allows users to select different brush families (like StockBrushes.pressurePen() or StockBrushes.highlighter()) and change colors. The ViewModel updates the Brush object, which is then used by the InProgressStrokes composable for new strokes. * Geometry modules (Compose and Views): Support manipulating and analyzing strokes for features like erasing and selecting. The eraser tool within the toolbox and functionality in DrawingCanvasViewModel rely on the geometry module. When the eraser is active, it creates a MutableParallelogram around the path of the user's gesture. The eraser then checks for intersections between the shape and bounding boxes of existing strokes to determine which strokes to erase, making the eraser feel intuitive and precise.
* Storage module: Provides efficient serialization and deserialization capabilities for ink data, leading to significant disk and network size savings. To save drawings, Cahier persists the Stroke objects in its Room database. In Converters, the sample uses the storage module’s encode function to serialize the StrokeInputBatch (the raw point data) into a ByteArray. The byte array, along with brush properties, is saved as a JSON string. The decode function is used to reconstruct the strokes when a note is loaded. Beyond these core modules, recent updates have expanded the Ink API's capabilities: * New experimental APIs for custom BrushFamily objects empower developers to create creative and unique brush types, providing the possibilities for tools like Pencil and Laser Pointer brushes. Cahier leverages custom brushes, including the unique music brush showcased below, to illustrate advanced creative possibilities. Rainbow laser created with Ink API's custom brushes. Music brush created with Ink API's custom brushes. * Native Jetpack Compose interoperability modules streamline the integration of inking functionalities directly within your Compose UIs for a more idiomatic and efficient development experience. Ink API offers several advantages that make it the ideal choice for productivity and creativity apps over a custom implementation: * Ease of use: Ink API abstracts away the complexities of graphics and geometry, allowing you to focus on Cahier's core features. * Performance: Built-in low latency support and optimized rendering ensure a smooth and responsive inking experience. * Flexibility: The modular design allows you to pick and choose the components needed, which enables seamless integration of the Ink API into Cahier's architecture. #### Ink API has already been adopted across many Google apps, including for markup in Docs and for Circle to Search as well as partner apps like Orion Notes, and PDF Scanner. “Ink API was our first choice for Circle-to-Search (CtS). Utilizing their extensive documentation, integrating the Ink API was a breeze, allowing us to reach our first working prototype w/in just one week. Ink's custom brush texture and animation support allowed us to quickly iterate on the stroke design.” - Jordan Komoda, Software Engineer - Google. ### Becoming the default notes app with notes role Note-taking is a core capability that enhances user productivity on large screen devices. With the notes role feature, users can access  your compatible apps from the lock screen or while other apps are running. This feature identifies and sets system wide default note-taking apps and grants them permission to be launched for capturing content. #### Implementation in Cahier Implementing the notes role involves a few key steps, all demonstrated in the sample: 1. Manifest declaration: First, the app must declare its capability to handle note-taking intents. In AndroidManifest.xml, Cahier includes an <intent-filter> for the android.intent.action.CREATE_NOTE action. This signals to the system that the app is a potential candidate for the notes role. 2. Checking role status: SettingsViewModel uses Android's RoleManager to determine the current status. SettingsViewModel checks whether the notes role is available on the device (isRoleAvailable) and whether Cahier currently holds that role (isRoleHeld). This state is exposed to the UI using Kotlin flows. 3. Requesting the role: In the Settings.kt file, a Button is displayed to the user if the role is available but not held. 
When clicked, the button calls the requestNotesRole function in the ViewModel. The function creates an intent to open the default app settings screen where the user can select Cahier. The process is managed using the rememberLauncherForActivityResult API, which handles launching the intent and receiving the result. 4. Updating the UI: After the user returns from the settings screen, the ActivityResultLauncher callback triggers a function in the ViewModel to update the role status, ensuring the UI accurately reflects whether the app is now the default. Learn how to integrate the notes role in your app in our create a note-taking app guide. Cahier launched in a floating window as the default note-taking app on a Lenovo tablet. #### A major step forward: Lenovo enables notes role We're thrilled to announce a major step forward for large screen Android productivity: Lenovo has enabled support for Notes Role on tablets running Android 15 and higher! With this update, you can now update your note-taking apps to allow users with compatible Lenovo devices to set them as default, granting seamless access from the lock screen and unlocking system level content capture features. This commitment from a leading OEM demonstrates the growing importance of the notes role in delivering a truly integrated and productive user experience on Android. ### Multi-instance, multi-windowing, and desktop windowing Productivity on a large screen is all about managing information and workflows efficiently. That's why Cahier is built to fully embrace Android's advanced windowing capabilities, providing a flexible workspace that adapts to user needs. The app supports: * Multi-windowing: The fundamental ability to run alongside another app in split-screen or free-form mode. This is essential for tasks like referencing a web page while taking notes in Cahier. * Multi-instance: This is where true multitasking shines. Cahier allows users to open multiple, independent windows of the app simultaneously. Imagine comparing two different notes side by side or referencing a text note in one window while working on a drawing in another. Cahier demonstrates how to manage these separate instances, each with its own state, turning your app into a powerful, multifaceted tool. * Desktop windowing: When connected to an external display, Android desktop mode transforms a tablet or foldable into a workstation. Because Cahier is built with an adaptive UI and supports multi-instance, the app performs beautifully in this environment. Users can open, resize, and position multiple Cahier windows just like on a traditional desktop, enabling complex workflows that were previously out of reach on mobile devices. Cahier running in desktop window mode on Pixel Tablet. Here’s how we implemented these features in Cahier: To enable multi-instance, we first needed to signal to the system that the app supports being launched multiple times by adding the PROPERTY_SUPPORTS_MULTI_INSTANCE_SYSTEM_UI property to MainActivity ‘s declaration in AndroidManifest: <activity     android:name="com.example.cahier.MainActivity"     android:exported="true"     android:label="@string/app_name"     android:theme="@style/Theme.MyApplication"     android:showWhenLocked="true"     android:turnScreenOn="true"     android:resizeableActivity="true"     android:launchMode="singleInstancePerTask">     <property         android:name="android.window.PROPERTY_SUPPORTS_MULTI_INSTANCE_SYSTEM_UI"         android:value="true"/>     ... 
</activity> Next, we implemented the logic to launch a new instance of the app. In CahierHomeScreen.kt, when a user opts to open a note in a new window, we create a new Intent with specific flags that instruct the system on how to handle the new activity launch. The combination of FLAG_ACTIVITY_NEW_TASK, FLAG_ACTIVITY_MULTIPLE_TASK, and FLAG_ACTIVITY_LAUNCH_ADJACENT ensures the note opens in a new, separate window alongside the existing one. fun openNewWindow(activity: Activity?, note: Note) {     val intent = Intent(activity, MainActivity::class.java)     intent.putExtra(AppArgs.NOTE_TYPE_KEY, note.type)     intent.putExtra(AppArgs.NOTE_ID_KEY, note.id)     intent.flags = Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_MULTIPLE_TASK or         Intent.FLAG_ACTIVITY_LAUNCH_ADJACENT     activity?.startActivity(intent) } To support multi-window mode, we needed to signal to the system that the app supports resizability by setting the Manifest’s <activity> or <application> element. <activity     android:name="com.example.cahier.MainActivity"     android:resizeableActivity="true"     ...> </activity> The UI itself being built with the Material 3 adaptive library enables it to adapt seamlessly in multi-window scenarios like Android’s split screen mode. To enhance user experience, we added support for drag and drop. See below how we implemented this in Cahier. ### Drag and drop A truly productive or creative app doesn’t function in isolation; it interacts seamlessly with the rest of the device's ecosystem. Drag and drop is a cornerstone of this interaction, especially on large screens where users are often working across multiple app windows. Cahier fully embraces this by implementing intuitive drag and drop functionality for both adding and sharing content. * Effortless Importing: Users can drag images from other applications—like a web browser, photo gallery, or file manager—and drop them directly onto a note canvas. For this, Cahier uses the dragAndDropTarget modifier to define a drop zone, check for compatible content (like image/*), and process the incoming URI. * Simple sharing: Content inside Cahier is just as easy to share as content from other apps. Users can long-press an image within a text note, or long-press the entire canvas of a drawing note and image composite, and drag it out to another application. #### Technical deep dive: Dragging from the drawing canvas Implementing the drag gesture on the drawing canvas presents a unique challenge. In our DrawingSurface, the composables that handle live drawing input (the Ink API's InProgressStrokes) and the Box that detects the long-press gesture to initiate a drag are sibling composables. By default, the Jetpack Compose pointer input system is designed so that just one sibling composable —the first one in declaration order that overlaps the touch location—receives the event. In Cahier’s case, we want our drag-and-drop input handling logic to have a chance to run and potentially consume inputs before the InProgressStrokes composable uses all unconsumed input for drawing and then consumes that input. If we don’t arrange things in the right order, our Box won’t detect the long-press gesture to start a drag, or InProgressStrokes won’t receive the input to draw. To solve this, we created a custom pointerInputWithSiblingFallthrough modifier, and we put our Box using that modifier before InProgressStrokes in the composable code. 
This utility is a thin wrapper around the standard pointerInput system but with one critical change: it overrides the sharePointerInputWithSiblings() function to return true. This tells the Compose framework to allow pointer events to pass through to sibling composables, even after being consumed. internal fun Modifier.pointerInputWithSiblingFallthrough(     pointerInputEventHandler: PointerInputEventHandler ) = this then PointerInputSiblingFallthroughElement(pointerInputEventHandler) private class PointerInputSiblingFallthroughModifierNode(     pointerInputEventHandler: PointerInputEventHandler ) : PointerInputModifierNode, DelegatingNode() {     var pointerInputEventHandler: PointerInputEventHandler         get() = delegateNode.pointerInputEventHandler         set(value) {             delegateNode.pointerInputEventHandler = value         }     val delegateNode = delegate(         SuspendingPointerInputModifierNode(pointerInputEventHandler)     )     override fun onPointerEvent(         pointerEvent: PointerEvent,         pass: PointerEventPass,         bounds: IntSize     ) {         delegateNode.onPointerEvent(pointerEvent, pass, bounds)     }     override fun onCancelPointerInput() {         delegateNode.onCancelPointerInput()     }     override fun sharePointerInputWithSiblings() = true } private data class PointerInputSiblingFallthroughElement(     val pointerInputEventHandler: PointerInputEventHandler ) : ModifierNodeElement<PointerInputSiblingFallthroughModifierNode>() {     override fun create() = PointerInputSiblingFallthroughModifierNode(pointerInputEventHandler)     override fun update(node: PointerInputSiblingFallthroughModifierNode) {         node.pointerInputEventHandler = pointerInputEventHandler     }     override fun InspectorInfo.inspectableProperties() {         name = "pointerInputWithSiblingFallthrough"         properties["pointerInputEventHandler"] = pointerInputEventHandler     } } Here’s how it's used in DrawingSurface: Box(     modifier = Modifier         .fillMaxSize()         // Our custom modifier enables this gesture to coexist with the drawing input.         .pointerInputWithSiblingFallthrough {             detectDragGesturesAfterLongPress(                 onDragStart = { onStartDrag() },                 onDrag = { _, _ -> /* consume drag events */ },                 onDragEnd = { /* No action needed */ }             )         } ) // The Ink API's composable for live drawing sits here as a sibling. InProgressStrokes(...) With this in place, the system correctly detects both the drawing strokes and the long-press drag gesture simultaneously. Once the drag is initiated, we create a shareable content:// URI with FileProvider and pass the URI to the system's drag and drop framework using view.startDragAndDrop(). This solution ensures a robust and intuitive user experience, showcasing how to overcome complex gesture conflicts in layered UIs. ## Built with modern architecture Beyond specific APIs, Cahier demonstrates crucial architectural patterns for building high-quality, adaptive applications. ### The presentation layer: Jetpack Compose and adaptability The presentation layer is built entirely with Jetpack Compose. As mentioned, Cahier adopts the material3-adaptive library for UI adaptability. State management follows a strict Unidirectional Data Flow (UDF) pattern, with ViewModel instances used as data containers that hold note information and UI state. 
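To make that UDF shape concrete, here is a rough sketch of the pattern. The class and state names are hypothetical and simplified, not Cahier's actual code: the ViewModel owns an immutable UI state exposed as a StateFlow, events flow up from the UI, and new state flows back down for Compose to render.

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.flow.update

// Hypothetical, simplified UI state for a note editor.
data class NoteUiState(
    val title: String = "",
    val isFavorite: Boolean = false
)

class NoteEditorViewModel : ViewModel() {
    private val _uiState = MutableStateFlow(NoteUiState())
    val uiState: StateFlow<NoteUiState> = _uiState.asStateFlow()

    // Events from the UI mutate state in exactly one place.
    fun onTitleChange(newTitle: String) = _uiState.update { it.copy(title = newTitle) }
    fun onToggleFavorite() = _uiState.update { it.copy(isFavorite = !it.isFavorite) }
}

@Composable
fun NoteEditorScreen(viewModel: NoteEditorViewModel) {
    // The UI only observes state and reports events; it never keeps its own copy.
    val state by viewModel.uiState.collectAsState()
    Text(text = if (state.isFavorite) "(favorite) ${state.title}" else state.title)
}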
### The data layer: Repositories and Room For the data layer, Cahier uses a _NoteRepository_ interface to abstract all data operations. This design choice allows the app to cleanly swap between a local data source (Room) and a potential future remote backend. The data flow for an action like editing a note is straightforward: 1. The Jetpack Compose UI triggers a method in the ViewModel. 2. The ViewModel fetches the note from NoteRepository, handles the logic, and passes the updated note back to the repository. 3. NoteRepository saves the update to a Room database. A minimal sketch of this repository pattern appears at the end of this post. ### Comprehensive input support To be a true productivity powerhouse, an app must handle a variety of input methods flawlessly. Cahier is built to be compliant with the large screen input guidelines and supports: * Stylus: Integration with the Ink API, palm rejection, registration for the notes role, stylus input in text fields, and immersive mode. * Keyboard: Support for most common keyboard shortcuts and combinations (like Ctrl+click and Meta+click) and clear indication of keyboard focus. * Mouse and trackpad: Support for right-click and hover states. Support for advanced keyboard, mouse, and trackpad interactions is a key focus for further improvements. ## Get started today We hope Cahier serves as a launchpad for your next great app. We built it to be a comprehensive, open-source resource that demonstrates how to combine an adaptive UI, powerful APIs like Ink and the notes role, and a modern, adaptive architecture. Ready to dive in? * Explore the code: Head over to our GitHub repository to explore the Cahier codebase and see the design principles in action. * Build your own: Use Cahier as a foundation for your own note-taking, document markup, or creative application. * Contribute: We welcome your contributions! Help us make Cahier an even better resource for the Android developer community. Check out the official developer guides and start building your next-generation productivity and creativity app today. We can't wait to see what you create!
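As promised above, here is a minimal sketch of the repository-over-Room pattern the data layer section describes. The entity, DAO, and class names are hypothetical stand-ins rather than Cahier's real schema, but the shape, a NoteRepository interface with a Room-backed implementation, mirrors the design the sample uses.

import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query
import androidx.room.Upsert
import kotlinx.coroutines.flow.Flow

// Hypothetical note entity; Cahier's actual schema differs.
@Entity(tableName = "notes")
data class NoteEntity(
    @PrimaryKey(autoGenerate = true) val id: Long = 0,
    val title: String,
    val body: String
)

@Dao
interface NoteDao {
    @Query("SELECT * FROM notes")
    fun observeNotes(): Flow<List<NoteEntity>>

    @Query("SELECT * FROM notes WHERE id = :id")
    suspend fun getNote(id: Long): NoteEntity?

    @Upsert
    suspend fun upsert(note: NoteEntity)
}

// The repository interface hides the data source from ViewModels.
interface NoteRepository {
    fun observeNotes(): Flow<List<NoteEntity>>
    suspend fun getNote(id: Long): NoteEntity?
    suspend fun saveNote(note: NoteEntity)
}

// Room-backed implementation; a remote data source could be swapped in later
// without touching the UI or ViewModel layers.
class LocalNoteRepository(private val dao: NoteDao) : NoteRepository {
    override fun observeNotes(): Flow<List<NoteEntity>> = dao.observeNotes()
    override suspend fun getNote(id: Long): NoteEntity? = dao.getNote(id)
    override suspend fun saveNote(note: NoteEntity) = dao.upsert(note)
}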
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
High-Speed Capture and Slow-Motion Video with CameraX 1.5
_Posted by Leo Huang, Software Engineer_ Capturing fast-moving action with clarity is a key feature for modern camera apps. This is achieved through high-speed capture—the process of acquiring frames at rates like 120 or 240 fps. This high-fidelity capture can be used for two distinct purposes: creating a high-frame-rate video for detailed, frame-by-frame analysis, or generating a slow-motion video where action unfolds dramatically on screen. Previously, implementing these features with the Camera2 API was a more hands-on process. Now, with the new high-speed API in CameraX 1.5, the entire process is simplified, giving you the flexibility to create either true high-frame-rate videos or ready-to-play slow-motion clips. This post will show you how to master both. For those new to CameraX, you can get up to speed with the CameraX Overview. * * * ## The Principle Behind Slow-Motion The fundamental principle of slow-motion is to capture video at a much higher frame rate than it is played back. For instance, if you record a one-second event at 120 frames per second (fps) and then play that recording back at a standard 30 fps, the video will take four seconds to play. This "stretching" of time is what creates the dramatic slow-motion effect, allowing you to see details that are too fast for the naked eye. To ensure the final output video is smooth and fluid, it should typically be rendered at a minimum of 30 fps. This means that to create a 4x slow-motion video, the original capture frame rate must be at least 120 fps (120 capture fps ÷ 4 = 30 playback fps). Once the high-frame-rate footage is captured, there are two primary ways to achieve the desired outcome: * Player-handled Slow-Motion (High-Frame-Rate Video): The high-speed recording (e.g., 120 fps) is saved directly as a high-frame-rate video file. It is then the video player's responsibility to slow down the playback speed. This gives the user flexibility to toggle between normal and slow-motion playback. * Ready-to-play Slow-Motion (Re-encoded Video): The high-speed video stream is processed and re-encoded into a file with a standard frame rate (e.g., 30 fps). The slow-motion effect is "baked in" by adjusting the frame timestamps. The resulting video will play in slow motion in any standard video player without special handling. While the video plays in slow motion by default, video players can still provide playback speed controls that allow the user to increase the speed and watch the video at its original speed. The CameraX API simplifies this by giving you a unified way to choose which approach you want, as you'll see below. * * * ## The New High-Speed Video API The new CameraX solution is built on two main components: * Recorder#getHighSpeedVideoCapabilities(CameraInfo): This method lets you check if the camera can record in high-speed and, if so, which resolutions (Quality objects) are supported. * HighSpeedVideoSessionConfig: This is a special configuration object that groups your VideoCapture and Preview use cases, telling CameraX to create a unified high-speed camera session. Note that while the VideoCapture stream will operate at the configured high frame rate, the Preview stream will typically be limited to a standard rate of at least 30 FPS by the camera system to ensure a smooth display on the screen. ### Getting Started Before you start, make sure you have added the necessary CameraX dependencies to your app's build.gradle.kts file. You will need the camera-video artifact along with the core CameraX libraries. 
// build.gradle.kts (Module: app) dependencies {     val camerax_version = "1.5.1"     implementation("androidx.camera:camera-core:$camerax_version")     implementation("androidx.camera:camera-camera2:$camerax_version")     implementation("androidx.camera:camera-lifecycle:$camerax_version")     implementation("androidx.camera:camera-video:$camerax_version")     implementation("androidx.camera:camera-view:$camerax_version") } ### A Note on Experimental APIs It's important to note that the high-speed recording APIs are currently experimental. This means they are subject to change in future releases. To use them, you must opt-in by adding the following annotation to your code: @kotlin.OptIn(ExperimentalSessionConfig::class, ExperimentalHighSpeedVideo::class) * * * ## Implementation The implementation for both outcomes starts with the same setup steps. The choice between creating a high-frame-rate video or a slow-motion video comes down to a single setting. ### 1. Set up High-Speed Capture First, regardless of your goal, you need to get the ProcessCameraProvider, check for device capabilities, and create your use cases. The following code block shows the complete setup flow within a suspend function. You can call this function from a coroutine scope, like lifecycleScope.launch. // Add the OptIn annotation at the top of your function or class @kotlin.OptIn(ExperimentalSessionConfig::class, ExperimentalHighSpeedVideo::class) private suspend fun setupCamera() {     // Asynchronously get the CameraProvider     val cameraProvider = ProcessCameraProvider.awaitInstance(this)     // -- CHECK CAPABILITIES --     val cameraInfo = cameraProvider.getCameraInfo(CameraSelector.DEFAULT_BACK_CAMERA)     val videoCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)     if (videoCapabilities == null) {         // This camera device does not support high-speed video.         return     }     // -- CREATE USE CASES --     val preview = Preview.Builder().build()     // You can create a Recorder with default settings.     // CameraX will automatically select a suitable quality.     val recorder = Recorder.Builder().build()     // Alternatively, to use a specific resolution, you can configure the     // Recorder with a QualitySelector. This is useful if your app has     // specific resolution requirements or you want to offer user     // preferences.     // To use a specific quality, you can uncomment the following lines.     // Get the list of qualities supported for high-speed video.     // val supportedQualities = videoCapabilities.getSupportedQualities(DynamicRange.SDR)     // Build the Recorder using the quality from the supported list.     // val recorderWithQuality = Recorder.Builder()     //     .setQualitySelector(QualitySelector.from(supportedQualities.first()))     //     .build()     // Create the VideoCapture use case, using either recorder or recorderWithQuality     val videoCapture = VideoCapture.withOutput(recorder)     // Now you are ready to configure the session for your desired output... } * * * ### 2. Choosing Your Output Now, you decide what kind of video you want to create. This code would run inside the setupCamera() suspend function shown above. #### Option A: Create a High-Frame-Rate Video Choose this option if you want the final file to have a high frame rate (e.g., a 120fps video). // Create a builder for the high-speed session val sessionConfigBuilder = HighSpeedVideoSessionConfig.Builder(videoCapture)     .setPreview(preview) // Query and apply a supported frame rate. 
Common supported frame rates include 120 and 240 fps. val supportedFrameRateRanges =     cameraInfo.getSupportedFrameRateRanges(sessionConfigBuilder.build()) sessionConfigBuilder.setFrameRateRange(supportedFrameRateRanges.first()) Option B: Create a Ready-to-play Slow-Motion Video Choose this option if you want a video that plays in slow motion automatically in any standard video player. // Create a builder for the high-speed session val sessionConfigBuilder = HighSpeedVideoSessionConfig.Builder(videoCapture)     .setPreview(preview) // This is the key: enable automatic slow-motion! sessionConfigBuilder.setSlowMotionEnabled(true) // Query and apply a supported frame rate. Common supported frame rates include 120, 240, and 480 fps. val supportedFrameRateRanges =    cameraInfo.getSupportedFrameRateRanges(sessionConfigBuilder.build()) sessionConfigBuilder.setFrameRateRange(supportedFrameRateRanges.first()) This single flag is the key to creating a ready-to-play slow-motion video. When setSlowMotionEnabled is true, CameraX processes the high-speed stream and saves it as a standard 30 fps video file. The slow-motion speed is determined by the ratio of the capture frame rate to this standard playback rate. For example: * Recording at 120 fps will produce a video that plays back at 1/4x speed (120 ÷ 30 = 4). * Recording at 240 fps will produce a video that plays back at 1/8x speed (240 ÷ 30 = 8). * * * ## Putting It All Together: Recording the Video Once you have configured your HighSpeedVideoSessionConfig and bound it to the lifecycle, the final step is to start the recording. The process of preparing output options, starting the recording, and handling video events is the same as it is for a standard video capture. This post focuses on high-speed configuration, so we won't cover the recording process in detail. For a comprehensive guide on everything from preparing a FileOutputOptions or MediaStoreOutputOptions object to handling the VideoRecordEvent callbacks, please refer to the VideoCapture documentation. // Bind the session config to the lifecycle cameraProvider.bindToLifecycle(     this as LifecycleOwner,     CameraSelector.DEFAULT_BACK_CAMERA,     sessionConfigBuilder.build() // Bind the config object from Option A or B ) // Start the recording using the VideoCapture use case val recording = videoCapture.output     .prepareRecording(context, outputOptions) // See docs for creating outputOptions     .start(ContextCompat.getMainExecutor(context)) { recordEvent ->         // Handle recording events (e.g., Start, Pause, Finalize)     } * * * ## Google Photos Support for Slow-Motion Videos When you enable setSlowMotionEnabled(true) in CameraX, the resulting video file is designed to be instantly recognizable and playable as slow-motion in standard video players and gallery apps. Google Photos, in particular, offers enhanced functionality for these slow-motion videos, when the capture frame rate is 120, 240, 360, 480 or 960fps: * Distinct UI Recognition in Thumbnail: In your Google Photos library, slow-motion videos can be identified by specific UI elements, distinguishing them from normal videos. | ---|--- Normal video thumbnail| Slow-motion video thumbnail * Adjustable Speed Segments during Playback: When playing a slow-motion video, Google Photos provides controls to adjust which parts of the video play at slow speed and which play at normal speed, giving users creative control. 
The edited video can then be exported as a new video file using the Share button, preserving the slow-motion segments you defined. _Normal video playback compared with slow-motion video playback and its editing controls_ * * * ### A Note on Device Support CameraX's high-speed API relies on the underlying Android CamcorderProfile system to determine which high-speed resolutions and frame rates a device supports. CamcorderProfiles are validated by the Android Compatibility Test Suite (CTS), which means you can be confident in the device's reported video recording capabilities. However, a device's ability to record slow-motion video with its built-in camera app does not guarantee that the CameraX high-speed API will function. This discrepancy occurs because device manufacturers are responsible for populating the CamcorderProfile entries in their device's firmware, and sometimes necessary high-speed profiles like CamcorderProfile.QUALITY_HIGH_SPEED_1080P and CamcorderProfile.QUALITY_HIGH_SPEED_720P are not included. When these profiles are missing, Recorder.getHighSpeedVideoCapabilities() will return null. Therefore, it's essential to always use Recorder.getHighSpeedVideoCapabilities() to check for supported features programmatically, as this is the most reliable way to ensure a consistent experience across different devices. If you try to bind a HighSpeedVideoSessionConfig on a device where Recorder.getHighSpeedVideoCapabilities() returns null, the operation will fail with an IllegalArgumentException. You can confirm support on Google Pixel devices, as they consistently include these high-speed profiles. Additionally, various devices from other manufacturers, such as the Motorola Edge 30, OPPO Find N2 Flip, and Sony Xperia 1 V, also support these high-speed video capabilities. * * * ### Conclusion The CameraX high-speed video API is both powerful and flexible. Whether you need true high-frame-rate footage for technical analysis or want to add cinematic slow-motion effects to your app, the HighSpeedVideoSessionConfig provides a unified and simple solution. By understanding the role of the setSlowMotionEnabled flag, you can easily support both use cases and give your users more creative control.
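Building on the device-support note above, the sketch below shows one way to fold the capability check into session setup: bind the high-speed configuration when it is available, and fall back to a standard preview-plus-video session otherwise. It reuses the APIs shown earlier in this post; imports and error handling are omitted as in the snippets above, and the function name and fallback behavior are illustrative rather than prescriptive.

@kotlin.OptIn(ExperimentalSessionConfig::class, ExperimentalHighSpeedVideo::class)
private suspend fun bindVideoSession(context: Context, lifecycleOwner: LifecycleOwner) {
    val cameraProvider = ProcessCameraProvider.awaitInstance(context)
    val selector = CameraSelector.DEFAULT_BACK_CAMERA
    val cameraInfo = cameraProvider.getCameraInfo(selector)

    val preview = Preview.Builder().build()
    val recorder = Recorder.Builder().build()
    val videoCapture = VideoCapture.withOutput(recorder)

    val highSpeedCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)
    if (highSpeedCapabilities != null) {
        // High-speed CamcorderProfiles are present: configure slow-motion capture.
        val configBuilder = HighSpeedVideoSessionConfig.Builder(videoCapture)
            .setPreview(preview)
        configBuilder.setSlowMotionEnabled(true)
        val frameRateRange =
            cameraInfo.getSupportedFrameRateRanges(configBuilder.build()).first()
        configBuilder.setFrameRateRange(frameRateRange)
        cameraProvider.bindToLifecycle(lifecycleOwner, selector, configBuilder.build())
    } else {
        // No high-speed profiles on this device: bind a standard session instead.
        cameraProvider.bindToLifecycle(lifecycleOwner, selector, preview, videoCapture)
    }
}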
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Material 3 Adaptive 1.2.0 is stable
_Posted by Rob Orgiu, Android Developer Relations Engineer_ We’re excited to announce that Material 3 Adaptive 1.2.0 is now stable! This release continues to build on the foundations of previous versions, expanding support to more breakpoints for window size classes and adding new strategies to place display panes automatically. # What’s new in Material 3 Adaptive 1.2.0 This stable release is built on top of WindowManager 1.5.0 support for large and extra-large breakpoints, and introduces the new reflow and levitate strategies for ListDetailPaneScaffold and SupportingPaneScaffold. ## New window size classes: Large and Extra-large WindowManager 1.5.0 introduced two new breakpoints for the width window size class to support even bigger windows than the Expanded window size class. The Large (L) and Extra-large (XL) breakpoints can be enabled by adding the following parameter to the currentWindowAdaptiveInfo() call in your codebase: currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true) This flag enables the library to also return L and XL breakpoints whenever they’re needed. ## New adaptive strategies: reflow and levitate Arranging content and display panes in a window is a complex task that needs to take into account many factors, starting with window size. With the new Material 3 Adaptive library, two new strategies can help you achieve an adaptive layout with minimal effort. With reflow, panes are rearranged when the window size or aspect ratio changes, placing a second pane to the side of the first one when the window is wide enough, or reflowing the second pane underneath the first whenever the window is taller. This technique also applies when the window becomes smaller: content reflows to the bottom. _Reflowing a pane based on the window size_ While reflowing is an incredible option in many cases, there might be situations in which content needs to be either docked to a side of the window or levitated on top of it. The levitate strategy not only docks the content, but also allows you to customize features like draggability, resizability, and even the background scrim. _Levitating a pane from the side to the center based on the aspect ratio_ Both the reflow and levitate strategies can be declared inside the Navigator constructor using the adaptStrategies parameter, and both strategies can be applied to list-detail and supporting pane scaffolds: val navigator = rememberListDetailPaneScaffoldNavigator<Nothing>(        adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(            detailPaneAdaptStrategy = AdaptStrategy.Reflow(                reflowUnder = ListDetailPaneScaffoldRole.List            ),            extraPaneAdaptStrategy = AdaptStrategy.Levitate(                alignment = Alignment.Center            )        )    ) To learn more about how to leverage these new adaptive strategies, see the Material website and the complete sample code on GitHub.
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
5 things you need to know about publishing and distributing your app for Android XR
_Posted by Jan Kleinert, Android Developer Relations Engineer_ > > Samsung Galaxy XR is here, powered by Android XR! This blog post is part of our Android XR Spotlight Week, where we provide resources—blog posts, videos, sample code, and more—all designed to help you learn, build, and prepare your apps for Android XR. Today, we're focusing on one of the last steps in your development journey, ensuring these experiences successfully reach your users. Publishing correctly ensures your app is packaged efficiently, discovered by the right devices, and presented in the best possible light. Here are 5 things you need to know about publishing and distributing your app for Android XR on Google Play. ## 1. Uphold quality with the Android XR app quality guidelines One of the most important steps before publishing is ensuring your app delivers a safe, comfortable, and performant user experience. Following the Android XR App Quality Guidelines helps ensure that your app provides users with a great experience on devices like the Galaxy XR. ## Why quality matters These guidelines build upon the large screen app quality guidelines, and focus on critical XR-specific criteria including: * Safety and comfort: This is paramount. These guidelines help you avoid causing motion sickness by setting standards for camera movement and frame rates, and by limiting visual elements like strobing. * Performance: Your app must hit performance metrics, such as target frame rates, to prevent lag and ensure a fluid, comfortable experience. * Interaction: The guidelines specify recommended minimum sizes for interactive targets (e.g., 48dp minimum, 56dp recommended) to work well with eye-tracking and hand-tracking inputs. * * * ## 2. Configure your app manifest correctly The AndroidManifest.xml file describes important information about your app. The Android build tools, Android system, and Google Play use this information to know what kind of experience you've built and which hardware features it requires. Proper configuration is vital for correct device targeting and app launch. ## Specify which Android XR SDK your app uses In your app manifest, include android.software.xr.api.spatial or android.software.xr.api.openxr to indicate whether you're building with the Jetpack XR SDK or building with OpenXR or Unity. SDK used| Manifest declaration ---|--- Jetpack XR SDK| android.software.xr.api.spatial OpenXR or Unity| android.software.xr.api.openxr If your app is built using OpenXR or Unity, you must set the android:required attribute to true. For apps built with the Jetpack XR SDK, set android:required attribute to true if your app is published to the Android XR dedicated release track and set android:required attribute to false if your app is published to the mobile release track. ## Set the activity start mode Use the android.window.PROPERTY_XR_ACTIVITY_START_MODE property on your main activity to define the default user environment: Start mode| Purpose| SDK ---|---|--- XR_ACTIVITY_START_MODE_HOME_SPACE| Launches your app in Home Space, the shared multitasking environment.| Jetpack XR SDK XR_ACTIVITY_START_MODE_FULL_SPACE_MANAGED| Launches in Full Space, a full-immersion, single-app environment.| Jetpack XR SDK XR_ACTIVITY_START_MODE_FULL_SPACE_UNMANAGED| Launches in Full Space, a full-immersion, single-app environment. 
Note that apps built with OpenXR or Unity always run in Full Space.| OpenXR or Unity ## Check for optional hardware features at runtime Avoid setting optional XR features (like hand tracking or controllers) to android:required="true" unless they are truly required for your app. If a device doesn't support a required feature, Google Play will hide your app from that device. If you have features set as required but your app could operate without them, then you could unnecessarily limit your audience. Instead, check for advanced features dynamically at runtime using the PackageManager class with hasSystemFeature(): Kotlin val hasHandTracking = packageManager.hasSystemFeature("android.hardware.xr.input.hand_tracking") if (hasHandTracking) {     // Enable high-fidelity hand tracking features } else {     // Provide a fallback experience } This ensures your app is broadly compatible and leverages advanced features when they're available. * * * ## 3. Use Play Asset Delivery (PAD) to deliver large assets Immersive apps and games often contain large assets that might exceed the standard size limits. Use Play Asset Delivery (PAD) to manage large, high-fidelity assets. PAD offers flexible delivery modes: install-time, fast follow, and on demand for progressive download of content. Apps that are built for Android XR are allowed to deliver additional asset packs: instead of a cumulative total of 4 GB for asset packs delivered on demand or fast follow, these apps are afforded a higher cumulative total of 30 GB. For developers building with Unity, use Unity Addressables along with Play Asset Delivery to manage asset packs. * * * ## 4. Showcase your app with spatial video previews To capture the attention of users browsing the Play Store on their XR headsets, you can provide an immersive preview of your app using a spatial video asset. This must be a 180°, 360°, or stereoscopic video. On Android XR devices, the Play Store will automatically display this as an immersive 3D preview, allowing users to experience the depth and scale of your content before they install the app. * * * ## 5. Choose your Google Play release track Google Play provides two pathways for publishing your Android XR app, both using the same Play Console account: ## Option A: Continue on the mobile release track (for spatialized mobile apps) If you are adding spatial XR features to an existing mobile app, you can often bundle the XR features or content into your existing Android App Bundle (AAB). This approach is ideal if your app maintains most of its core functionality across both mobile and XR devices, and you can continue publishing the same AAB to the mobile track. Review this guidance to be sure you are properly configuring your app's manifest file to support this use case. ## Option B: Publish to the dedicated Android XR release track If you are building a brand-new app for XR or if the XR version is functionally too different for a single AAB, you should publish to the Android XR dedicated release track. Apps published to the Android XR dedicated release track are only visible to Android XR devices that support the android.software.xr.api.spatial feature or the android.software.xr.api.openxr feature, giving you control over distribution. By following this guidance, you can help ensure your innovative Android XR apps provide a quality user experience, are packaged efficiently, are delivered smoothly using PAD, and are targeted to the devices that can run them. Happy publishing!
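To tie the manifest guidance together, here is a rough sketch of how the declarations from section 2 might look for an app built with the Jetpack XR SDK and published on the mobile release track. The element placement and the start-mode string value follow the names given above, but treat this as an assumption to verify against the current Android XR documentation rather than a canonical template.

<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Jetpack XR SDK app shipped on the mobile track: the feature is not
         required, so non-XR devices can still install the app. -->
    <uses-feature
        android:name="android.software.xr.api.spatial"
        android:required="false" />

    <application>
        <activity android:name=".MainActivity" android:exported="true">
            <!-- Launch in Home Space, the shared multitasking environment. -->
            <property
                android:name="android.window.PROPERTY_XR_ACTIVITY_START_MODE"
                android:value="XR_ACTIVITY_START_MODE_HOME_SPACE" />
        </activity>
    </application>
</manifest>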
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Set a reminder: Tune in on October 30 for our Fall episode of The Android Show, live from Droidcon London
_Posted by The Android Team_ In just a few days, on Thursday, October 30th at 10AM PT, we’ll be dropping our Fall episode of The Android Show, on YouTube and on developer.android.com! This time, we’ll be live from Droidcon London, where we’ll be unpacking some of the latest agentic experiences for Gemini in Android Studio designed to help you be more productive, plus doing live demos of Jetpack Compose and more. And with the recent launch of Galaxy XR, we’ll be diving into the world of Android XR plus how building adaptive apps lets you easily extend to XR devices as well as foldables, tablets and large screens. Get your #AskAndroid questions answered live! We’ve assembled a team of experts from across Android to answer your #AskAndroid questions live from London on building excellent apps across devices; you can start sharing your questions now using #AskAndroid, and tune in to see if they are answered live on the show! The Android Show is your conversation with the Android developer community, and this episode will be co-hosted by Rebecca Gutteridge and Adetunji Dahunsi. You'll hear the latest from the developers and engineers who build Android. Don’t forget to tune in on October 30 at 10AM PT, live on YouTube and on developer.android.com/events/show!
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Optimizing Performance for Android XR with Unity
_Posted by Luke Hopkins, Developer Relations Engineer_ > > Samsung Galaxy XR is here, powered by Android XR! This blog post is part of our Android XR Spotlight Week, where we provide resources—blog posts, videos, sample code, and more—all designed to help you learn, build, and prepare your apps for Android XR. This week, Samsung launched Galaxy XR, built in collaboration with Google and Qualcomm. This is an exciting time for developers, and we wanted to help you get the best performance you can out of your XR app. While poor performance in games and apps on non-XR devices can be frustrating for the user, in the world of XR, performance isn’t optional; it’s fundamental to the success of your app. If you miss your frame rate target in XR, it can cause far more serious problems like motion sickness. In this guide, we'll walk you through the essential performance optimizations you need to understand for Android XR development. You'll learn which features deliver the biggest performance gains, when to use them, and how they work together to help you hit your framerate targets. Here’s what we’re aiming for: * Minimum: 72fps (part of our play quality guidelines) * Optional: 90fps with an 11ms budget per frame For more information on why it's important to maintain such a high frame rate, check out our performance guidelines. ## XR-Specific Performance Features We’re going to start by covering two XR-specific performance features: Foveated Rendering and Vulkan Subsampling. ### Foveated Rendering Foveated rendering is an optimization that has two modes. The first is a static mode that renders the center of the screen at a higher resolution, and progressively lowers the resolution the further out you look. The second is the eye-tracking mode that specifically renders the area where you're looking in full detail, while reducing the quality displayed in your peripheral vision. It essentially mimics how human vision works — where we only see fine detail in the specific area we’re focusing on. Foveated rendering significantly cuts the GPU workload without sacrificing the perceived image quality for the user. The beauty of foveated rendering is that users won't notice the reduced quality in their peripheral vision, but your GPU will certainly notice the improved performance. Imagine you're building a museum experience with intricate 3D artifacts. Without foveated rendering, you’d struggle to maintain 90fps trying to render everything in the field of view. With foveated rendering, you can keep those high-poly details where the user’s looking, but the background environment renders at a lower quality. Your users won't notice the difference, but you'll have the headroom to add more detail to your scene. ### Vulkan Subsampling Vulkan Subsampling is foveated rendering's best friend. While foveated rendering decides what to render at different quality levels, Vulkan Subsampling handles how to efficiently render the different quality levels using Fragment Density Maps. When combined with foveated rendering, Vulkan Subsampling gives you an extra 0.5ms of performance. It also helps smooth out jagged edges in your peripheral vision, making the overall image look cleaner. For example, in a flight simulator game where users focus on instruments and controls, combining foveated rendering with Vulkan Subsampling means the detailed controls render sharply, but the peripheral cockpit structure uses fewer resources. 
That extra 0.5ms doesn’t sound like much, but it's the difference between having room for an extra interactive element and dropping frames during intense moments. ## GPU Features for Complex Scenes Besides Foveated Rendering and Vulkan Subsampling, there are some GPU features that reduce unnecessary strain through smart instancing and culling. These are particularly effective for complex scenes with repeated geometry or significant occlusion. ### GPU Resident Drawer The GPU Resident Drawer automatically uses GPU instancing to reduce draw calls and free up CPU processing time. So, instead of the CPU telling the GPU about each object individually, the GPU batches similar objects together. This feature is most effective for large scenes with repeated meshes, like trees in a forest, furniture in an office building, or props scattered throughout an environment. Picture a forest scene with 200 trees using the same base mesh. Without the GPU Resident Drawer, you’ve got 200 draw calls eating up CPU time. When you enable this feature, the GPU will intelligently instance those trees, which should reduce it to just 5-10 draw calls. That's a massive CPU saving you can then invest in gameplay logic or physics calculations. ### GPU Occlusion Culling GPU Occlusion Culling uses the GPU instead of the CPU to identify and skip rendering hidden objects. It automatically detects what's occluded (hidden) behind other objects, so you're not wasting your GPU on things the user can't see. This feature is particularly powerful in interior spaces with multiple rooms, dense environments, or architectural scenes where walls, floors, and objects naturally block the view. As an example, let’s say you're building a multi-room house experience. When the user is in the living room, why waste GPU cycles rendering the fully detailed kitchen that's completely hidden behind a wall? GPU Occlusion Culling automatically skips rendering those hidden objects, giving you more performance budget for what's actually visible. ## Monitoring Your Performance It’s not enough to just use these features. You also need to measure your optimizations, so you can quantify their impact and verify your changes are actually working. ### Performance Metrics API The Performance Metrics API provides real-time monitoring of your app's memory usage, CPU performance, and GPU performance. It gives you comprehensive data from compositor and runtime layers, so you can see exactly what's happening in your application. Establish a baseline before making your changes, apply an optimization, measure the impact, and iterate. This data-driven approach means you know you’re actually improving performance rather than guessing. Before enabling foveated rendering, your GPU frame time might be 13ms, which is over your 11ms budget. Enable foveated rendering, measure again, and hopefully you see it drop to 9ms. That's 4ms of headroom you've gained to add more detail to your scene, improve visual quality elsewhere, or simply ensure smoother performance across a wider range of content. Without these metrics, you're optimizing blind. The Performance Metrics API tells you the truth about what's actually helping your specific use case. ### Frame Debugger The Frame Debugger is Unity's built-in tool for understanding exactly how your scene is being rendered, frame by frame. It shows you the sequence of draw calls and lets you step through them to verify your optimizations are working correctly. Want to confirm the SRP Batcher is working? 
Look for 'RenderLoopNewBatcher' entries in the Frame Debugger. Checking if the GPU Resident Drawer is batching properly? Look for 'Hybrid Batch Group' entries. These visual confirmations help you understand whether your optimization settings are actually taking effect. Step through the first 50 draw calls of your scene. If you see similar objects being drawn individually instead of batched, that's telling you that your instancing or batching isn't working correctly. The Frame Debugger makes these issues immediately visible so you can address them. ## Additional Optimizations As well as the optimizations we’ve covered above, our full performance guide also covers a few other additional optimizations. Here’s a quick summary: * URP Settings: Disable HDR and Post Processing for mobile XR. These features provide minimal visual impact compared to their performance cost on mobile hardware, so you'll get measurable performance gains with barely perceptible visual differences. * SRP Batcher: Reduces CPU overhead for scenes with many materials using the same shader variant. By minimising render-state changes between draw calls, you can significantly reduce CPU time spent on rendering. * Display Refresh Rate: Dynamically adjust between 72fps and 90fps based on scene complexity. Lower the framerate during complex sequences to maintain stability, then increase it during simpler moments for ultra-smooth interaction. * Depth/Opaque Textures: Disable these unless specifically needed for shader effects. They cause unnecessary GPU copying operations that waste performance without providing benefit for most applications. * URP Render Scale: This setting allows you to render at a reduced resolution for performance benefits or to upscale rendering for enhanced visual quality. For step-by-step instructions on these and more optimizations, check our complete Unity Performance Guide for Android XR. ## Conclusion The performance of your XR app isn't just a technical checkbox. It's the difference between a comfortable, engaging experience and one that makes users feel sick or uncomfortable. The optimizations we've covered are your toolkit for hitting those critical framerate targets on the newest XR devices. Here's your roadmap: 1. Start with Foveated Rendering and Vulkan Subsampling. These XR-specific features deliver immediate and noticeable GPU savings. 2. Add GPU Resident Drawer and Occlusion Culling if you have complex scenes with repeated geometry or interior spaces. 3. Monitor everything with the Performance Metrics API to ensure your changes are actually helping 4. Explore additional URP optimizations for extra performance headroom It’s vital to measure continuously and iterate. Not every optimization will benefit every project equally, so use the Performance Metrics API to get a clear idea of what actually helps your specific use case. ## What's next: expanding your skills Ready to dive deeper? Check out these resources: * Unity Performance Guide for Android XR - Complete step-by-step implementation instructions for all features covered here. * Getting Started with Unity and Android XR - Set up your development environment and start building. * Android XR Developer Documentation - Comprehensive guides for all Android XR features
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Getting started with Unity and Android XR
_Posted by Luke Hopkins - Developer Relations Engineer_ > > Samsung Galaxy XR is here, powered by Android XR! This blog post is part of our Android XR Spotlight Week, where we provide resources—blog posts, videos, sample code, and more—all designed to help you learn, build, and prepare your apps for Android XR. There’s never been a better time to get into XR development. Last December, we announced Android XR, Google's new Android platform built on open standards such as OpenXR and Vulkan, which makes XR development more accessible than it’s ever been. And when combined with Unity’s existing XR tools, you get a powerful and mature development stack. This makes it possible to create and deploy XR apps that work across multiple devices. No matter whether you’ve done XR development before or not, we want to help you get started. This blog will get you up and running with Android XR and Unity development. We’ll focus on the practical steps to configure your environment, understand the package ecosystem, and start building. By the end of this blog, you’ll have a good understanding of: * The package ecosystem * Essential setup steps * Input methods * Privacy and permissions * Composition layers ## Unity for Android XR development You might choose Unity for its cross-platform compatibility, allowing you to build once and deploy to Android XR and other XR devices. When using Unity, you benefit from its mature XR ecosystem and tooling. It already has established packages such as XR Interaction Toolkit, OpenXR plugin, XR composition layers, XR Hands, an extensive asset store full of XR-ready components and templates, and XR simulation and testing tools. And since Unity 6 was released last November, you’ll also benefit from its improved Universal Render Pipeline (URP) performance, better Vulkan graphics support, and enhanced build profiles. Here are some sample projects to get an idea of what can be done: * Unity’s VR Project Template * VR Multiplayer Template * Android XR Samples for Unity ## Essential setup: your development foundation ### Unity 6 requirements and installation You’ll need Unity 6 to create your app, as earlier versions don’t support Android XR. Install Unity Hub first, then Unity 6 with the Android Build Support module, following these steps. ### Android XR build profiles: simplifying configuration Unity build profiles are project assets that store your platform-specific settings and configurations. So instead of needing to manually set up 15-20 different settings across multiple menus, you can use a build profile to do this automatically. You can create your own build profiles, but for now we recommend using the dedicated Android XR build profile we created. You can select your build profile by selecting File > Build Profile from your Unity project. For full instructions, see the Develop for Android XR workflow page. If you make any changes of your own, you can then create a new build profile to share with your team. This way you ensure consistent build experience across the board. ### After these steps you can build and run your APK for Android XR devices. ### Graphics API: why Vulkan matters Once you have your Unity project set up with an Android XR build profile, we first recommend making sure you have Vulkan set as your graphics API. Android XR is built as a Vulkan-first platform. In March 2025, Google announced that Vulkan is now the official graphics API for Android. 
It’s a modern, low-level graphics API that helps developers maximize the performance of modern GPUs and unlocks advanced features like ray-tracing and multithreading for realistic and immersive gaming visuals. These standards provide the best compatibility for your existing applications and ease the issues and costs of porting. It also makes it possible to enable advanced Android XR features such as URP Application Space Warp and foveated rendering. Unity 6 handles Vulkan automatically, so when you use the Android XR build profile, Unity will configure Vulkan as your graphics API. This ensures you get access to all the advanced Android XR features without any manual configuration. You can verify your graphics API settings by going to ‘Edit’ > ‘Project Settings’ > ‘Player’ > the ‘Android’ tab > ‘Other settings’ > ‘Graphics APIs’. ### Understanding the package ecosystem There are two different packages you can use for Android XR in Unity. One is by using the Android XR Extensions for Unity, and the other is using the Unity OpenXR: Android XR package. These may sound like the same thing, but bear with me. The Unity OpenXR: Android XR package is the official Unity package for Android XR support. It provides the majority of Android XR features, made available through OpenXR standards. It also enables AR Foundation integration for mixed reality features. The primary benefit of using the Unity OpenXR: Android XR package is that it offers a unified API for supporting XR devices. The Android XR Extensions for Unity, on the other hand, is Google’s XR package, designed specifically for developing for Android XR devices. It supplements the Unity OpenXR package with additional features such as environment blend modes, scene meshing, image tracking, and body tracking. The tradeoff is that you can only develop for Android XR devices. Which one you choose will depend on your specific needs, but we generally recommend going with the Unity OpenXR: Android XR package, as it gives you far more flexibility in the devices your app will be compatible with; you can then add the Android XR Extensions for Unity based on your application requirements. ### How to install packages To add a new package, with your project open in Unity, select ‘Window’ > ‘Package Management’ > ‘Package Manager’. From here you can install these packages from the ‘Unity Registry’ tab: * ‘Open XR: Android XR’ * ‘XR Interaction Toolkit’ * ‘XR Hands’ You can install the Android XR Extensions for Unity package via GitHub by selecting the ➕ icon, selecting ‘Install package from git URL’, then entering ‘https://github.com/android/android-xr-unity-package.git’. ## Required OpenXR features Now that you have the packages you need installed, let’s enable some core features in order to get our project working. You can enable the OpenXR setting for Android: _‘Edit’ -> ‘Project Settings’ -> ‘XR Plug-in Management’ -> select the Android tab and enable OpenXR_. Next, we need to enable ‘Android XR support’; we will cover other OpenXR features as we need them. For now, we just need Android XR support to be enabled. ## Input Android XR supports input for Hands, Voice, Eye tracking, Keyboard and Controllers. We recommend installing the XR Interaction Toolkit and XR Hands as these contain the best prefabs for getting started. By using these prefabs, you’ll have everything you need to support Hands and Controllers in your app. Once the XR Hands and XR Interaction Toolkit are both installed, I recommend importing the Starter Assets and Hands Interaction Demo. 
Then you need to enable the Hand Interaction and Khronos Simple Controller profiles, and turn on the Hand Tracking Subsystem and Meta Hand Tracking Aim features. You can edit these settings by going to _‘Edit’ > ‘Project Settings’ > ‘XR Plug-in Management’ > ‘OpenXR’_. We’d also recommend Unity’s XR Origin prefab, which represents the user's position and orientation in XR space. This contains the camera rig and tracking components needed to render your XR experience from the correct viewpoint. The simplest way to add this prefab is to import it from the Hands Interaction Demo we imported earlier, which can be found here: _‘Hands Integration Toolkit’ > ‘Hand Interaction’ > ‘Prefabs’ > ‘XR Origin’_. I recommend using this prefab over the ‘XR Origin’ option in your game objects, as it uses the XR Input Modality Manager, which automatically switches between the user's hands and controllers and gives you the most reliable input switching. ## Privacy and permissions: building user trust Whatever you build, you’ll need to request runtime permissions from your users. That’s because scene understanding, eye tracking, face tracking and hand tracking provide access to data that may be more sensitive to the user. These capabilities provide deeper personal information than traditional desktop or mobile apps, so the runtime permissions ensure your users have full control over what data they choose to share. So, in keeping with Android's security and privacy policies, Android XR has permissions for each of these features. For example, if you use the XR Hands package for custom hand gestures, you will need to request the hand tracking permission (see below) as this package needs to track a lot of information about the user's hands. This includes things like tracking hand joint poses and angular and linear velocities. **Note: For a full list of extensions that require permissions, check out the information on the XR developer website.**

// Snippet from a MonoBehaviour; requires 'using UnityEngine.Android;' for the Permission APIs.
const string k_Permission = "android.permission.HAND_TRACKING";

#if UNITY_ANDROID
void Start()
{
    if (!Permission.HasUserAuthorizedPermission(k_Permission))
    {
        var callbacks = new PermissionCallbacks();
        callbacks.PermissionDenied += OnPermissionDenied;
        callbacks.PermissionGranted += OnPermissionGranted;
        Permission.RequestUserPermission(k_Permission, callbacks);
    }
}

void OnPermissionDenied(string permission)
{
    // handle denied permission
}

void OnPermissionGranted(string permission)
{
    // handle granted permission
}
#endif // UNITY_ANDROID

---

### Enhancing visual quality with composition layers Composition layers are the recommended way to render UI elements. They make it possible to display elements at a much higher quality compared to Unity’s standard rendering pipeline, as everything is rendered directly to the platform's compositor. For example, if you’re displaying text, the standard Unity rendering is more likely to have blurry text, soft edges, and visual artifacts. With composition layers, by contrast, the text will be clearer, the outlines will be sharper, and the experience will be better overall. As well as text, composition layers also render video, images, and UI elements at a much higher quality. They do this by utilising native support for the runtime’s compositor layers. 
_To turn on Composition Layers, open Package Manager, select ‘Unity Registry’, then install ‘XR Composition Layers’._ ## Build and Run Now that you have your OpenXR packages installed, features enabled, and a prefab set up for hand and head movement, you can build your scene and deploy directly to your headset for testing. ## What's next: expanding your skills Now that you've got your Android XR development environment set up and understand the key concepts, here are the next steps to continue your XR development journey: Essential resources for continued learning: * Android XR developer documentation - comprehensive guides for all Android XR features * Unity XR development manual - Unity's official XR development resources Sample projects to explore: * Android XR Unity samples - Google's official sample projects showcasing different Android XR features * Unity XR Interaction Toolkit examples - comprehensive examples of XR interactions and gameplay mechanics * Unity VR Template - a complete starting point for VR projects * VR Multiplayer Template - explore social XR experiences
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Bringing Androidify to XR with the Jetpack XR SDK
_Posted by Dereck Bridie, Developer Relations Engineer_ > > Samsung Galaxy XR is here, powered by Android XR! This blog post is part of our Android XR Spotlight Week, where we provide resources—blog posts, videos, sample code, and more—all designed to help you learn, build, and prepare your apps for Android XR. With the launch of Samsung Galaxy XR, the first device powered by Android XR is officially here. People can now enjoy many of their favorite apps from the Play Store in a whole new dimension: the third dimension! The third dimension is a spacious one, with plenty of room for your apps too. Get started today using whichever tools work for your app. For example, you can use the Jetpack XR SDK to build immersive XR experiences using modern Android development tools such as Kotlin and Compose. In this blog post, we’ll tell you about our own journey as we brought the whimsy of our beloved Androidify app to XR, and we'll cover the basics of what it takes to bring your apps to XR too. A tour through Androidify Androidify is an open source app that lets you create Android bots, using some of the latest technologies like Gemini, CameraX, Navigation 3, and of course, Jetpack Compose. Androidify was initially designed to look great on phones, foldables, and tablets by creating adaptive layouts. Androidify looks great across multiple form factors A key pillar of adaptive layouts is reusable composables. Jetpack Compose helps you create bite-sized UI components that can be laid out in different ways to create intuitive user experiences, no matter what type of device the user is on. In fact, Androidify is compatible with Android XR with zero modifications to the app! Androidify adapts to XR using its large-screen-responsive layout with no code changes Apps that have no special handling for Android XR can be multi-tasked in an appropriately sized window and work much like they would on a large screen. Because of this, Androidify is already fully featured on Android XR with no additional work! But we didn't want to stop there, so we decided to go the extra mile and create an XR-differentiated app to bring a delightful experience to our XR users. Orienting yourself in XR Let’s go over key basic concepts for Android XR, starting with the two modes apps can be run in: Home Space and Full Space. | ---|--- Apps in Home Space (left) and an app in Full Space (right) In Home Space, multiple apps can be run side-by-side so users can multitask across different windows. In that sense, it’s a lot like desktop windowing on a large screen Android device, but in virtual space! In Full Space, the app has no space boundaries and can make use of Android XR’s full spatial features, like spatial UI and controlling the virtual environment. While it might seem tempting to make your app run only in Full Space, your users might want to multi-task with your app, so supporting both promotes a better user experience. Designing for Androidify’s new dimension A delightful app starts with a great design. Ivy Knight, Senior Design Advocate on Android DevRel, took on the task of taking existing designs for Androidify and coming up with a new design for XR. Take it away, Ivy! Designing for XR required a unique approach, but actually still had a lot in common with mobile design. We started by thinking about containment: how to organize and group our UI elements in subspace, either by clearly showing boundaries or by subtly implying them. 
We also learned to embrace all the various sizes of spatial UI elements, which are meant to adjust and move in response to the user. As we did with Androidify, build with adaptive layouts, so you can break your layouts down into parts for your spatial UI. Starting the design with Home Space Luckily, Android XR lets you start with your app as it is today for Home Space, so we could transition to the expanded XR designs by just adding a window toolbar and Full Space transition button. We also considered possible hardware features and how the user would interact with them. The mobile layouts for Androidify adapt across various postures, class sizes, and the number of cameras to give more photo options. Following this model, we had to adapt the camera layout for headset devices as well. We also needed to make adjustments for text to work to account for the proximity of the UI to the user. Designing for the bigger shift to Full Space Full Space was the biggest shift, but gave us the most creative room to adapt our design. From tablet to XR Androidify uses visual containment, or panes, to group features with a background and outline, like the "Take or choose a photo" pane. We also used components like the top app bar to create natural containment by framing the other panes. Finally, intrinsic containment is suggested by the proximity of certain elements to others, such as the "Start transformation" bottom button, which is near the "Choose my bot color" pane. Spatial panels made for easy separation. To decide how to adapt your mobile designs for spatial panels, try removing surfaces starting with the surface that is the furthest back and then moving forward. See how many backgrounds you can remove and what remains. After we did this exercise for Androidify, the large green Android squiggle was what remained. The squiggle not only acted as a branding moment and background, but an anchor for the content in 3D space. Establishing this anchor both made it easier to imagine how elements could move around it, and how we could use proximity to break out and translate the rest of the user experience. Other design tips for helping your app get spatial * Let things be uncontained: Break out components and give them some real (spatial) space. It's time to give those UI elements some breathing space. * Remove surfaces: Hide the background, see what that does to your designs. * Motivate with motion: How are you using transitions in your app? Use that character to imagine your app breaking out into VR. * Choose an anchor: Don’t lose your users in the space. Have an element that helps collect or ground the UI. For more about XR UI design patterns, check out Design for Android XR on Android Developers. Spatial UI basics Now that we've covered Ivy's experience adapting her mindset while designing Androidify for XR, let's talk about developing spatial UI. Developing a spatial UI with the Jetpack XR SDK should seem familiar if you’re used to working with modern Android tools and libraries. You’ll find concepts you’re already familiar with, like creating layouts with Compose. In fact, spatial layouts are really similar to 2D layouts using rows, columns, and spacers: These elements are arranged in SpatialRows and SpatialColumns The spatial elements shown here are SpatialPanel composables, which let you display 2D content like text, buttons, and videos. 
Subspace {
    SpatialPanel(
        SubspaceModifier
            .height(824.dp)
            .width(1400.dp)
    ) {
        Text("I'm a panel!")
    }
}

---

A SpatialPanel is a subspace composable. Subspace composables must be contained within a Subspace, and are modified by SubspaceModifier objects. Subspaces can be placed anywhere within your app’s UI hierarchy, and can only contain Subspace composables. SubspaceModifier objects are also really similar to Modifier objects: they control parameters like sizing and positioning. An Orbiter can be attached to a SpatialPanel and move along with the content it’s attached to. They’re often used to provide contextual controls about the content they’re attached to, giving the content the primary focus. They can be placed at any of the four sides of the content, at a configurable distance. An Orbiter is attached to the bottom of a SpatialPanel 
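To make the Orbiter idea concrete, here is a minimal sketch (not the actual Androidify code) of a panel with an orbiter attached to its bottom edge; the OrbiterEdge position, the 24.dp offset, and the button contents are illustrative assumptions, so check the Jetpack Compose for XR reference for the exact parameters.

Subspace {
    SpatialPanel(
        SubspaceModifier
            .height(824.dp)
            .width(1400.dp)
    ) {
        // The panel's regular 2D content.
        Text("I'm a panel!")

        // Contextual controls that ride along the bottom edge of the panel.
        Orbiter(
            position = OrbiterEdge.Bottom, // assumed edge constant
            offset = 24.dp                 // assumed distance from the panel
        ) {
            Button(onClick = { /* handle click */ }) {
                Text("Back")
            }
        }
    }
}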
There are many more spatial UI elements, but these are the main ones we used to create spatial layouts for Androidify. Getting started with XR development Let’s start with the project setup. We added the Jetpack XR Compose dependency, which you can find on the Jetpack XR dependencies page. We added code for a button that transitions the user into Full Space, starting with detecting the capability to do so:

@Composable
fun couldRequestFullSpace(): Boolean =
    LocalSpatialConfiguration.current.hasXrSpatialFeature &&
        !LocalSpatialCapabilities.current.isSpatialUiEnabled

---

Then, we made a new button component that adds the Expand Content icon to our existing layouts, and gave it an onClick behavior:

@Composable
fun RequestFullSpaceIconButton() {
    if (!couldRequestFullSpace()) return
    val session = LocalSession.current ?: return

    IconButton(
        onClick = {
            session.scene.requestFullSpaceMode()
        },
    ) {
        Icon(
            imageVector =
                vectorResource(R.drawable.expand_content_24px),
            contentDescription = "To Full Space",
        )
    }
}

---

Now, clicking that button just shows the Medium layout in Full Space. We can check the spatial capabilities and determine if spatial UI can be displayed – in that case, we’ll show our new spatial layout instead:

@Composable
fun HomeScreenContents() {
    val layoutType = when {
        LocalSpatialCapabilities.current.isSpatialUiEnabled ->
            HomeScreenLayoutType.Spatial
        isAtLeastMedium() -> HomeScreenLayoutType.Medium
        else -> HomeScreenLayoutType.Compact
    }

    when (layoutType) {
        HomeScreenLayoutType.Compact ->
            HomeScreenCompactPager(...)
        HomeScreenLayoutType.Medium ->
            HomeScreenMediumContents(...)
        HomeScreenLayoutType.Spatial ->
            HomeScreenContentsSpatial(...)
    }
}

---

Implementing the design for the Home Screen Let’s go back to the spatial design for the Home Screen in Full Space to understand how it was implemented. We identified two SpatialPanel elements here: one panel that the video card is in on the right, and one that contains the main UI. Finally, there’s an Orbiter attached to the top. Let’s start with the video player panel:

@Composable
fun HomeScreenContentsSpatial(...) {
    Subspace {
        SpatialPanel(
            SubspaceModifier
                .fillMaxWidth(0.2f)
                .fillMaxHeight(0.8f)
                .aspectRatio(0.77f)
                .rotate(0f, 0f, 5f),
        ) {
            VideoPlayer(videoLink)
        }
    }
}

---

We simply reused the 2D VideoPlayer component from the regular layouts in a SpatialPanel with no additional changes! Here’s what it looks like standalone: The main content panel followed the same story: we reused medium panel content in a SpatialPanel.

SpatialPanel(
    SubspaceModifier.fillMaxSize(),
    resizePolicy = ResizePolicy(
        shouldMaintainAspectRatio = true
    ),
    dragPolicy = MovePolicy()
) {
    Box {
        FillBackground(R.drawable.squiggle_full)
        HomeScreenSpatialMainContent(...)
    }
}

---

We gave this panel a ResizePolicy, which gives the panel some handles near the edges that let the user resize the panel. It also has a MovePolicy, which lets the user drag it around. Placing them in the same Subspace makes them independent of each other, so we made the VideoPlayer panel a child of the main content panel. Through this parent-child relationship, the VideoPlayer panel moves when the main content panel is dragged.

@Composable
fun HomeScreenContentsSpatial(...) {
    Subspace {
        SpatialPanel(SubspaceModifier..., resizePolicy, dragPolicy) {
            Box {
                FillBackground(R.drawable.squiggle_full)
                HomeScreenSpatialMainContent(...)
            }
            Subspace {
                SpatialPanel(SubspaceModifier...) {
                    VideoPlayer(videoLink)
                }
            }
        }
    }
}

---

That’s how we did the first screen! Moving on to the other screens I’ll go over some of the other screens briefly too, highlighting specific considerations made for each one. The creation screen in Full Space Here, we used SpatialRow and SpatialColumn composables to create a layout that fits the recommended viewing space, again reusing components from the Medium layout.
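To give a feel for how such a layout comes together, here is a minimal, illustrative sketch (not the actual Androidify code) that arranges panels with SpatialRow and SpatialColumn; the panel sizes and contents are placeholder assumptions.

Subspace {
    SpatialRow {
        // Main pane on the left.
        SpatialPanel(
            SubspaceModifier.width(1024.dp).height(800.dp)
        ) {
            Text("Take or choose a photo")
        }
        // Two supporting panes stacked on the right.
        SpatialColumn {
            SpatialPanel(
                SubspaceModifier.width(400.dp).height(380.dp)
            ) {
                Text("Choose my bot color")
            }
            SpatialPanel(
                SubspaceModifier.width(400.dp).height(380.dp)
            ) {
                Text("Start transformation")
            }
        }
    }
}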
Results Screen in Full Space: A bot generated with a prompt: red baseball cap, aviator sunglasses, a light blue t-shirt, red and white checkered shorts, green flip flops, and is holding a tennis racket. The results screen shows the complimentary quotes using a feathering effect, allowing them to fade out near the edges of the screen. It also uses an actual 3D transition when viewing the input that was used, flipping the picture over in space. Publishing to the Google Play Store Now that the app is ready for XR with the spatial layouts, we went on to release it onto the Play Store. There’s one final, important change we made to the app’s AndroidManifest.xml file:

<!-- Androidify can use XR features if they're available; they're not required. -->
<uses-feature android:name="android.software.xr.api.spatial"
              android:required="false" />

---

This lets the Play Store know that this app has XR-differentiated features, showing a badge that lets users know that the app was made with XR in mind: Androidify as shown in the Google Play Store on Android XR When uploading the release, we don’t need any special steps to release for XR: the same app is distributed as normal, to users on the mobile track and to users on an XR device! However, you can choose to add XR-specific screenshots of your app, or even upload an immersive preview of your app using a spatial video asset. On Android XR devices, the Play Store automatically displays this as an immersive 3D preview, allowing users to experience the depth and scale of your content before they install the app. Start building your own experiences today Androidify is a great example of how to spatialize an existing 2D Jetpack Compose app. Today, we showed the full process of developing a spatial UI for Androidify, from design to code to publishing. We modified the existing designs to work with spatial paradigms, used SpatialPanel and Orbiter composables to create spatial layouts that show when the user enters Full Space, and finally, released the new version of the app onto the Play Store. We hope that this blog post helped you understand how you can bring your own apps to Android XR! Here are a few more links that can help you on your way: * Check out the source code for Androidify, and make your own bot using Androidify on Google Play. * Get started with our developer documentation and learn more about Jetpack Compose for XR. * Download the Android XR emulator and try your own app out!
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Giving your apps a new home on Samsung Galaxy XR, the first device powered by Android XR
_Posted by Matthew McCullough, Vice President, Product Management, Android Developer_ _ _ The first device powered by Android XR is here: engineered by Samsung, Galaxy XR, is available now, putting the possibilities of immersive experiences directly into users' hands. With this launch, the Android XR platform, built for the era of AI, leverages the helpfulness of Gemini to bring users new ways to use an AI assistant and experience apps and games. Galaxy XR unlocks a new form factor that extends the reach of the Android ecosystem, seamlessly blending digital content with the physical world in ways that feel intuitive and natural. This creates innovative opportunities for your app, allowing you to transform existing 2D content into interactive 3D elements and allowing users to intuitively place and resize your app's experience directly in their living environments. If you're building for Android, you're building for Android XR - because of the underlying adaptive app framework. And to help you get started, we are kicking off the Android XR Spotlight Week today. During this week, we’ll deep dive into what you need to know about bringing your 2D app to the platform, building spatial experiences, and publishing your app. You’ll even have the opportunity to get your questions answered directly from our team, so be sure to tune in. Android XR extends your apps into a new reality Galaxy XR offers a new way of interacting with computing that is anchored by natural input like hands, eyes, and voice input. It moves beyond the constraints of a physical screen, unlocking new ways for users to watch, create, and explore. Android XR is an extension of the Android development foundation, offering a unified development target for the next generation of extended reality devices. This means you can efficiently scale your work, leveraging the established foundation of the Android ecosystem while utilizing familiar tools and APIs. The platform is built to simplify your transition to spatial development. You can adapt your existing apps and build new experiences using the Jetpack XR SDK, which integrates seamlessly with Android Studio and the Android APIs you already use, or utilize industry-standard tools like the Unity engine and open frameworks such as OpenXR and WebXR.  Android XR is designed to lower the barrier to entry, allowing you to use your current expertise to create truly differentiated apps. The platform is ready for your innovation, offering both familiarity and powerful new capabilities. Adapt your existing apps and build new immersive experiences Getting started on Android XR is a flexible process, allowing you to choose the path that best suits your current app status and development goals. Adapting existing apps:  In most cases, your mobile app will run great on Android XR with little to no additional development through Google Play, now available on headsets. Apps that require a bit of effort should start by ensuring your current Android apps are optimized using the adaptive app principles. Many of the best practices you've already implemented for foldables and tablets will help your app shine on headsets. Building new experiences: The Jetpack XR SDK provides the tools you need to create entirely new spatial experiences that truly take advantage of the form factor. The SDK allows you to spatialize your UI, utilize Jetpack Compose for XR to build declarative spatial layouts, and integrate 3D models and rich content using Jetpack SceneCore. 
And, with the addition of ARCore for Jetpack XR, you can include perception capabilities to seamlessly blend digital content with the real world. Now is the time to design for focused productivity, immersive entertainment, and next-generation discovery. And, developers are already building for the platform. The team at Calm successfully transformed their mobile app into an immersive spatial experience by leveraging their existing Android codebase. They were able to build their first functional XR menus on the first day and a core XR experience in just two weeks, proving that building for XR is a natural extension of existing Android work. Port your Unity titles with ease with OpenXR: Android XR supports the OpenXR standard, which ensures a common set of APIs and standards across devices. Early access partners have been impressed by how smooth it is to port existing XR titles, leveraging both OpenXR and the established Android XR SDK for Unity. Building on this solid foundation gives you access to a wide array of features, from hand and eye tracking to scene meshing and anchor persistence, making truly immersive apps possible. To get started, download Unity 6 and bring your games and experiences to Android XR. Start Building for Android XR Today The future of extended reality is here, and your current Android expertise is the catalyst for shaping what comes next. Get started building your next experience and adapting your current apps to meet users on this new form factor. Head over to developer.android.com/xr for documentation, guides, and resources to begin building for Android XR. And be sure to tune in to the Android XR Spotlight Week to get your questions answered and deep dive into the technical details. As we continue to refine the tooling, please share your feedback; your input directly guides the evolution of the Android XR development experience. We can't wait to experience what you build.
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Welcome to Android XR Spotlight Week!
_Posted by Jan Kleinert, Android Developer Relations Engineer and Bradley Allen, Android Technical Writer_ _ _ Samsung Galaxy XR is here, and it's the first device powered by Android XR! Since we launched the developer preview of the platform last December, developers have started building apps and games made for Android XR. And, with this new hardware, users can now discover and explore these immersive experiences directly on their headsets. To support your development journey and ensure you have all the tools and knowledge to start building for Android XR today, we’re kicking off the Android XR Spotlight Week! Over the next few days, we’ll dive into the SDKs, offer guidance on building high-performance apps with Unity, and provide a clear path to publishing your innovative apps for the Galaxy XR and future Android XR devices. Check back here daily for updates and direct links to new posts, videos, and resources! ### Here’s what we’re covering during Android XR Spotlight Week ### **Building for Android XR with the Jetpack XR SDK (October 22nd)** : Read the overview to understand how building for headsets on Android XR leverages familiar tools and APIs, then explore the blog and video guide on the Jetpack XR SDK to start building differentiated spatial experiences, demonstrated by the Androidify app. Achieving high quality experiences with Unity (October 23rd): Leverage the Unity engine to create high performing XR experiences, start by reviewing our practical setup guide and then discover best practices and optimization tips to ensure high-quality app experiences for your users. Publishing to Google Play and getting your questions answered (October 24th): Learn how to publish your immersive app on Google Play, and then join our live #AskAndroid Q&A session at 9am PT with the Android XR team to get your questions answered. Explore how Calm reimagine mindfulness with Android XR (October 27th): Watch the video with the Calm team to learn more about their process for adapting their existing Android app and launching innovative spatial experiences for Android XR. We’re excited for you to join us this week on our journey with XR. The opportunity to build the next generation of immersive apps and games is here, and we can’t wait to see what you create.
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
Grow your app with the Google Play Apps Accelerator - apply now
_Posted by Robbie McLachlan - Developer Marketing_ At Google Play, we’re committed to helping businesses of all sizes reach their full potential. That’s why we’re excited to open submissions for our new Google Play Apps Accelerator. If you’re an early-stage app company ready to scale, this program is designed for you. Selected companies from more than 60 eligible countries will join a 12-week accelerator starting in March 2026. The program combines 1:1 mentorship from Google and industry experts with a curriculum focused on growing your app business on Google Play. Sessions will cover: * Build: We'll cover the journey from a prototype to a scalable, high-quality app - focusing on the tools and best practices for longevity. * Grow: So you've launched. Now what? Waiting for users to show up isn't the best strategy. We'll dive into what really works for growth. You'll get the playbook on go-to-market strategy, user acquisition, and LiveOps that turns downloads into a user community. * Earn: Want to build a smart monetization strategy? We'll focus on revenue models that build value and drive sustainable growth for your business. * Lead: You've mastered your product, but what about your people? Your role as a founder is evolving. We’ll give you leadership insights to hire the right people, grow their skills, and build a thriving culture. You’ll also get the chance to meet and connect with other founders from around the world who are looking to take their apps to the next level. All submissions must be completed by January 7, 2026, @ 11:59 PM GMT. Check out our terms and conditions for more information. Apply now to supercharge your growth on Google Play.
android-developers.googleblog.com
October 29, 2025 at 2:45 PM
Dynamic App Links: Elevating your Android deep linking
_Posted by Ran Mor - Product Manager_ We're excited to announce the availability of Dynamic App Links, a significant leap forward for Android App Links that brings them on par with, and in many ways surpasses, industry standards for deep linking. For too long, Android App Links have been limited in their functionality, but with this launch, we're introducing powerful new features that provide unparalleled control and flexibility for developers. Since Android 6, App Links has been crucial for delivering a seamless web-to-app user experience. By directing users directly to relevant content within your app, rather than a web browser or mobile-web page, you enhance engagement, boost conversions, and foster greater customer loyalty. Now Dynamic App Links, available on Android 15 and later,  makes achieving this even easier and more effective. ## What's New: Functionalities Enabled by Dynamic App Links The core of these enhancements lies in the Digital Asset Links JSON file. Previously, this file was primarily used for basic verification. Now, it's a powerful configuration tool that allows you to specify paths, query parameters, fragments, and exclusions, providing a dynamic and robust deep linking solution. Here what’s new in Dynamic App Links: ### Exclusions support You can now specify certain paths or sections of a URL that should not open your app, even if they would otherwise match your App Link configuration. This is incredibly useful for: * Unsupported Content: Directing users to web content that isn't yet supported within your app. * Legacy Content: Managing old URLs that you no longer want to route to your app. * Specific Campaigns: Temporarily excluding certain links during promotions or tests. This granular control ensures users always land in the most appropriate experience. ### Query parameters support With the new Query parameters functionality you can define specific parameters that, if present in a URL, will prevent your app from opening. This opens up exciting possibilities for: * Dynamic Exclusions: Quickly turning off app linking for specific scenarios without requiring an app update. * A/B Testing: Directing users to different experiences (app vs. web) based on test parameters. * Controlled Rollouts: Gradually enabling app linking for certain user segments. ### Dynamic updates Make easier updates to your App Links configuration without needing to update your app. You can now specify the URL paths that your app will handle directly within the Digital Asset Links JSON file that is hosted on your server. This means you can: * Respond quickly to changes: Adapt your deep linking strategy in real-time without the overhead of a new app release. * Reduce development cycles: Implement and test App Link changes much more efficiently. * Maintain agility: Keep your app's deep linking configuration current with your evolving content and features. ## Why Dynamic App Links? Android Dynamic App Links are the preferred way to link to content within your app because they offer: * Seamless User Experience: Direct users instantly to the exact content they're looking for, bypassing browser redirects. * Improved Engagement: Keep users within your app, leading to higher engagement and longer session times. * Increased Conversions: Guide users effortlessly through your app's flows, improving the likelihood of desired actions. * Enhanced Customer Loyalty: Deliver a polished and efficient experience that keeps users coming back. 
With Dynamic App Links, you now have the tools to build even more powerful and flexible deep linking experiences, ensuring your users always find the content they need, right where they expect it. We're excited to see what you'll build with Dynamic App Links. Visit our documentation to start exploring these new features today and elevate your app's deep linking strategy!
android-developers.googleblog.com
October 28, 2025 at 2:44 AM
#WeArePlay: Meet the founder making breast cancer awareness simple and accessible
_Posted by Robbie McLachlan - Developer Marketing_ In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Corrine, the founder of Know Your Lemons. After losing family and friends to breast cancer, she used her skills as a designer to create a simple, visual, and accessible way to educate people about breast cancer. Discover how her award-winning app is changing the conversation around breast cancer and saving lives worldwide. ## Your mission is deeply personal. What inspired you to create the app and use lemons? In my early 20s, I lost both my grandmothers and a close friend to breast cancer, which made me look for information about the disease. As a graphic designer, I knew I could improve the educational materials available by creating something more effective. That’s where the lemons come in. I was searching for a universal symbol and discovered that lemons have features like nipples and pores, like a breast. The idea was perfected when a radiologist explained that a cancerous lump is usually hard, like a lemon seed. It gave us a simple, friendly, and visually clear way to talk about a scary topic and explain the 12 signs of breast cancer so that everyone can understand. ## How has Know Your Lemons helped cultural conversations around breast cancer? Our mission is to start conversations. We found that it’s easier to say, "Hey, have you seen these lemons?" than, "Do you know the symptoms of breast cancer?" Our app opens the door to these crucial, life-saving discussions. For example, a woman in an African village with breast cancer had been ostracized because her community thought she was cursed. One day, she saw one of our volunteers giving a talk using our visuals and recognized her own condition. Not only did we help get her into treatment, but our team returned to her village to teach everyone that cancer is a disease, not a curse. We also heard of a boyfriend who found our visual of the 12 signs of breast cancer on social media and showed it to his girlfriend. Weeks later, she noticed a lump and, remembering the visual, pushed her doctors for a check-up. She was diagnosed with breast cancer and started treatment—all because her boyfriend found our posts. ## How does your app and the ‘Lemonistas’ work together to spread awareness? To spread our mission, we use a two-part approach that combines powerful technology with a human touch. First, the Know Your Lemons app acts as a complete guide in your pocket. It has our visual self-exam guide that makes finding a lump easier. To bring that education to life, we have over 1,200 trained volunteers called “Lemonistas”. They go into their communities to teach hands-on classes using digital presentations and physical props, like a model of a lemon with a lump inside. Our partners in Tanzania, for example, have seen women come to their clinic at stage one or two instead of stage four. They estimate that 12 lives were saved in 18 months, which shows how our tools in the hands of passionate volunteers can make a life-saving difference. ## How has being on Google Play helped your mission? It’s really about making this accessible to people. One of the best reviews we ever got started as a one-star review. A user couldn't sign in because of a bug. We fixed it and replied to her comment on the Play Store. A couple of months later, she changed her review to five stars because our app had helped her find her breast cancer. She said the way we explained the self-exam made all the difference. 
That ability to connect directly with users on the Play Store is so important; without it, she might not have found it until it was too late. ## The app is always evolving. What is next for Know Your Lemons? We recently added a feature for companies to offer a breast health benefit to their employees, which helps them get on the right screening plan and connects them to resources like genetic counseling. The next big feature we’re adding is an AI-powered matchmaking tool for newly diagnosed patients. It will connect them with patient advocacy organizations that are specific to their needs—whether it’s a certain type of breast cancer or support for parents with young children. Most patients don't know these resources exist, so we bring the help directly to them. Discover other inspiring app and game founders featured in #WeArePlay.
android-developers.googleblog.com
October 27, 2025 at 5:17 AM