Android P features and APIs  |  Android Developers

Android P introduces great new features and capabilities for users and developers. This document highlights what's new for developers.

To learn about the new APIs, read the API diff report or visit the Android API reference — new APIs are highlighted to make them easy to see. Also be sure to check out Android P Behavior Changes to learn about areas where platform changes may affect your apps.

Indoor positioning with Wi-Fi RTT

Android P adds platform support for the IEEE 802.11mc Wi-Fi protocol—also known as Wi-Fi Round-Trip-Time (RTT)—to let you take advantage of indoor positioning in your apps.

On Android P devices with hardware support, your apps can use the new RTT APIs to measure the distance to nearby RTT-capable Wi-Fi Access Points (APs). The device must have location enabled and Wi-Fi scanning turned on (under Settings > Location), and your app must have the ACCESS_FINE_LOCATION permission. The device doesn't need to connect to the APs to use RTT. To maintain privacy, only the phone is able to determine the distance to the AP; the APs do not have this information.
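As a sketch of the ranging flow (not complete code: `context` and an RTT-capable `scanResult` from a recent Wi-Fi scan are assumed, and the permission check is omitted), measuring the distance to one AP might look like this:

```java
// Sketch only: assumes `context` is a Context, `scanResult` is an
// RTT-capable ScanResult, and ACCESS_FINE_LOCATION has been granted.
WifiRttManager rttManager =
        (WifiRttManager) context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE);

RangingRequest request = new RangingRequest.Builder()
        .addAccessPoint(scanResult)   // up to RangingRequest.getMaxPeers() APs
        .build();

rttManager.startRanging(request, context.getMainExecutor(),
        new RangingResultCallback() {
            @Override
            public void onRangingResults(List<RangingResult> results) {
                for (RangingResult result : results) {
                    if (result.getStatus() == RangingResult.STATUS_SUCCESS) {
                        int distanceMm = result.getDistanceMm();
                        // Feed distanceMm (and getDistanceStdDevMm())
                        // into your positioning logic.
                    }
                }
            }

            @Override
            public void onRangingFailure(int code) {
                // The ranging attempt failed entirely.
            }
        });
```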

If your device measures the distance to 3 or more APs, you can use a multilateration algorithm to estimate the device position that best fits those measurements. The result is typically accurate within 1 to 2 meters.
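To make the multilateration step concrete, here is a minimal, self-contained 2D sketch in plain Java (no Android APIs; the AP coordinates and target point are hypothetical). It linearizes the three range equations by subtracting the first from the other two and solves the resulting 2×2 linear system. A production implementation would use least squares over noisy measurements; this version assumes exact distances.

```java
public class Multilateration {
    /** Estimate (x, y) from three AP positions and exact distances (noise-free sketch). */
    static double[] locate(double[][] ap, double[] d) {
        // Subtracting the first range equation from the i-th gives a linear equation:
        // 2*(xi - x1)*x + 2*(yi - y1)*y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
        double a11 = 2 * (ap[1][0] - ap[0][0]), a12 = 2 * (ap[1][1] - ap[0][1]);
        double a21 = 2 * (ap[2][0] - ap[0][0]), a22 = 2 * (ap[2][1] - ap[0][1]);
        double b1 = d[0] * d[0] - d[1] * d[1]
                  + ap[1][0] * ap[1][0] - ap[0][0] * ap[0][0]
                  + ap[1][1] * ap[1][1] - ap[0][1] * ap[0][1];
        double b2 = d[0] * d[0] - d[2] * d[2]
                  + ap[2][0] * ap[2][0] - ap[0][0] * ap[0][0]
                  + ap[2][1] * ap[2][1] - ap[0][1] * ap[0][1];
        double det = a11 * a22 - a12 * a21;   // APs must not be collinear
        return new double[] { (b1 * a22 - b2 * a12) / det,
                              (a11 * b2 - a21 * b1) / det };
    }

    public static void main(String[] args) {
        double[][] ap = { {0, 0}, {10, 0}, {0, 10} };   // AP positions in meters
        double[] target = { 3, 4 };                      // true device position
        double[] d = new double[3];
        for (int i = 0; i < 3; i++) {
            d[i] = Math.hypot(target[0] - ap[i][0], target[1] - ap[i][1]);
        }
        double[] est = locate(ap, d);
        System.out.printf("%.3f %.3f%n", est[0], est[1]);   // ~3.000 4.000
    }
}
```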

With this accuracy, you can build new experiences like in-building navigation, fine-grained location-based services such as disambiguated voice control (for example, "Turn on this light"), and location-based information (such as "Are there special offers for this product?").

Display cutout support

Android P offers support for the latest edge-to-edge screens with display cutout for camera and speaker. The new DisplayCutout class lets you find out the location and shape of the non-functional areas where content shouldn't be displayed. To determine the existence and placement of these cutout areas, use the getDisplayCutout() method.
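The getDisplayCutout() method lives on WindowInsets. A sketch of querying it (assuming `view` is any View already attached to a window):

```java
// Sketch: query the cutout, if any, once the view is attached to a window.
WindowInsets insets = view.getRootWindowInsets();
if (insets != null) {
    DisplayCutout cutout = insets.getDisplayCutout();
    if (cutout != null) {
        // Areas where content should not be displayed:
        List<Rect> unsafeAreas = cutout.getBoundingRects();
        int topInset = cutout.getSafeInsetTop();
        // Adjust your layout so important content avoids these regions.
    }
}
```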

A new window layout attribute, layoutInDisplayCutoutMode, allows your app to lay out its content around a device's cutouts. You can set this attribute to one of the following values:

  - LAYOUT_IN_DISPLAY_CUTOUT_MODE_DEFAULT
  - LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
  - LAYOUT_IN_DISPLAY_CUTOUT_MODE_NEVER
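For example, to let content extend into the cutout area on the short edges of the screen, you can set the mode from code in an Activity (a sketch; set it before the window is shown, for example in onCreate()):

```java
// Sketch: opt in to rendering into the short-edge cutout area.
WindowManager.LayoutParams lp = getWindow().getAttributes();
lp.layoutInDisplayCutoutMode =
        WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES;
getWindow().setAttributes(lp);
```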

You can simulate a screen cutout on any device or emulator running Android P as follows:

  1. Enable developer options.
  2. In the Developer options screen, scroll down to the Drawing section and select Simulate a display with a cutout.
  3. Select the size of the cutout.
Note: We recommend that you test the content display around the cutout area by using a device or emulator running Android P.


Notifications

Android P introduces several enhancements to notifications, all of which are available to developers targeting Android P and above.

MessagingStyle with photo attached.

MessagingStyle with replies and conversation.

Enhanced messaging experience

Starting in Android 7.0 (API level 24), you could add an action to reply to messages or enter other text directly from a notification. Android P builds on this feature with several enhancements, including support for attaching images to messages, as the following snippets show:


Kotlin

// create new Person
val sender = Person()
        .setName(name)
        .setUri(uri)
        .setIcon(null)
        .build()

// create image message
val message = Message("Picture", time, sender)
        .setData("image/", imageUri)

val style = Notification.MessagingStyle(getUser())
        .addMessage("Check this out!", 0, sender)
        .addMessage(message)


Java

// create new Person
Person sender = new Person()
        .setName(name)
        .setUri(uri)
        .setIcon(null)
        .build();

// create image message
Message message = new Message("Picture", time, sender)
        .setData("image/", imageUri);

Notification.MessagingStyle style = new Notification.MessagingStyle(getUser())
        .addMessage("Check this out!", 0, sender)
        .addMessage(message);

Channel settings, broadcasts, and Do Not Disturb

Android O introduced notification channels, which let you create a user-customizable channel for each type of notification you want to display. Android P further simplifies notification channel settings, with changes covering channel settings, broadcasts, and Do Not Disturb.

Multi-camera support and camera updates

You can now access streams simultaneously from two or more physical cameras on devices running Android P. On devices with either dual-front or dual-back cameras, you can create innovative features not possible with just a single camera, such as seamless zoom, bokeh, and stereo vision. The API also lets you call a logical or fused camera stream that automatically switches between two or more cameras.

Other camera improvements include new session parameters that help reduce delays during initial capture, and surface sharing that lets camera clients handle various use cases without needing to stop and start camera streaming. We’ve also added APIs for display-based flash support and access to OIS timestamps for app-level image stabilization and special effects.

In Android P the multi-camera API supports monochrome cameras for devices with FULL or LIMITED capability. Monochrome output is achieved via the YUV_420_888 format with Y as grayscale, U (Cb) as 128, and V (Cr) as 128.
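To see why U = V = 128 yields grayscale, apply a standard BT.601 YUV-to-RGB conversion (a plain-Java illustration, not an Android API): with both chroma channels at their midpoint, the chroma terms vanish and R = G = B = Y.

```java
public class MonochromeYuv {
    /** Standard BT.601 full-range YUV -> RGB conversion (illustration only). */
    static int[] yuvToRgb(int y, int u, int v) {
        double r = y + 1.402 * (v - 128);
        double g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
        double b = y + 1.772 * (u - 128);
        return new int[] {
            (int) Math.max(0, Math.min(255, Math.round(r))),
            (int) Math.max(0, Math.min(255, Math.round(g))),
            (int) Math.max(0, Math.min(255, Math.round(b)))
        };
    }

    public static void main(String[] args) {
        // With U = V = 128, every pixel is a neutral gray: R = G = B = Y.
        int[] rgb = yuvToRgb(200, 128, 128);
        System.out.println(rgb[0] + " " + rgb[1] + " " + rgb[2]);   // 200 200 200
    }
}
```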

Android P also enables support for external USB/UVC cameras on supported devices.

ImageDecoder for drawables and bitmaps

Android P introduces ImageDecoder to provide a modernized approach for decoding images. You should use ImageDecoder to decode an image rather than the BitmapFactory and BitmapFactory.Options APIs.

ImageDecoder lets you create a Drawable or a Bitmap from a byte buffer, a file, or a URI. To decode an image, first call createSource() with the source of the encoded image. Then, call decodeDrawable() or decodeBitmap() by passing the ImageDecoder.Source object to create a Drawable or a Bitmap. To change default settings, pass OnHeaderDecodedListener to decodeDrawable() or decodeBitmap(). ImageDecoder calls onHeaderDecoded() with the image's default width and height, once they are known. If the encoded image is an animated GIF or WebP, decodeDrawable() returns a Drawable that is an instance of the AnimatedImageDrawable class.
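A sketch of that flow (assuming `context` is a Context and `uri` points at an encoded image; the halved target size here is an arbitrary example, and IOException handling is omitted):

```java
// Sketch: decode a Bitmap from a URI, downscaling it via the header callback.
ImageDecoder.Source source =
        ImageDecoder.createSource(context.getContentResolver(), uri);

Bitmap bitmap = ImageDecoder.decodeBitmap(source, (decoder, info, src) -> {
    // Called once the header is decoded and the default size is known.
    Size size = info.getSize();
    decoder.setTargetSize(size.getWidth() / 2, size.getHeight() / 2);
});
```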

There are different methods you can use to set image properties; for example, setTargetSize() and setTargetSampleSize() control the dimensions of the decoded image.

ImageDecoder also lets you add customized and complicated effects to an image such as rounded corners or circle masks. Use setPostProcessor() with an instance of the PostProcessor class to execute whatever drawing commands you want. When you post-process an AnimatedImageDrawable, effects are applied to all frames.


Android P introduces a new AnimatedImageDrawable class for drawing and displaying GIF and WebP animated images. AnimatedImageDrawable works similarly to AnimatedVectorDrawable in that RenderThread drives the animations of AnimatedImageDrawable. RenderThread also uses a worker thread to decode, so that decoding does not interfere with RenderThread. This implementation allows your app to have an animated image without managing its updates or interfering with your app's UI thread.

An AnimatedImageDrawable can be decoded with the new ImageDecoder. The following code snippet shows how to use ImageDecoder to decode your AnimatedImageDrawable:

Drawable d = ImageDecoder.decodeDrawable(...);
if (d instanceof AnimatedImageDrawable) {
    ((AnimatedImageDrawable) d).start();   // Prior to start(), the first frame is displayed
}

ImageDecoder has several methods allowing you to further modify the image. For example, you can use the setPostProcessor() method to modify the appearance of the image, such as applying a circle mask or rounding corners.

HDR VP9 video, HEIF image compression, and media APIs

Android P adds built-in support for High Dynamic Range (HDR) VP9 Profile 2, so you can now deliver HDR-enabled movies to your users from YouTube, Play Movies, and other sources on HDR-capable devices.

Android P adds support for HEIF (heic) image encoding to the platform. HEIF still image samples are supported in the MediaMuxer and MediaExtractor classes. HEIF improves compression to save on storage and network data. With platform support on Android P devices, it’s easy to send and utilize HEIF images from your backend server. Once you’ve made sure that your app is compatible with this data format for sharing and display, give HEIF a try as an image storage format in your app. You can do a JPEG-to-HEIC conversion using ImageDecoder or BitmapFactory to obtain a bitmap from the JPEG, and you can use HeifWriter in the new Support Library alpha to write HEIF still images from a YUV byte buffer, Surface, or Bitmap.

Media metrics are now also available from the AudioTrack, AudioRecord, and MediaDrm classes.

Android P adds new methods to the MediaDrm class to get metrics, HDCP levels, security levels, and the number of sessions, and to add more control over security levels and secure stops. See the API diff report for details.

In Android P the AAudio API includes new AAudioStream attributes for usage, content type, and input preset. Using these attributes you can create streams that are tuned for VoIP or camcorder applications. You can also set the SessionID to associate an AAudio stream with a submix that can include effects. Use the AudioEffect API to control the effects.

Android P includes a new AudioEffect API for DynamicsProcessing. With this class you can build channel-based audio effects composed of multiple stages of various types including equalization, multi-band compression, and limiter. The number of bands and active stages is configurable, and most parameters can be controlled in real time.

Data cost sensitivity in JobScheduler

With Android P, JobScheduler has been improved to let it better handle network-related jobs for the user, in coordination with network status signals provided separately by carriers.

Jobs can now declare their estimated data size, signal prefetching, and specify detailed network requirements—carriers can report networks as being congested or unmetered. JobScheduler then manages work according to the network status. For example, when a network is congested, JobScheduler might defer large network requests. When on an unmetered network, JobScheduler can run prefetch jobs to improve the user experience, such as by prefetching headlines.

When adding jobs, make sure to use setEstimatedNetworkBytes(), setIsPrefetch(), and setRequiredNetwork() when appropriate to help JobScheduler handle the work properly. When your job executes, be sure to use the Network object returned by JobParameters.getNetwork(). Otherwise you'll implicitly use the device’s default network which may not meet your requirements, causing unintended data usage.
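For instance, scheduling a prefetch job that declares its network needs might look like this (a sketch: the job ID, the HeadlinePrefetchService class, and the byte estimates are hypothetical placeholders):

```java
// Sketch: schedule a prefetch job with declared network requirements.
JobScheduler scheduler = context.getSystemService(JobScheduler.class);

JobInfo job = new JobInfo.Builder(JOB_ID,
            new ComponentName(context, HeadlinePrefetchService.class))
        .setRequiredNetwork(new NetworkRequest.Builder()
                .addCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
                .build())
        .setEstimatedNetworkBytes(512 * 1024, 1024)   // download, upload
        .setIsPrefetch(true)
        .build();
scheduler.schedule(job);

// In the service's onStartJob(), route traffic through
// params.getNetwork() rather than the device's default network.
```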

Neural Networks API 1.1

The Neural Networks API was introduced in Android 8.1 (API level 27) to accelerate on-device machine learning on Android. Android P expands and improves the API, adding support for nine new ops — Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze.

Autofill framework

Android 8.0 (API level 26) introduced the autofill framework, which makes it easier to fill out forms in apps. Android P introduces multiple improvements that autofill services can implement to further enhance the user experience when filling out forms. For more details, see the Autofill Framework page.

Security enhancements

Android P introduces a number of new security features, including a unified fingerprint authentication dialog and high-assurance user confirmation of sensitive transactions. For more details, see the Security Updates page.

Client-side encryption of Android backups

Android P enables encryption of Android backups with a client-side secret. Because of this privacy measure, the device's PIN, pattern, or password is required to restore data from the backups made by the user's device. To learn more about the technology behind this new feature, see the Google Cloud Key Vault Service whitepaper.

To learn more about backing up data on Android devices, see Data Backup Overview.


Accessibility

Android P introduces enhancements to the accessibility framework that help you provide even better experiences to users of your app.

Navigation semantics

New attributes make it easier for you to define how accessibility services, especially screen readers, navigate from one part of the screen to another. These attributes can help users who are visually impaired quickly move through text in your app's UI and allow them to make a selection.

For example, in a shopping app, a screen reader can help users navigate directly from one category of deals to the next, without the screen reader having to read all items in a category before moving on to the next.

Accessibility pane titles

Prior to Android P, accessibility services could not always determine when a specific pane of the screen was updated, such as when an activity replaces one fragment with another fragment. Panes consist of logically-grouped, visually-related UI elements that typically comprise a fragment.

In Android P, you can provide accessibility pane titles, or individually identifiable titles, for these panes. If a pane has an accessibility pane title, accessibility services receive more detailed information when the pane changes. This capability allows services to provide more granular information to the user about what's changed in the UI.

To specify the title of a pane, use the new android:accessibilityPaneTitle attribute. You can also update the title of a UI pane that is replaced at runtime using setAccessibilityPaneTitle(). For example, you could provide a title for the content area of a Fragment object.
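For example, when the content of a pane changes at runtime (a sketch; the container ID and string resource are placeholders):

```java
// Sketch: update the pane title after swapping the fragment shown in a container.
// R.id.fragment_container and the string resource are hypothetical.
View pane = findViewById(R.id.fragment_container);
pane.setAccessibilityPaneTitle(getString(R.string.search_results_pane_title));
```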

Heading-based navigation

If your app displays textual content that includes logical headings, set the new android:accessibilityHeading attribute to true for the instances of View that represent those headings. By adding these headings, you allow accessibility services to help users navigate directly from one heading to the next. Any accessibility service can use this new capability to improve users' UI navigation experience.

Caution: To keep your app responsive when accessibility services are enabled, apply accessibility headings only to View objects that contain multiple sections of text.

Group navigation and output

Screen readers have traditionally used the android:focusable attribute to determine when they should read a ViewGroup, or a collection of View objects, as a single unit. That way, users could understand that the views were logically related to each other.

Prior to Android P, you needed to mark each View object within a ViewGroup as non-focusable and the ViewGroup itself as focusable. This arrangement caused some instances of View to be marked focusable in a way that made keyboard navigation more cumbersome.

In Android P, you can use the new android:screenReaderFocusable attribute in place of the android:focusable attribute in situations where making a View object focusable has undesirable consequences. Screen readers place focus on all elements that have set either android:screenReaderFocusable or android:focusable to true.
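For example, a layout sketch (the views and text are placeholders) where a group is read as one unit by screen readers without grabbing keyboard focus:

```xml
<!-- Read as a single unit by screen readers; no keyboard focus implied. -->
<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:screenReaderFocusable="true">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Order status" />
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Shipped" />
</LinearLayout>
```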

Android P adds support for performing convenience actions on behalf of users:

Interaction with tooltips

New features in the accessibility framework give you access to tooltips in an app's UI. Use getTooltipText() to read the text of a tooltip, and use the new ACTION_SHOW_TOOLTIP and ACTION_HIDE_TOOLTIP to instruct instances of View to show or hide their tooltips.

New global actions

Android P introduces support for two new device actions in the AccessibilityService class. Your service can now help users lock their devices and take screenshots using the GLOBAL_ACTION_LOCK_SCREEN and GLOBAL_ACTION_TAKE_SCREENSHOT actions, respectively.

Window change details

Android P makes it easier to track updates to an app's windows when an app redraws multiple windows simultaneously. When a TYPE_WINDOWS_CHANGED event occurs, use the getWindowChanges() API to determine how the windows have changed. During a multiwindow update, each window now produces its own set of events. The getSource() method returns the root view of the window associated with each event.

If an app has defined accessibility pane titles for its View objects, your service can recognize when the app's UI is updated. When a TYPE_WINDOW_STATE_CHANGED event occurs, use the new types returned by getContentChangeTypes() to determine how the window has changed. For example, the framework can now detect when a pane has a new title, or when a pane has disappeared.

Google is committed to improving accessibility for all Android users, providing enhancements that enable you to build services, such as the TalkBack screen reader, for users with accessibility needs. To learn more about how to make your app more accessible and to build accessibility services, see Accessibility.


Rotation

To eliminate unintentional rotations, we’ve added a new mode that pins the current orientation even if the device position changes. Users can trigger rotation manually when needed by pressing a new button in the system bar.

The compatibility impact for apps should be minimal in most cases. However, if your app has any customized rotation behavior or uses any esoteric screen-orientation settings, you might run into issues that previously went unnoticed while the user rotation preference was always set to portrait. We encourage you to take a look at the rotation behavior in all the key activities of your app and make sure that all of your screen-orientation settings still provide the optimal experience.

For more details, see the associated behavior changes.



Text

Android P brings several new text-related features to the platform.

On-device system tracing

System tracing allows you to capture timing data for each process that's running on an Android device and to view the data in an HTML report. This report is useful for identifying what each thread is doing and for viewing globally significant device states.

Note: You don't need to instrument your code to record traces, but doing so can help you see what parts of your app's code may be contributing to hanging threads or UI jank.

In Android P, you can now record system traces from your device, then share these recordings with your development team. To record a system trace, complete the following steps:

  1. Open the Developer Options settings screen.
  2. In the Debugging section, select System Tracing. The System Tracing app opens.
  3. (Optional) Choose the Categories of system and sensor calls that you'd like to trace, and choose a Buffer size (in KB). This step is not required; in most cases, the default settings are preferable.
  4. Enable Record trace to start recording the system trace.
  5. When you're done recording, disable Record trace from the System Tracing app, or tap the notification.
  6. If desired, tap the new notification that appears to share your trace. The trace is delivered as a compressed .ctrace file; to expand the trace to HTML, use the systrace command, which is located in android-sdk/platform-tools/systrace/:

    $ systrace --from-file yourtracefile.ctrace

The trace file is also saved to /data/local/traces/, so you can access it later with adb pull.

Note: You can create a Quick Settings tile to perform all the above steps more quickly. If you'd prefer to use a command-line interface, Android P still supports the systrace command.

If you prefer to use your IDE to gather and access this information, you can also view information about your app's process and CPU activity using Android Studio's built-in CPU profiler. However, the CPU profiler requires your device to be plugged in and using ADB; the System Tracing app does not have such requirements.