Patent: Asynchronous Time Warp With Depth Data

Publication Number: 20190012826

Publication Date: 2019-01-10

Applicants: Qualcomm

Abstract

A wearable display device is described that is connected to a host device. The wearable display device includes one or more sensors configured to generate eye pose data indicating a user’s field of view, one or more displays, and one or more processors. The one or more processors are configured to output a representation of the eye pose data to the host device and extract one or more depth values for a rendered frame from depth data output by the host device. The rendered frame is generated using the eye pose data. The one or more processors are further configured to modify one or more pixel values of the rendered frame using the one or more depth values to generate a warped rendered frame and output, for display at the one or more displays, the warped rendered frame.

Background

Split-rendered systems may include at least one host device and at least one client device that communicate over a network (e.g., a wireless network, wired network, etc.). For example, a Wi-Fi Direct (WFD) system includes multiple devices communicating over a Wi-Fi network. The host device acts as a wireless access point and sends image content information, which may include audio video (AV) data, audio data, and/or video data, to one or more client devices participating in a particular peer-to-peer (P2P) group communication session using one or more wireless communication standards, e.g., IEEE 802.11. The image content information may be played back at both a display of the host device and displays at each of the client devices. More specifically, each of the participating client devices processes the received image content information for presentation on its display screen and audio equipment. In addition, the host device may perform at least some processing of the image content information for presentation on the client devices.

The host device and one or more of the client devices may be either wireless devices or wired devices with wireless communication capabilities. In one example, as wired devices, one or more of the host device and the client devices may comprise televisions, monitors, projectors, set-top boxes, DVD or Blu-Ray Disc players, digital video recorders, laptop or desktop personal computers, video game consoles, and the like, that include wireless communication capabilities. In another example, as wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called “smart” phones and “smart” pads or tablets, or other types of wireless communication devices (WCDs).

In some examples, at least one of the client devices may comprise a wearable display device. A wearable display device may comprise any type of wired or wireless display device that is worn on a user’s body. As an example, the wearable display device may comprise a wireless head-worn display or wireless head-mounted display (WHMD) that is worn on a user’s head in order to position one or more display screens in front of the user’s eyes. The host device is typically responsible for performing at least some processing of the image content information for display on the wearable display device. The wearable display device is typically responsible for preparing the image content information for display at the wearable display device.

Summary

In general, this disclosure relates to techniques for correcting for camera translation (e.g., moving the wearable display device towards or away from a virtual object) while a host device generates image content information and transmits the image content information to a wearable display device. A host device may have per-pixel depth data that may be used to correct for camera translation. However, in split-rendered systems (e.g., where both the host device and the wearable display device process image data such as in gaming virtual reality (VR), augmented reality (AR) applications, etc.), transmitting per-pixel depth data from the host device to the wearable display device, which is an example of a client device, may consume significant bandwidth.
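The correction the host's per-pixel depth enables can be sketched as a depth-based reprojection: back-project each pixel to a 3-D point using its depth, apply the head translation, and project back. The function below is a minimal illustrative sketch, not the patent's method; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the pinhole camera model are assumptions for the example.

```python
import numpy as np

def reproject_with_depth(depth, fx, fy, cx, cy, t):
    """Compute where each pixel lands after a camera translation t = (tx, ty, tz).

    depth: (H, W) array of per-pixel depths (assumed positive).
    Returns the new (u, v) image coordinates for every source pixel.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project pixels to 3-D camera-space points.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    # Camera moved by t, so points move by -t in camera space.
    x, y, z = x - t[0], y - t[1], z - t[2]
    # Project back to the image plane (z assumed to stay positive).
    u2 = fx * x / z + cx
    v2 = fy * y / z + cy
    return u2, v2
```

With zero translation the mapping is the identity, which is a quick sanity check; note that this form needs the full per-pixel depth map, which is exactly the bandwidth cost the disclosure seeks to avoid.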

The techniques of this disclosure are directed to systems that permit time and space warping to correct for a translation of head position and scene motion from their state in the last fully rendered frame without necessarily always transmitting/receiving per-pixel depth data. For example, a host device of a split-rendered system may generate quantized depth data for each depth layer of a scene and may generate masks indicating which pixels are in which layer of the scene. In another example, the host device may assign a single depth value for all pixels of a scene.
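The layer-and-mask scheme described above can be sketched as follows. The patent does not fix a particular quantization rule, so the uniform binning and per-layer mean used here are illustrative assumptions; the point is that the host sends one depth value plus one binary mask per layer instead of a full depth map.

```python
import numpy as np

def quantize_depth_layers(depth, num_layers):
    """Split a per-pixel depth map into num_layers depth layers.

    Returns a list of (weighted_depth, mask) pairs: one representative
    depth value per layer and a binary mask of the pixels in that layer.
    Uniform bins and the per-layer mean are assumptions for this sketch.
    """
    edges = np.linspace(depth.min(), depth.max(), num_layers + 1)
    # Bin index per pixel; clip so the maximum depth falls in the last layer.
    idx = np.clip(np.digitize(depth, edges) - 1, 0, num_layers - 1)
    layers = []
    for i in range(num_layers):
        mask = idx == i
        if mask.any():
            layers.append((float(depth[mask].mean()), mask))
    return layers
```

Transmitting `num_layers` scalar depths plus 1-bit-per-pixel masks is far cheaper than a full-precision per-pixel depth buffer, which is the bandwidth saving the disclosure targets.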

In one example, this disclosure is directed to a wearable display device connected to a host device. The wearable display device includes one or more sensors configured to generate eye pose data indicating a user’s field of view, one or more displays, and one or more processors implemented in circuitry. The one or more processors are configured to output a representation of the eye pose data to the host device and extract one or more depth values for a rendered frame from depth data output by the host device. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The rendered frame is generated using the eye pose data. The one or more processors are further configured to modify one or more pixel values of the rendered frame using the one or more depth values to generate a warped rendered frame and output, for display at the one or more displays, the warped rendered frame.

In another example, this disclosure is directed to a method including outputting, by a processor implemented in circuitry, a representation of eye pose data indicating a user’s field of view to a host device and extracting, by the processor, one or more depth values for a rendered frame from depth data output by the host device. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The rendered frame is generated using the eye pose data. The method further includes modifying, by the processor, one or more pixel values of the rendered frame using the one or more depth values to generate a warped rendered frame and outputting, by the processor, for display at one or more displays, the warped rendered frame.
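The "modify one or more pixel values" step can be illustrated with a toy per-layer shift: pixels in one depth layer are displaced together by an offset that would, in practice, be derived from that layer's extracted depth value and the head-pose change. This is a simplified stand-in for the warp, not the patent's actual pixel math; the integer shift and zero fill are assumptions.

```python
import numpy as np

def warp_layer(frame, mask, shift):
    """Shift the pixels of one depth layer by an integer (dy, dx) offset.

    frame: (H, W) array of pixel values; mask: boolean layer membership.
    Pixels shifted outside the frame are dropped; uncovered pixels are 0.
    """
    out = np.zeros_like(frame)
    dy, dx = shift
    ys, xs = np.where(mask)
    ty, tx = ys + dy, xs + dx
    # Keep only destinations that remain inside the frame.
    ok = (ty >= 0) & (ty < frame.shape[0]) & (tx >= 0) & (tx < frame.shape[1])
    out[ty[ok], tx[ok]] = frame[ys[ok], xs[ok]]
    return out
```

A real implementation would compose the per-layer results back-to-front so nearer layers occlude farther ones, and would fill disoccluded pixels rather than leaving them at zero.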

In a further example, this disclosure is directed to a non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause a processor of a wearable display device to output a representation of eye pose data indicating a user’s field of view to a host device and extract one or more depth values for a rendered frame from depth data output by the host device. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The rendered frame is generated using the eye pose data. The instructions further cause the processor to modify one or more pixel values of the rendered frame using the one or more depth values to generate a warped rendered frame and output, for display at one or more displays of the wearable display device, the warped rendered frame.

In one example, this disclosure is directed to a wearable display device connected to a host device. The wearable display device includes means for outputting a representation of eye pose data indicating a user’s field of view to a host device and means for extracting one or more depth values for a rendered frame from depth data output by the host device. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The rendered frame is generated using the eye pose data. The wearable display device further includes means for modifying one or more pixel values of the rendered frame using the one or more depth values to generate a warped rendered frame and means for outputting, for display at one or more displays, the warped rendered frame.

In another example, this disclosure is directed to a host device connected to a wearable display device, the host device comprising one or more processors implemented in circuitry that are configured to generate image content information for a rendered frame based on eye pose data received from the wearable display device and generate one or more depth values using per-pixel depth data for the rendered frame. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The one or more processors are further configured to send, to the wearable display device, the image content information for the rendered frame and depth data indicating the one or more depth values.
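On the host side, collapsing per-pixel depth to a single "weighted depth value" might look like the sketch below. The patent leaves the weighting unspecified, so the centre-weighted Gaussian here is purely an illustrative assumption (warp error is typically most visible near the centre of gaze); a uniform mean would also satisfy the claim language.

```python
import numpy as np

def summarize_depth(depth, weights=None):
    """Collapse a per-pixel depth map to one weighted depth value.

    depth: (H, W) array. If no weights are given, a hypothetical
    Gaussian falloff from the frame centre is used as the weighting.
    """
    h, w = depth.shape
    if weights is None:
        v, u = np.meshgrid(np.arange(h) - h / 2,
                           np.arange(w) - w / 2, indexing="ij")
        sigma = min(h, w) / 4
        weights = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    # Weighted mean: reduces to the plain depth for a uniform scene.
    return float((depth * weights).sum() / weights.sum())
```

The client then receives this one scalar (or one per layer, in the layered variant) instead of a full depth buffer.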

In another example, this disclosure is directed to a method including generating, by one or more processors implemented in circuitry, image content information for a rendered frame based on eye pose data received from a wearable display device and generating, by the one or more processors, one or more depth values using per-pixel depth data for the rendered frame. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The method further includes sending, by the one or more processors, to the wearable display device, the image content information for the rendered frame and depth data indicating the one or more depth values.

In a further example, this disclosure is directed to a non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause a processor of a host device connected to a wearable display device to generate image content information for a rendered frame based on eye pose data received from the wearable display device and generate one or more depth values using per-pixel depth data for the rendered frame. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The instructions further cause the processor to send, to the wearable display device, the image content information for the rendered frame and depth data indicating the one or more depth values.

In another example, this disclosure is directed to a host device connected to a wearable display device, the host device including means for generating image content information for a rendered frame based on eye pose data received from the wearable display device and means for generating one or more depth values using per-pixel depth data for the rendered frame. Each one of the one or more depth values indicates a weighted depth value for a plurality of pixels of the rendered frame. The host device further includes means for sending, to the wearable display device, the image content information for the rendered frame and depth data indicating the one or more depth values.
