Samsung Patent | Method for Supporting VR Content Display in Communication System

Patent: Method for Supporting VR Content Display in Communication System

Publication Number: 20190045222

Publication Date: 2019-02-07

Applicants: Samsung

Abstract

The present disclosure provides a method for displaying a 360 degree image encoded in units of tiles, each defined as a part of the 360 degree image according to the position it occupies within the image. The method comprises: requesting at least one tile corresponding to a view region to be displayed according to the viewpoint of a user; receiving the at least one tile corresponding to the view region and decoding the at least one received tile; and rendering the at least one decoded tile.

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application is a U.S. National Stage application under 35 U.S.C. .sctn. 371 of an International application number PCT/KR2017/001494, filed on Feb. 10, 2017, which is based on and claims priority to a Korean patent application number 10-2016-0016508, filed on Feb. 12, 2016, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0002] The present disclosure relates to displaying virtual reality (VR) content, and relates to a head mount display (HMD), mobile VR, TV VR, a 360-degree image, a viewpoint, and a point of interest.

BACKGROUND ART

[0003] Consumer media content is continuously evolving from black and white content to color content, high definition (HD) content, ultra high definition (UHD) content, and the like. Recently, standardization of high dynamic range (HDR) content has progressed, and standards for HDR content have been distributed. Meanwhile, VR content remained in an incubation stage until VR devices became widely distributed.

[0004] FIG. 1 illustrates a processing flow for providing VR media content.

[0005] The processing flow for VR media content may follow the procedure illustrated in FIG. 1.

[0006] For example, an image captured 100 by a video camera may be mastered 102 by a mastering device, encoded/multiplexed 104 by an encoding/multiplexing (Mux) device, and distributed 106 as VR content in various media formats.

[0007] The distributed VR content may be de-multiplexed (Demux) 108 on a receiving side, transmitted 110 to a display device, and VR displayed 112.

[0008] VR content has characteristics significantly different from conventional 2-dimensional (2D) or 3-dimensional (3D) content. VR content provides a user with an immersive experience by allowing the user to view an image in all 360 degrees. However, allowing the user to freely view the image at 360 degrees means that the content provider gives up some control, in the aspect of artistic depiction, over the image provided to the user.

[0009] One of the main reasons why the spread of VR content and devices has been delayed is that the display quality of VR content is not better than that of non-interactively displayed fixed-view content. That is, the on-screen quality of current VR technology, as consumers experience it, is poorer than that of fixed-view content.

[0010] While industry recommendations for next-generation content (for example, UHD resolution, HDR, etc.) are defined with respect to fixed-view content, VR content is restricted by basic issues such as picture resolution, an insufficient frame rate, and flickering.

[0011] Currently, the quality of VR content is limited by resolution. Users are accustomed to an HD resolution of at least 2K. Accordingly, users expect a higher quality (more immersive) resolution for VR content.

[0012] However, a current VR system is limited by a decoder which can support decoding only up to 4K (4K UHD, that is, a resolution of 2160p).

DETAILED DESCRIPTION OF THE INVENTION

Technical Problem

[0013] The present disclosure provides a scheme of providing a high quality display to a user using a stitched 360-degree image with a higher resolution and an improved quality.

[0014] The present disclosure provides a scheme of resolving a view region display delay due to feedback input according to a change in a user’s viewpoint.

Technical Solution

[0015] In accordance with an aspect of the present disclosure, a method of displaying a 360 degree image encoded in the unit of tiles defined as part of the 360 degree image according to a position within the 360 degree image is provided. The method comprises: requesting at least one tile corresponding to a view region to be displayed according to a user’s viewpoint; receiving the at least one tile corresponding to the view region and decoding the at least one received tile; and rendering the at least one decoded tile.

[0016] In accordance with another aspect of the present disclosure, an apparatus for displaying a 360 degree image encoded in the unit of tiles defined as part of the 360-degree image according to a position within the 360 degree image is provided. The apparatus comprises: a controller configured to request at least one tile corresponding to a view region to be displayed according to a user’s viewpoint and receive the at least one tile corresponding to the view region; a decoder configured to decode the at least one received tile; and a display unit configured to render the at least one decoded tile.

[0017] In accordance with another aspect of the present disclosure, a method of transmitting a 360 degree image is provided. The method comprises: encoding the 360 degree image in the unit of tiles defined as part of the 360 degree image according to a position within the 360 degree image; receiving a request including an index indicating at least one tile; and transmitting an encoded image of a tile corresponding to the index included in the request, wherein the index included in the request indicates a tile corresponding to a view region to be displayed according to a user’s viewpoint.

[0018] In accordance with another aspect of the present disclosure, an apparatus for transmitting a 360 degree image is provided. The apparatus comprises: an encoder configured to encode a 360 degree image encoded in the unit of tiles defined as part of the 360 degree image according to a position within the 360 degree image; and a controller configured to receive a request including an index indicating at least one tile and transmit an encoded image of a tile corresponding to the index included in the request, wherein the index included in the request indicates a tile corresponding to a view region to be displayed according to a user’s viewpoint.

Advantageous Effects

[0019] Even though a display device according to the present disclosure performs decoding of the same complexity (corresponding to 4K) as for fixed-view content, the image actually rendered to the user has a significantly higher resolution and quality.

BRIEF DESCRIPTION OF DRAWINGS

[0020] FIG. 1 illustrates a processing flow for providing VR media content;

[0021] FIG. 2 illustrates a VR view region having a resolution of 1 K within a stitched 360 degree image of a 4 K resolution;

[0022] FIG. 3 illustrates a VR view region having a resolution of 4 K within a stitched 360 degree image of a resolution of 8 K;

[0023] FIG. 4 illustrates tiles fragmented based on a position within a 360 degree image;

[0024] FIG. 5 illustrates all tiles that make up a 360 degree image and tiles (sub-image) fetched by a device based on a user’s viewpoint;

[0025] FIG. 6 illustrates the case in which a view region after a change in a user’s viewpoint is located outside a sub-image;

[0026] FIG. 7 illustrates a uniform safeguard and a safeguard determined using viewpoint movement information;

[0027] FIG. 8 illustrates a method by which a display device displays VR content according to the present disclosure;

[0028] FIG. 9 illustrates a configuration of a display device according to the present disclosure;

[0029] FIG. 10 illustrates a method by which a content server provides VR content according to the present disclosure; and

[0030] FIG. 11 illustrates an apparatus configuration of a content server according to the present disclosure.

MODE FOR CARRYING OUT THE INVENTION

[0031] Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description of the present disclosure, a detailed description of known configurations or functions incorporated herein will be omitted when it is determined that the detailed description may make the subject matter of the present disclosure unclear. The terms as described below are defined in consideration of the functions in the embodiments, and the meaning of the terms may vary according to the intention of a user or operator, convention, or the like. Therefore, the definitions of the terms should be made based on the content throughout the specification.

[0032] FIG. 2 illustrates a VR view region having a 1 K resolution within a stitched 360-degree image of a 4 K resolution.

[0033] A maximum resolution of 4K is used for one stitched 360-degree image 200. Here, a stitched 360-degree image means a 360-degree image produced by stitching (joining) multiple camera images. When reproducing VR content, the VR system decodes the entire 360-degree video image 200 at a resolution of 4K regardless of which part (or view region) 202 of the image the user watches. At this time, the VR content actually displayed on the display device 210 corresponds to only a part 202 of the 4K 360-degree image 200. Accordingly, in this VR system, the actual view region 202 rendered to the user has a low resolution of approximately 1K.

[0034] Accordingly, a scheme according to the present disclosure may use a stitched 360-degree image having a higher resolution (for example, 8K, 10K, 16K, or other resolutions) instead of limiting the resolution of the 360-degree image to 4K. Further, a decoder according to the present disclosure may decode only the region corresponding to the user's viewpoint (the region of the 360-degree image displayed to the user, hereinafter referred to as the view region) and show it to the user. Accordingly, the view region may have a resolution corresponding to the maximum decoding capability of the display device, and the user consuming VR content may experience higher quality VR content.
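The proportions above can be sketched with simple arithmetic. The following is an illustrative estimate for an equirectangular layout, not a formula from the patent; the function name and the default field of view are assumptions.

```python
def view_resolution(image_w, image_h, fov_h_deg=90.0, fov_v_deg=90.0):
    """Approximate pixel size of the view region inside an equirectangular
    360 degree image: the horizontal FOV spans a fraction of 360 degrees,
    the vertical FOV a fraction of 180 degrees."""
    return (round(image_w * fov_h_deg / 360.0),
            round(image_h * fov_v_deg / 180.0))

# A 4K (3840x2160) stitched image leaves roughly a 1K-wide view,
# consistent with the ~1K view region described for FIG. 2.
print(view_resolution(3840, 2160))   # (960, 1080)
```

The exact ratio between the stitched resolution and the view resolution depends on the assumed field of view; the figures in the patent (1K within 4K, 4K within 8K) imply different viewing angles.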

[0035] In order to enable viewpoint-dependent video encoding and rendering, two tasks related to the VR experience must be addressed. First, in order for a terminating device that consumes VR content to receive, decode, and render only the view region corresponding to the viewpoint, the video should be encoded in units of tiles fragmented based on location within the stitched image and delivered to the terminating device. Second, since the user's viewpoint may change continuously (for example, rotate) according to the user's intention, location-based delivery of the view region may cause a delay due to the feedback-loop response, and this delay should be alleviated.

[0036] Prior to the description of operation, construable examples of some terms used herein will be first described below. However, it is noted that the terms are not limited to the examples of the construable meanings which are proposed below.

[0037] The display device is a device that outputs VR content described in the present disclosure and may be referred to as display hardware, a processing unit, a user device, a UE, a mobile station (MS), a mobile equipment (ME), a device, a terminal, or the like.

[0038] VR content may be a dynamic image such as a video or a still image such as a photo. In the present disclosure, a 360 degree image is described as an example of VR content. The 360 degree image is an image photographed by a stitching camera and stitched together, and may provide a view in all 360 degrees as the user changes location or orientation.

[0039] The VR system means a general environment for supporting VR content consumption by the display device and may be referred to as a VR ecosystem. A fundamental aspect of VR is a system which may monitor the user. The system allows the user to use a kind of controller that provides feedback input to a content display device or a processing unit. The system may control the content in accordance with the feedback input and enable interaction with the user.

[0040] The system may include, for example, at least one of approximate configurations (or functions) shown in the following Table.

TABLE 1

  Configuration (or function)  Description
  Display hardware             Display device or processing unit. For example, HMD, wireless VR, mobile VR, TVs, and cave automatic virtual environments (CAVEs)
  User controller              Provides feedback input to the VR system (display hardware). For example, peripherals and haptics
  Content capture device       For example, camera, video stitching (for joining several videos)
  Content studio               For example, games, live, cinema, news, documentary, etc.
  Industrial application       For example, education, health care, real estate, architecture, travel, etc.
  Production tool & service    For example, 3D engine and power processing
  App stores                   Provide apps for VR media content

[0041] A user's feedback input through a controller may be divided into 1) orientation tracking and 2) position tracking. Orientation tracking corresponds to tracking rotation (that is, the direction of the user's viewpoint) by the controller and has 3 degrees of freedom (DOF). Position tracking is tracking translation (that is, the user's movement) by the controller and also has 3 DOF. Accordingly, the maximum available DOF when the user experiences VR content is 6.

[0042] For example, encoding and transmission of the 360 degree image in the unit of tiles will be described.

[0043] FIG. 3 illustrates a VR view region having a resolution of 4 K within a stitched 360 degree image of a resolution of 8 K.

[0044] Referring to FIG. 3, a stitched 360 degree image 300 may have a resolution of 8K (8K Full UHD, that is, a resolution of 4320p). The 360 degree image may be encoded and delivered in units of tiles (for example, 302, 304). At this time, a region (that is, the view region) 312 rendered by the display device 310 is the region corresponding to the user's viewpoint. The display device may decode and render only the view region 312, and at this time the view region may have a resolution of 4K. Accordingly, the display device may use its maximum decoding capability for rendering the view region, and the user may experience high quality VR content.

[0045] FIG. 4 illustrates tiles fragmented based on a position within a 360 degree image.

[0046] In the present disclosure, a tile is a part divided from the 360 degree image 400 (that is, a unit region of encoding) in order to encode and transmit one 360 degree image 400. The 360 degree image may include one or more tiles (for example, 402, 404) and may be encoded and transmitted in units of at least one tile. The tiles 402 and 404 may be defined according to the positions they occupy within the 360 degree image 400.
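A minimal sketch of mapping a view region onto such position-defined tiles is shown below. The grid layout (an 8K image split into 1920x1080 tiles), the function name, and the coordinate convention are assumptions for illustration, not specifics from the patent.

```python
def tiles_for_view(center_x, center_y, view_w, view_h,
                   image_w=7680, image_h=4320, tile_w=1920, tile_h=1080):
    """Return the (column, row) indices of all tiles that the view region
    overlaps. Columns wrap around because the 360 degree image is
    horizontally continuous; rows are clamped at the top and bottom."""
    cols = image_w // tile_w
    left = center_x - view_w // 2
    right = center_x + view_w // 2 - 1
    top = max(0, center_y - view_h // 2)
    bottom = min(image_h - 1, center_y + view_h // 2 - 1)
    tiles = set()
    for c in range(left // tile_w, right // tile_w + 1):
        for r in range(top // tile_h, bottom // tile_h + 1):
            tiles.add((c % cols, r))
    return tiles
```

For example, a 4K view centered in the 8K image covers a 2x2 block of tiles, and a view straddling the horizontal seam wraps to tiles on both edges of the stitched image.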

[0047] One tile may or may not overlap another tile within the 360 degree image. Further, each tile is an independent sub-video or sub-image. If one tile referenced a past frame of another tile for prediction (that is, if a tile were a dependent sub-image), a problem would occur whenever an I frame (intra-frame: a frame containing all information required for its own rendering) that has not been received by the decoder is needed.

[0048] If the display device detects a change in a viewpoint of a viewer, the display device may transmit a parameter (for example, position information, index information, etc. of a changed viewpoint) for the changed viewpoint or a tile related parameter corresponding to the changed viewpoint to a server. Further, the display device may receive a tile corresponding to the changed viewpoint from the server, and decode and render the received tile.

[0049] Tile related metadata available to the decoder may include, for example, the following information (parameter).

TABLE 2

  Tile metadata
  - Number of tiles within the 360 degree image
  - Tile position (for example, upper left corner or lower right corner)
  - Tile index for indexing requests from the client to the server (it may be transmitted within a hint track whenever a change in the user's viewpoint is newly updated)
  - Information for distinguishing between low resolution and high resolution
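The metadata fields in Table 2 could be carried in a structure such as the following. The field names and types are illustrative assumptions; the patent does not define a concrete syntax.

```python
from dataclasses import dataclass

@dataclass
class TileMetadata:
    """Illustrative container for the tile metadata of Table 2."""
    num_tiles: int        # number of tiles within the 360 degree image
    position: tuple       # e.g. (upper_left_x, upper_left_y, lower_right_x, lower_right_y)
    index: int            # tile index used in client-to-server requests
    high_resolution: bool # distinguishes low- and high-resolution variants
```

A client could echo the `index` field back to the server in its tile request whenever the user's viewpoint changes, as the hint-track note in Table 2 suggests.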

[0050] Subsequently, a scheme for mitigating a delay according to view region position based delivery will be described.

[0051] The present disclosure proposes a scheme using a safeguard in fetching of tiles by the decoder.

[0052] When the user's viewpoint is static, the position of the view region does not change, so no delay problem arises and no safety zone for tile fetching is required. However, the user's viewpoint may change unpredictably in both position and orientation. Accordingly, as a safety zone (that is, a safeguard) to prepare for a rapid change in the viewpoint, tiles surrounding or enclosing the user's current viewpoint may be fetched (that is, requested and received) by the display device.
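The simplest safeguard described later (FIG. 7A) is a uniform ring of tiles around the view region. A minimal sketch, assuming tiles are addressed by (column, row) indices on a wrapping grid:

```python
def safeguard_tiles(view_tiles, cols, rows, margin=1):
    """Expand the set of view-region tiles by a uniform ring of 'margin'
    neighbouring tiles in every direction. Columns wrap horizontally
    (the image is 360 degrees); rows are clamped at the image edges."""
    fetched = set()
    for c, r in view_tiles:
        for dc in range(-margin, margin + 1):
            for dr in range(-margin, margin + 1):
                nr = r + dr
                if 0 <= nr < rows:
                    fetched.add(((c + dc) % cols, nr))
    return fetched
```

With `margin=1`, a single view tile grows to a 3x3 block of fetched tiles, so a small viewpoint movement in any direction still lands on already-fetched content.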

[0053] FIG. 5 illustrates all tiles that make up a 360 degree image and tiles (a sub-image) fetched by the device based on the user’s viewpoint.

[0054] Referring to FIG. 5, a sub-image 510 fetched for a view region 500 corresponding to a user’s viewpoint 502 at any time is illustrated.

[0055] The user may view only the part corresponding to the view region 500 through a VR display device. However, the VR display device may request, receive, and decode all tiles within the sub-image 510 as the safeguard (that is, the region corresponding to the sub-image 510 may be fetched as the safeguard). At this time, the total number of tiles decoded (that is, fetched) by the VR display device may be equal to or smaller than the number corresponding to the maximum resolution supported by the decoder. The received tiles need not be adjacent to each other within the 360 degree image.
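The constraint that the fetched tiles must fit the decoder's maximum supported resolution is a simple pixel-budget check. This sketch assumes the 4K decoder limit and 1920x1080 tiles used as examples elsewhere in the disclosure:

```python
def within_decoder_budget(tiles, tile_w=1920, tile_h=1080,
                          max_pixels=3840 * 2160):
    """Check that the total pixel count of the fetched tile set does not
    exceed the decoder's maximum supported resolution (4K here, as in
    the patent's example)."""
    return len(tiles) * tile_w * tile_h <= max_pixels
```

Under these assumptions, at most four full-HD tiles (view region plus safeguard combined) can be decoded at once; a larger safeguard would require smaller or lower-resolution tiles.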

[0056] In FIG. 5, the safeguard is the region outside the view region 500 and inside the fetched sub-image 510. The most basic algorithm for determining the safeguard region is to define a uniform safeguard region surrounding the view region 500. However, a uniform safeguard region may not effectively handle the case in which the user moves the head quickly (that is, a rapid change in viewpoint).

[0057] FIG. 6 illustrates the case in which a view region after a change in a user’s viewpoint is located outside a fetched sub-image.

[0058] Referring to FIG. 6, it can be seen that a view region 600 is located outside a region of a fetched sub-image 610 due to a rapid change in a user’s viewpoint 602.

[0059] Since the rate of change of the viewpoint (according to the user's intention) may be higher than the tile refresh rate (that is, the rate of the procedure in which the device requests, receives, decodes, and renders the tiles of the view region corresponding to the changed viewpoint in response to the user's feedback input), the display device may be unable to display the image region of the view region 600 that lies outside the fetched sub-image 610.

[0060] In order to solve this problem, the present disclosure proposes a method using a low resolution backup background image and a method using prediction based viewpoint change. The backup background image may be an image of all or part of the 360 degree image. In order to reduce the bit rate consumed in transmitting the backup background image, it may have a lower spatial resolution or a lower frame rate than the image corresponding to the tiles of the view region. Alternatively, the backup background image may be compressed using a higher quantization parameter than the image corresponding to the tiles of the view region.

[0061] When the display device uses the low resolution backup background image, the backup image for the entire stitched image may be received and rendered during the whole reproduction procedure. The display device may avoid a display failure by outputting the rendered low resolution backup image in response to a rapid viewpoint change. However, for the part corresponding to the moved viewpoint, a high resolution tile cannot yet be fetched or rendered.
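The fallback logic can be reduced to a single lookup per tile. In this sketch, both the decoded high-resolution tiles and the always-available backup background image are represented as dictionaries keyed by tile id; these names and the dictionary representation are assumptions for illustration.

```python
def pixels_for_tile(tile_id, decoded_tiles, backup_tiles):
    """Prefer the freshly decoded high-resolution tile; if it was not
    fetched in time (e.g. after a rapid viewpoint change), fall back to
    the corresponding region of the low-resolution backup background
    image, which the device keeps decoded for the whole reproduction."""
    return decoded_tiles.get(tile_id, backup_tiles[tile_id])
```

The display thus never shows a blank region: in the worst case the user briefly sees the low-resolution backup content until the high-resolution tiles for the new viewpoint arrive.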

[0062] When the display device uses prediction based viewpoint change, it may determine the safeguard region using recent viewpoint movement information (or a function of that information) instead of a uniform safeguard region surrounding the viewpoint. In this case, the sub-image to be fetched may be determined as a direct function of the recent viewpoint movement information and the current viewpoint. Since a user's typical panning (horizontal rotation of the viewpoint) is not so irregular as to move randomly in many directions (and is, moreover, limited by the rotation speed of the user's neck), using viewpoint movement information may reduce the probability that a region not fetched in advance appears in the current view region (that is, the probability of failing to output an image within the view region).
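One possible form of such a "direct function of recent viewpoint movement and the current viewpoint" is sketched below: the fetched region is extended along the movement direction rather than uniformly. The velocity representation, the `lookahead` parameter, and the interpolation scheme are all illustrative assumptions, not the patent's method.

```python
def predicted_safeguard(view_tiles, velocity, cols, rows, lookahead=1.0):
    """Extend the fetched region along the direction of recent viewpoint
    movement. 'velocity' is the viewpoint speed in tiles per second as
    (columns, rows); 'lookahead' is the prediction horizon in seconds.
    Columns wrap horizontally; rows are clamped."""
    shift_c = round(velocity[0] * lookahead)
    shift_r = round(velocity[1] * lookahead)
    steps = max(abs(shift_c), abs(shift_r), 1)
    fetched = set(view_tiles)
    for c, r in view_tiles:
        # fetch every tile on the predicted path of this view tile
        for s in range(1, steps + 1):
            nr = r + round(shift_r * s / steps)
            if 0 <= nr < rows:
                fetched.add(((c + round(shift_c * s / steps)) % cols, nr))
    return fetched
```

For a viewpoint panning right at two tiles per second, the device prefetches the next two tiles in that direction instead of a symmetric ring, matching the flexible safeguard 730 of FIG. 7B.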

[0063] FIG. 7 illustrates a uniform safeguard and a safeguard determined using viewpoint movement information.

[0064] FIG. 7A illustrates a uniform safeguard determined based on a viewpoint. In this case, one safeguard region 710 surrounding a view region 700 may be fetched by a display device.

[0065] FIG. 7B illustrates a flexible safeguard determined based on viewpoint movement information. In this case, when a view region is changed in order of 720, 722, and 724 due to a change in a user’s viewpoint, the display device may determine a region 730 that reflects a change in the viewpoint as the safeguard through a viewpoint movement vector 726 indicating the change in the viewpoint. The viewpoint movement vector may be acquired by a device such as a controller included within the display device.

[0066] Optionally, the display device may further perform tile pre-fetching based on metadata about the user's points of interest.

[0067] Each frame and scene within content may have regions of higher interest than other regions within the stitched image. In order to render a region of high interest quickly, the region of interest may be identified by the display device through metadata and "pre-fetched". Further, as a method of preventing a "missing tile" in the view region corresponding to the user's viewpoint, not only may the tiles covering the region of interest be pre-fetched, but also the tiles on a path connecting regions of interest, or the tiles on a path connecting the current viewpoint to a region of interest.
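Enumerating the tiles on such a connecting path can be done with a simple line walk over the tile grid. The shortest-wrap arithmetic and rounding below are illustrative choices, not specified by the patent.

```python
def path_tiles(start, end, cols):
    """List the tiles on a straight path from 'start' to 'end' (given as
    (column, row) indices), taking the shorter way around the horizontal
    wrap of the 360 degree image."""
    (c0, r0), (c1, r1) = start, end
    dc = (c1 - c0 + cols // 2) % cols - cols // 2  # shortest signed column offset
    dr = r1 - r0
    steps = max(abs(dc), abs(dr), 1)
    return [((c0 + round(dc * s / steps)) % cols, r0 + round(dr * s / steps))
            for s in range(steps + 1)]
```

Pre-fetching `path_tiles(current_viewpoint_tile, poi_tile, cols)` keeps the view region covered even while the user's gaze travels from the current viewpoint to the point of interest.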

[0068] In addition, in order to prevent a missing tile in the view region, various combinations of the above methods may be applied.

[0069] FIG. 8 illustrates a method by which a display device displays VR content according to the present disclosure.

[0070] The display device may request transmission of the tiles in the view region corresponding to the user's viewpoint in step 800. The request may be transmitted to, for example, a VR content provider or a server. The VR content may be, for example, a 360 degree image, and the 360 degree image may be encoded and delivered in units of tiles. Optionally, the 360 degree image may be encoded at a resolution of 8K or higher, and the tiles corresponding to the view region may be decoded at a resolution (for example, 4K) corresponding to the maximum capability of the decoder. Accordingly, the user may experience a high quality image in VR content.

[0071] The display device may receive at least one tile corresponding to the view region and decode the received tile in step 805.

[0072] The display device may render and display the decoded tiles on a display unit in step 810. Optionally, the tiles requested by the display device may include, in addition to the tiles corresponding to the view region, tiles within the sub-image region corresponding to the safeguard.

[0073] At this time, the display device may render only the tiles corresponding to the view region; the tiles corresponding to the safeguard may be kept as tiles to be rendered when the view region changes due to a change in the viewpoint. Optionally, the sub-image region may be determined by a function of a vector indicating movement of the viewpoint. Optionally, the display device may perform an operation of receiving a change in the viewpoint and an operation of rendering the tiles within the view region corresponding to the changed viewpoint. Optionally, the display device may further perform an operation of receiving and decoding a backup image of all or part of the 360 degree image, and, when a tile within the view region corresponding to the changed viewpoint does not exist among the at least one decoded tile, that tile may be rendered using the backup image. Optionally, the display device may further perform an operation of requesting, receiving, and decoding at least one of a tile corresponding to a point of interest, a tile on a path connecting the point of interest to another point of interest, and a tile on a path connecting the point of interest to the viewpoint.

[0074] FIG. 9 illustrates a configuration of a display device according to the present disclosure.

[0075] A display device 900 may include at least one of a controller 902, a display unit 906, and a decoder 904. The display device 900 may display a 360 degree image encoded and transmitted in the unit of tiles.

[0076] The controller 902 may request, from the content server, at least one tile corresponding to the view region to be displayed in accordance with the user's viewpoint, and receive the at least one tile corresponding to the view region.

[0077] The decoder 904 may decode the at least one received tile. Optionally, the 360 degree image may be an image encoded at a resolution of 8K or higher, and the tiles corresponding to the view region may be decoded at the maximum resolution supported by the decoder. For example, the number of tiles included in the view region may correspond to the maximum resolution supported by the decoder. The controller 902 and the decoder 904 are not necessarily implemented as separate devices, but may be implemented as one module such as a single chip.

[0078] The display unit 906 may render and output the at least one decoded tile.

[0079] FIG. 10 illustrates a method by which a content server provides VR content according to the present disclosure.

[0080] The content server may encode a 360 degree image in the unit of tiles in step 1000.

[0081] The content server may receive a request including an index indicating at least one tile from a display device in step 1005.

[0082] The content server may transmit the encoded image of the tile corresponding to the index included in the request to the display device in step 1010. At this time, the index included in the request may indicate a tile within a displayed view region corresponding to a viewpoint of the user of the display device.
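The server side of steps 1005 and 1010 amounts to an index lookup over the pre-encoded tiles. In this sketch the tile store is a dictionary from tile index to encoded bytes; the names and the dictionary representation are illustrative assumptions.

```python
def handle_tile_request(encoded_tiles, requested_indices):
    """Steps 1005-1010 on the server side: look up the encoded image of
    each requested tile index and return the available ones.
    'encoded_tiles' maps tile index -> encoded bytes and stands in for
    the server's tile store."""
    return {i: encoded_tiles[i] for i in requested_indices if i in encoded_tiles}
```

Because each tile was encoded as an independent sub-image (paragraph [0047]), the server can serve any subset of indices without re-encoding or supplying reference frames from other tiles.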

[0083] FIG. 11 illustrates an apparatus configuration of a content server according to the present disclosure.

[0084] A content server 1100 may include at least one of a controller 1102 and an encoder 1104. The content server 1100 may provide a 360-degree image as an example of VR content.

[0085] The encoder 1104 may encode the 360-degree image in the unit of tiles. The tile may be defined according to a position within the 360-degree image.

[0086] The controller 1102 may receive a request including an index indicating at least one tile and transmit the encoded image of the tile corresponding to the index included in the request to the display device. At this time, the index included in the request may indicate a tile corresponding to the view region to be displayed in accordance with the viewpoint of the user of the display device. The controller 1102 and the encoder 1104 are not necessarily implemented as separate devices, but may be implemented as one module such as a single chip.

[0087] It is noted that the diagrams illustrating the example of the view region within the image and the diagrams illustrating the configuration of the method and the apparatus in FIGS. 2 to 11 are not intended to limit the scope of the present disclosure. That is, it should not be construed that all components or operations shown in FIGS. 2 to 11 are essential for implementing the present disclosure; the present disclosure may be implemented with only some of the components without departing from its subject matter.

[0088] The above-described operations may be implemented by providing a memory device storing the corresponding program code in any constituent unit of a server or UE apparatus in a communication system. That is, the controller of the server or UE may perform the above-described operations by reading and executing the program code stored in the memory device by means of a processor or central processing unit (CPU).

[0089] The entity, the function, the base station, the load manager, various structural elements of the terminal, modules and the like may be operated by using a hardware circuit, e.g., a complementary metal oxide semiconductor based logic circuit, firmware, software, and/or a combination of hardware and the firmware and/or software embedded in a machine readable medium. As an example, various electric configurations and methods may be carried out by using electric circuits such as transistors, logic gates, and an application specific integrated circuit.

[0090] While the present disclosure has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the present disclosure. Therefore, the scope of the present disclosure should not be defined as being limited to the embodiments, but should be defined by the appended claims and equivalents thereof.
