United States Patent Application Publication
Pub. No.: US 2015/0245043 A1
Greenebaum et al.
Pub. Date: Aug. 27, 2015
(71) Applicant: APPLE INC., Cupertino, CA (US)
Inventors: Greenebaum et al., including Haitao Guo, Cupertino, CA (US)
(22) Filed: Feb. 25, 2015
(60) Provisional application No. 61/944,484, filed on Feb. 25, 2014; provisional application No. 61/946,633, filed on Feb. 28, 2014 (see the PRIORITY INFORMATION section for the full list)
Int. Cl.: H04N 19/44 (2006.01); CPC: H04N 19/44 (2014.11)
Abstract (only fragments are legible in the scan): display-side adaptive video processing implemented in or by a decoding/display pipeline associated with a target display panel, taking into account video content, display characteristics, and environmental conditions including but not limited to ambient lighting and viewer location, when rendering video for display in an ambient setting or environment.

[Drawing sheets 1 through 18 of 20: the figure images survive only as OCR residue; the recoverable content is summarized below.]

FIG. 1 (Sheet 1): video playback system: video source (100), metadata (114), encoded video stream (112), and a decoding/display pipeline in a system (120) with display panel (140), controls (160), and sensor(s) (150), located in an ambient environment.
FIG. 2 (Sheet 2): adaptive video processing in an example decoding/display pipeline: decode, noise/artifacts reduction, scaling and sharpening, frame rate conversion, and display management stages, driven by content characteristics, display characteristics, and ambient conditions (viewer position, sensors).
FIG. 3 (Sheet 3): system with a decoding/display pipeline (decoder, video pipe, frame rate conversion, display pipe) plus content analysis and display/environment information inputs.
FIG. 4 (Sheet 4): display pipe (vertical and horizontal scaling, color space conversion, gamut adjustment, tone mapping) and display backend (ambient, dynamic panel, white point, and spatio-temporal panel adjustments), with data collection and analysis of video content, display, and environment information.
FIG. 5 (Sheet 5, flowchart): receive and decode an encoded video stream for a target display panel (500); determine content characteristics (502); obtain display characteristics (504); obtain environment information (506); process the decoded video according to the content, environment, and display characteristics to generate video adapted to the display panel and current environment (508); display the processed video to the target display panel (510).
FIG. 6A (Sheet 6, graph): human perception range (256 levels) with respect to an example display panel, with a pedestal region due to display panel leakage; luminance axis marked 0, 50, 100, 200.
FIG. 6B (Sheet 7, graph): display range (levels 127, 255) against the human perception range (256 levels), with luminance values up to about 2000.
FIG. 7 (Sheet 8, diagram): perceptual color management in an ambient adaptive rendering system: source video content mapped to the measured display, then to adapted vision, according to display information (730) and environment information (740).
FIG. 8 (Sheet 9): decoding/display pipeline with decoder, video pipe, and HDR display pipe converting SDR input video for an HDR display, using content characteristics and display characteristics.
FIG. 9 (Sheet 10, flowchart): receive and decode an encoded SDR video stream for an HDR target display (900); perform SDR-to-HDR conversion techniques to adapt and expand the input SDR video content to HDR video content (902); display the HDR video content to the HDR display panel (904).
FIG. 10 (Sheet 11): video playback system in which a server-side encoding pipeline (linear mapping, HEVC encode) generates output video data adapted to a target display panel, using display information and environment information; a decoding/display pipeline decodes the stream for the display panel.
FIG. 11 (Sheet 12, flowchart): obtain video content for a target display panel (1100); obtain display information and/or environment information for the target display panel (1102); map the video content to a dynamic range for the target display panel according to the obtained information (1104); map the video content to a color gamut for the target display panel according to the obtained information (1106); encode the video content and send the encoded video content to a decoding/display pipeline (1108); the decoding/display pipeline decodes and displays the video content to the target display panel (1110).
FIG. 12 (Sheet 13, graph): input-output relationship of brightness adjustment with a scaling factor of 0.5; x axis: input signal, 0 to 1; y axis: output signal, 0 to 0.5.
FIG. 13 (Sheet 14, graph): input-output relationship of a non-linear brightness adjustment function; x axis: input signal, 0 to 1, with marked points T0, T1, T2; y axis: output signal.
FIG. 14 (Sheet 15, flowchart): display digital content to a target display panel (1400); obtain display information and/or environment information for the target display panel (1402); determine an adjustment to brightness level for the display according to the information (1404); scale the display brightness according to a non-linear function to adjust the brightness level of the display (1406).
FIG. 15 (Sheet 16): system on a chip (SOC) with memory (8800) and external interface(s) (8900).
FIG. 16 (Sheet 17): system (9000) including an SOC, PMU (9070), external memory (8800), and peripherals (9020).
FIG. 17 (Sheet 18): computer system (2900): processors (2910a through 2910n), I/O interface (2930), memory with program instructions, network interface, and input/output devices (keyboard, display(s), camera(s), sensor(s)), connected to a network (2085).
[Drawing sheets 19 and 20 of 20.]
FIG. 18 (Sheet 19): block diagram of a portable multifunction device (2100): memory (2102) holding an operating system (2126), communication module (2128), contact/motion module (2130), graphics module (2132), text input module (2134), GPS module (2135), and applications (2136) including telephone (2138), video conference, camera (2143), image management (2144), video & music player (2152), online video (2155), search, and browser (2147) modules, plus device/global internal state; external port(s) (2124), RF circuitry (2108), audio circuitry, peripherals interface (2118), speaker, microphone (2113), proximity sensor (2166), processor(s) (2120), orientation sensor(s), and an I/O subsystem (2106) with display controller, optical sensor(s) controller, and other input controller(s) driving the touch-sensitive display system (2112), optical sensor(s)/camera, and other input control devices.
FIG. 19 (Sheet 20): front view of the portable multifunction device (2100) showing the speaker, optical sensor(s), proximity sensor, microphone, accelerometer(s), and external port(s) (2124).

US 2015/0245043 A1

DISPLAY-SIDE ADAPTIVE VIDEO PROCESSING

PRIORITY INFORMATION

[0001] This application claims benefit of priority of U.S. Provisional Application Ser. No. 61/944,484 entitled "DISPLAY PROCESSING METHODS AND APPARATUS" filed Feb. 25, 2014, the content of which is incorporated by reference herein in its entirety, of U.S. Provisional Application Ser. No. 61/946,638 entitled "DISPLAY PROCESSING METHODS AND APPARATUS" filed Feb. 28, 2014, the content of which is incorporated by reference herein in its entirety, and of U.S. Provisional Application Ser. No. 61/946,633 entitled "ADAPTIVE METHODS AND APPARATUS" filed Feb. 28, 2014, the content of which is incorporated by reference herein in its entirety.

BACKGROUND

[0002] 1. Technical Field

[0003] This disclosure relates generally to digital video or image processing and display.

[0004] 2. Description of the Related Art

[0005] Various devices including but not limited to personal computer systems, desktop computer systems, laptop and notebook computers, tablet or pad devices, digital cameras, digital video recorders, and mobile phones or smart phones may include software and/or hardware that may implement video processing method(s). For example, a device may include an apparatus (e.g., an integrated circuit (IC), such as a system-on-a-chip (SOC), or a subsystem of an IC) that may receive and process digital video input from one or more sources and output the processed video frames according to one or more video processing methods. As another example, a software program may be implemented on a device that may receive and process digital video input from one or more sources according to one or more video processing methods and output the processed video frames to one or more destinations.

[0006] As an example, a video encoder may be implemented as an apparatus, or alternatively as a software program, in which digital video input is encoded or converted into another format according to a video encoding method, for example a compressed video format such as the H.264/Advanced Video Coding (AVC) format or the H.265 High Efficiency Video Coding (HEVC) format. As another example, a video decoder may be implemented as an apparatus, or alternatively as a software program, in which video in a compressed video format such as AVC or HEVC is received and decoded or converted into another (decompressed) format according to a
video decoding method, for example a display format used by a display device. The H.264/AVC standard is published by ITU-T in a document titled "ITU-T Recommendation H.264: Advanced video coding for generic audiovisual services". The H.265/HEVC standard is published by ITU-T in a document titled "ITU-T Recommendation H.265: High Efficiency Video Coding".

[0007] In many systems, an apparatus or software program may implement both a video encoder component and a video decoder component; such an apparatus or program is commonly referred to as a codec. Note that a codec may encode/decode both visual/image data and audio/sound data in a video stream.

[0008] In digital image and video processing, conventionally digital images (e.g., video or still images) are captured, rendered, and displayed at a limited dynamic range, referred to as standard dynamic range (SDR) imaging. In addition, images are conventionally rendered for display using a relatively narrow color gamut, referred to as standard color gamut (SCG) imaging. Extended or high dynamic range (HDR) imaging refers to technology and techniques that produce a wider range of luminance in electronic images (e.g., as displayed on display screens or devices) than is obtained using standard digital imaging technology and techniques (referred to as standard dynamic range, or SDR, imaging). Many new devices such as image sensors and displays support HDR imaging as well as wide color gamut (WCG) imaging. These devices may be referred to as HDR-enabled devices or simply HDR devices.

SUMMARY OF EMBODIMENTS

[0009] Various embodiments of methods and apparatus for adaptive processing, rendering, and display of digital image content, for example video frames of video streams, are described. Embodiments of video processing methods and apparatus are described that may adaptively render video data for display to a target display panel. The adaptive video processing methods may take into account various information including but not limited to video content, display characteristics, and environmental conditions including but not limited to ambient lighting and viewer location with respect to the display panel when processing and rendering video content for a target display panel in an ambient setting or environment. The adaptive video processing methods may use this information to adjust one or more video processing functions (e.g., noise/artifacts reduction, scaling, sharpening, tone mapping, color gamut mapping, frame rate conversion, color correction, white point and/or black point correction, color balance, etc.) as applied to the video data to render video for the target display panel that is adapted to the display panel according to the ambient environmental or viewing conditions.

[0010] In some embodiments, adaptive video processing for a target display panel may be implemented in or by a decoding/display module or pipeline associated with the target display panel. These embodiments may be referred to as display-side adaptive video processing systems. In at least some embodiments, a decoding/display pipeline may receive and decode an encoded video stream for a target display panel. The decoded video content may be analyzed to determine intra-frame and/or inter-frame characteristics of the video, for example luminance characteristics (e.g., dynamic range width), color characteristics (e.g., color range), inter-frame motion, specular highlights, contrast, bright and dark regions, and so on. One or more display characteristics may be obtained for the target display panel.
The display characteristics may include one or more of, but are not limited to, measured response, display format, display dynamic range, backlight level(s), white point, black leakage, local contrast enhancement or mapping, current display control settings, and so on. Information about the current environment of the target display panel may be obtained. For example, a device that includes the display panel may include one or more forward- and/or backward-facing sensors that may be used to collect data (e.g., lighting, viewer location, etc.) from the ambient environment; the collected data may be analyzed to determine one or more environment metrics. The decoding/display pipeline then processes the decoded video according to the content, display characteristics, and current environment information to generate video adapted to the target display panel and the current environment.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 illustrates adaptive video processing in a video playback system, according to some embodiments.

[0012] FIG. 2 illustrates adaptive video processing in an example decoding/display pipeline, according to some embodiments.

[0013] FIG. 3 illustrates an example decoding/display pipeline that performs adaptive video processing, according to some embodiments.

[0014] FIG. 4 illustrates an example display pipe and display backend that perform adaptive video processing, according to some embodiments.

[0015] FIG. 5 is a flowchart of a method for adaptive video processing in a decoding/display pipeline, according to some embodiments.

[0016] FIGS. 6A and 6B illustrate the human perceptual range with respect to an example display panel.

[0017] FIG. 7 graphically illustrates perceptual color management, according to some embodiments.

[0018] FIG. 8 illustrates an example decoding/display pipeline performing SDR-to-HDR conversion on SDR input video to generate display video content adapted to an HDR display, according to some embodiments.

[0019] FIG. 9 is a flowchart of a method for performing SDR-to-HDR conversion of video to generate display video content adapted to an HDR display, according to some embodiments.

[0020] FIG. 10 illustrates an example video playback system in which a server-side encoding pipeline generates output video data adapted to a target display panel, according to some embodiments.

[0021] FIG. 11 is a flowchart of a video playback method in which a server-side encoding pipeline generates output video data adapted to a target display panel, according to some embodiments.

[0022] FIG. 12 shows the input-output relationship of brightness adjustment with a scaling factor of 0.5.

[0023] FIG. 13 illustrates the input-output relationship of a non-linear brightness adjustment function, according to at least some embodiments.

[0024] FIG. 14 is a flowchart of a non-linear brightness adjustment method, according to at least some embodiments.

[0025] FIG. 15 is a block diagram of one embodiment of a system on a chip (SOC) that may be configured to implement aspects of the systems and methods described herein.

[0026] FIG. 16 is a block diagram of one embodiment of a system that may include one or more SOCs.

[0027] FIG. 17 illustrates an example computer system that may be configured to implement aspects of the systems and methods described herein, according to some embodiments.

[0028] FIG. 18 illustrates a block diagram of a portable multifunction device in accordance with some embodiments.

[0029] FIG.
19 depicts a portable multifunction device in accordance with some embodiments.

[0030] While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include," "including," and "includes" mean including, but not limited to.

[0031] Various units, circuits, or other components may be described as "configured to" perform a task or tasks. In such contexts, "configured to" is a broad recitation of structure generally meaning "having circuitry that" performs the task or tasks during operation. As such, the unit/circuit/component can be configured to perform the task even when the unit/circuit/component is not currently on. In general, the circuitry that forms the structure corresponding to "configured to" may include hardware circuits. Similarly, various units/circuits/components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase "configured to." Reciting a unit/circuit/component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. §112, paragraph six, interpretation for that unit/circuit/component.

DETAILED DESCRIPTION

[0032] Various embodiments of methods and apparatus for adaptive processing, rendering, and display of digital image content, for example video frames or video streams, are described. Embodiments of video processing methods and apparatus are described that may adaptively render video data for display to a target display panel. The adaptive video processing methods may take into account various information including but not limited to video content, display characteristics, and environmental conditions including but not limited to ambient lighting and viewer location with respect to the display panel when processing and rendering video content for a target display panel in an ambient setting or environment. The adaptive video processing methods may use this information to adjust one or more video processing functions (e.g., noise/artifacts reduction, scaling, sharpening, tone mapping, color gamut mapping, frame rate conversion, color correction, white point and/or black point correction, color balance, etc.) as applied to the video data to render video for the target display panel that is adapted to the display panel according to the ambient viewing conditions.

[0033] Conventionally, video processing algorithms have been designed for standard dynamic range (SDR) imaging. With the emergence of high dynamic range (HDR) imaging techniques, systems, and displays, a need for video processing techniques targeted at HDR imaging has emerged. For HDR video processing, there may be certain things that need to be done differently than with SDR video processing. For example, HDR video may require more aggressive noise reduction, may have more visible judder, and may require different sharpness and detail enhancement than SDR video.
Thus, embodiments of the adaptive video processing methods and apparatus as described herein may implement video processing techniques that are targeted at HDR imaging. In addition, embodiments may also support wide color gamut (WCG) imaging.

[0034] Generally defined, dynamic range is the ratio between the largest and smallest possible values of a changeable quantity, such as in signals like sound and light. In digital image processing, a high dynamic range (HDR) image is an image that is produced using an HDR imaging technique that produces a wider range of luminosity than is obtained using standard digital imaging techniques. For example, an HDR image may include more bits per channel (e.g., 10, 12, 14, or more bits per luma and chroma channel), or more bits for luminosity (the luma channel), than are used in conventional image processing (typically, 8 bits per channel, e.g., 8 bits for color/chroma and for luma). An image produced using standard digital imaging techniques may be referred to as having a standard dynamic range (SDR), and typically uses 8 bits per channel. Generally defined, tone mapping is a technique that maps one set of tonal image values (e.g., luma values from HDR image data) to another (e.g., to SDR image data). Tone mapping may be used, for example, to approximate the appearance of HDR images in a medium that has a more limited dynamic range (e.g., SDR). Tone mapping may generally be applied to luma image data.

[0035] In some embodiments of the video processing methods and apparatus as described herein, a global tone mapping (GTM) technique may be used in converting video content from one dynamic range to another. In a GTM technique, a global tone curve may be specified or determined for one or more video frames and used in converting video content from one dynamic range to another. In some embodiments, instead of or in addition to a GTM technique, a local tone mapping (LTM) technique may be used in converting video content from one dynamic range to another. In an LTM technique, an image or frame is divided into multiple regions, with a tone curve specified or determined for each region.

[0036] Generally defined, color gamut refers to a particular subset of colors, for example the subset of colors which can be accurately represented in a given circumstance, such as within a given color space (e.g., an RGB color space) or by a display device. Color gamut may also refer to the complete set of colors found within an image. A color gamut mapping technique may be used, for example, to convert the colors as represented in one color space to a color gamut used in another color space. A color gamut mapping technique (which may also be referred to as color or chroma mapping) may be applied to image data (generally to chroma image data), and may in some cases narrow or clip an image's color gamut, or alternatively may be used to correct or adjust the color gamut or range of an image during or after tone mapping.

[0037] In photometry, the SI unit for luminance is candela per square meter (cd/m²). Candela is the SI unit of luminous intensity. A non-SI term for the same unit is "NIT". The lux is the SI unit of illuminance and luminous emittance, measuring luminous flux (lumens) per unit area. The lux is equal to one lumen per square meter. The lumen is the SI derived unit of luminous flux, a measure of visible light emitted by a source.
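To make the global and local tone mapping techniques of paragraph [0035] concrete, the following is a minimal sketch of applying a sampled global tone curve to luma data, with a local variant that picks a per-region curve. The curve shape, region size, and curve selection policy are illustrative assumptions, not the specific curves or hardware used by the embodiments described herein.

```python
import numpy as np

def apply_global_tone_curve(luma, curve_lut):
    """Apply a global tone curve to normalized luma values in [0, 1].

    curve_lut is a 1D lookup table sampling the tone curve; values
    between samples are linearly interpolated.
    """
    xs = np.linspace(0.0, 1.0, len(curve_lut))
    return np.interp(luma, xs, curve_lut)

def apply_local_tone_mapping(luma, curve_for_region, region=64):
    """Divide the frame into regions and apply a per-region tone curve.

    curve_for_region(tile) returns a LUT chosen for that tile, e.g.
    based on its mean brightness (an assumed selection policy).
    """
    out = np.empty_like(luma)
    h, w = luma.shape
    for y in range(0, h, region):
        for x in range(0, w, region):
            tile = luma[y:y + region, x:x + region]
            out[y:y + region, x:x + region] = apply_global_tone_curve(
                tile, curve_for_region(tile))
    return out

# Example: an S-shaped global curve (smoothstep) sampled into a 256-entry LUT.
xs = np.linspace(0.0, 1.0, 256)
s_curve = xs * xs * (3.0 - 2.0 * xs)
frame_luma = np.random.rand(1080, 1920)   # stand-in for decoded luma
mapped = apply_global_tone_curve(frame_luma, s_curve)
```

A production local tone mapping stage would also blend curves across region boundaries to avoid visible seams between regions; the sketch omits that for brevity.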
Adaptive Video Processing Systems

[0038] FIG. 1 illustrates adaptive video processing in an example video playback system, according to some embodiments. Embodiments of the adaptive video processing methods and apparatus may, for example, be implemented in video playback systems that include a server/encoding module or pipeline 110 and a decoding/display module or pipeline 130. The server/encoding pipeline 110 and decoding/display pipeline 130 may be implemented in the same device, or may be implemented in different devices. The server/encoding pipeline 110 may be implemented in a device or system that includes at least one video source 100 such as a video camera or cameras. The decoding/display pipeline 130 may be implemented in a device or system 120 that includes a target display panel 140 and that is located in an ambient environment 190. One or more human viewers 180 may be located in the ambient environment 190. The system 120 may include or may implement one or more controls 160 for the display panel 140, for example brightness and contrast controls. The system 120 may also include one or more sensors 150 such as light sensors or cameras. The ambient environment 190 may, for example, be a room (bedroom, den, etc.) in a house, an outdoor setting, an office or conference room in an office building, or in general any environment in which a system 120 with a display panel 140 may be present. The ambient environment 190 may include one or more light sources 192 such as lamps or ceiling lights, other artificial light sources, windows, and the sun in outdoor environments. Note that a system 120 and/or display panel may be moved or repositioned within an ambient environment 190, or moved from one ambient environment 190 (e.g., a room) to another (e.g., another room or an outdoor environment).

[0039] In at least some embodiments, the server/encoding pipeline 110 may receive input video from a video source 100 (e.g., from a video camera on a device or system that includes the server/encoding pipeline 110), convert the input video into another format according to a video encoding method, for example a compressed video format such as the H.264/Advanced Video Coding (AVC) format or the H.265 High Efficiency Video Coding (HEVC) format, and stream 112 the encoded video to a decoding/display pipeline 130. The decoding/display pipeline 130 may receive and decode the encoded video stream 112 to generate display video 132 for display on the display panel 140. In some embodiments, metadata 114 describing the encoding may also be provided by the server/encoding pipeline 110 to the decoding/display pipeline 130. For example, the metadata may include information describing gamut mapping and/or tone mapping operations performed on the video content. In some embodiments, the metadata 114 may be used by the decoding/display pipeline 130 in processing the input video stream 112 to generate the output display video 132 content.
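The content of metadata 114 is described only at a high level above; the sketch below shows one plausible, purely hypothetical packaging of gamut and tone mapping parameters that an encoding pipeline could pass to the decoding/display pipeline. All field names and values are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MappingMetadata:
    """Hypothetical per-stream metadata describing mapping operations
    applied on the encoding side (illustrative only)."""
    source_color_space: str = "BT.2020"   # gamut the content was mastered in
    encoded_color_space: str = "BT.709"   # gamut after server-side gamut mapping
    tone_curve_lut: List[float] = field(default_factory=list)  # sampled tone curve
    source_max_nits: float = 4000.0       # mastering display peak luminance
    encoded_max_nits: float = 1000.0      # peak luminance after tone mapping

# A decoding/display pipeline could consult such metadata when deciding
# whether further display-side mapping is needed:
meta = MappingMetadata(tone_curve_lut=[i / 255 for i in range(256)])
headroom_lost = meta.source_max_nits / meta.encoded_max_nits
```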
[0040] A video playback system as illustrated in FIG. 1 may implement one or more adaptive video processing methods and apparatus as described herein that may take into account various information including but not limited to video content, display information 142 (e.g., display panel 140 characteristics, control input 162, backlight levels, etc.), and environmental information 182 (e.g., ambient lighting 192, viewer 180 location, etc.) when processing and rendering video content for a target display panel 140 in an ambient setting or environment 190. The adaptive video processing methods and apparatus may use this information, obtained from sensor(s) 150, display panel 140, or from other sources, to adjust one or more video processing functions (e.g., noise/artifacts reduction, scaling, sharpening, tone mapping, color gamut mapping, frame rate conversion, color correction, white point and/or black point correction, color balance, etc.) as applied to the video data to render video for the target display panel that is adapted to the display panel according to the ambient viewing conditions.

... Adapted Vision, where adapted vision is a human perception range under current ambient conditions (e.g., ambient light level), for example as determined by adaptive video processing methods and apparatus as described herein, and where the mappings (indicated by the arrows) may include transforms (e.g., chromatic adaptation transforms) of the color appearance model. Color management that includes this additional step in the mapping process may be referred to as a perceptual color management system. A color appearance model of a perceptual color management system may be referred to as a perceptual color model or perceptual model.

[0086] FIG. 7 graphically illustrates perceptual color management in an ambient adaptive rendering system 700 at a high level, according to some embodiments. As in conventional color management, source video content 720 may be mapped 702 to the measured display response range according to display information 730 to generate video content 720A. However, an additional mapping 704 is applied of the display response into a determined adapted human vision range, generating output video 720B that is adapted to current viewing conditions according to the environment information 740 and display information 730. In some embodiments, the additional mapping 704 may involve convolving by the inverse of the difference between ideal human vision in a given environment (e.g., the curve in FIG. 6A) and the portion of that which the display panel actually represents (e.g., the display range in FIG. 6B) according to the measured response of the display panel.

[0087] In some embodiments, ambient adaptive rendering system 700 may be implemented on the display side of a video playback system. For example, in some embodiments, ambient adaptive rendering may be implemented by one or more components of a decoding/display pipeline of a video playback system as illustrated in FIGS. 1 through 8.
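The two mappings of FIG. 7 can be read as a composition: source content is first fit to the measured display response (mapping 702), and the display output is then remapped into the viewer's currently adapted perceptual range (mapping 704). The sketch below reduces this to scalar luminance ranges as a simplifying assumption; a real perceptual color model operates on full color appearance correlates, and all constants and function names here are illustrative.

```python
import numpy as np

def map_to_display(source_nits, display_min_nits, display_max_nits,
                   source_max_nits=4000.0):
    """Stage 1 (mapping 702): fit source luminance into the measured
    display response range. Linear for simplicity; a real system would
    use a tone curve."""
    normalized = np.clip(source_nits / source_max_nits, 0.0, 1.0)
    return display_min_nits + normalized * (display_max_nits - display_min_nits)

def map_to_adapted_vision(display_nits, adapted_min_nits, adapted_max_nits,
                          display_min_nits, display_max_nits):
    """Stage 2 (mapping 704): remap display output into the portion of the
    human perceptual range available under current ambient conditions."""
    t = (display_nits - display_min_nits) / (display_max_nits - display_min_nits)
    return adapted_min_nits + t * (adapted_max_nits - adapted_min_nits)

# Example: the measured panel covers 0.05-600 nits, but under the current
# ambient light the viewer's adapted range is assumed to be 0.5-400 nits.
frame = np.random.rand(1080, 1920) * 4000.0        # stand-in HDR luminance
on_display = map_to_display(frame, 0.05, 600.0)
perceived = map_to_adapted_vision(on_display, 0.5, 400.0, 0.05, 600.0)
```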
[0088] Information that can be obtained and fed into a perceptual color model of a perceptual color management system implemented in an ambient adaptive rendering system 700 may include, but is not limited to, display information 730, for example various display characteristics and settings, and environment information 740 including but not limited to viewer and lighting information. Some of this information may be static (e.g., display characteristics such as bit depth and dimensions), while other information may be dynamic (e.g., current display settings, backlight level, ambient light, reflective light, viewer position, viewer location, etc.). This information may be collected and used to adaptively render video content 720 for display according to current ambient conditions as applied to a perceptual color model. In some embodiments, a device that includes the display panel that the video content 720 is being adapted for by the ambient adaptive rendering system 700 may include one or more sensors, for example ambient light sensors, cameras, motion detectors, and so on, that may be used to collect at least some of the information 730 and 740 used in the perceptual color model.

[0089] The following describes various measurements, metrics, or characteristics that may be obtained and input to a perceptual color model in an ambient adaptive rendering system 700, according to some embodiments. However, this list is not intended to be limiting:

[0090] Physical dimensions and other static characteristics of the display.

[0091] Measurements. These metrics may be pre-measured for a type of display panel or may be measured for an individual display panel:
  [0092] The measured response of the display panel: a mapping between the input levels from source video content and the light output levels of the display panel for each color (e.g., RGB) channel.
  [0093] Measured native white point of the display panel.
  [0094] Measured light leakage from the display panel (contributes to the pedestal as illustrated in FIG. 6B).
  [0095] Measured reflective light off the display panel (contributes to the pedestal as illustrated in FIG. 6A).
  [0096] Measured maximum (and minimum) backlight level for the display.

[0097] Ambient metrics, for example captured by sensor(s) or determined from data captured by sensor(s). A device that includes a display panel may also include one or more sensors. The sensors may include one or more of, but are not limited to, ambient light sensors, color ambient light sensors, and cameras. The light sensors and cameras may include one or more backward-facing (towards the viewer or user) sensors and/or one or more forward-facing (away from the viewer or user) sensors:
  [0098] Light currently striking the display panel. This may be determined for each color channel.
  [0099] Amount of light reflecting off the display. This may be determined for each color channel.
  [0100] Metrics (e.g., brightness, color, etc.) of the field of view or background that the viewer/user is facing.
  [0101] The white point that the viewer is adapted to.
  [0102] Position of the viewer(s) with respect to the display panel (e.g., distance, viewing angle, etc.). In some embodiments, a user-facing camera of the device that includes the display panel may capture an image of a viewer, and the image may be analyzed to estimate distance from the viewer to the device. For example, the image of the viewer's face may be analyzed to determine the distance, based on the measured distance between the viewer's eyes in the captured image, as human eyes tend to be about the same distance apart. The estimated distance to the viewer may, for example, be used to estimate the field of view that the display panel subtends (see the sketch following this list).

[0103] Dynamically determined display metrics:
  [0104] Current backlight level of the display panel.
  [0105] Current average pixel brightness (pixels actually illuminated). For example, this metric may be used in determining the brightness of the currently displayed video content. This may be determined for each color channel.
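As an illustration of the viewer distance estimate described in the [0102] item above, the following sketch converts the pixel spacing between detected eyes into an approximate distance with the pinhole camera relation, then derives the field of view that the panel subtends. The average interpupillary distance, focal length, and panel width are assumed constants, and the face/eye detection step is left out.

```python
import math

AVG_INTERPUPILLARY_DISTANCE_M = 0.063   # ~63 mm average adult eye spacing (assumed)

def estimate_viewer_distance_m(eye_separation_px, focal_length_px):
    """Pinhole camera estimate: distance = focal_length * real_size / image_size."""
    return focal_length_px * AVG_INTERPUPILLARY_DISTANCE_M / eye_separation_px

def display_field_of_view_deg(panel_width_m, viewer_distance_m):
    """Horizontal angle the display panel subtends at the viewer's eye."""
    return math.degrees(2.0 * math.atan(panel_width_m / (2.0 * viewer_distance_m)))

# Example: eyes detected 120 px apart by a front camera with f ~= 1400 px.
distance = estimate_viewer_distance_m(120.0, 1400.0)   # ~0.74 m
fov = display_field_of_view_deg(0.30, distance)        # ~23 degrees for a 30 cm panel
```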
[0106] While not shown in FIG. 7, in some embodiments, in addition to display information 730 and environment information 740, other information may be obtained and used by the ambient adaptive rendering system in adapting the video to the environment. For example, in some embodiments, the ambient adaptive rendering system 700 may target the displayed video to a viewer's mood or viewing intentions, which may be referred to as viewing mode. For example, in some embodiments, lighting, location, time of day, biometrics, and/or other data may be acquired and used to automatically determine a viewing mode for the video content 720. The determined viewing mode may then be input to the perceptual color model to adjust the source video content 720 to the viewing mode. For example, viewing modes may range from a calm or relaxed viewing mode to a cinematic or dynamic viewing mode. In some embodiments, user input (e.g., via a display panel control, remote control, smartphone app, etc.) may instead or also be used in determining or adjusting a viewing mode for video content 720. For example, in some embodiments, a viewer may adjust a slider or switch for a "mood" or "intention" parameter, for example to adjust or select between two or more viewing modes on a discrete or continuous scale between a most relaxed "calm" mode and a dynamic, brightest, "cinematic" mode.

[0107] Various embodiments of an ambient adaptive rendering system 700 may use various image processing algorithms and techniques including but not limited to color gamut mapping and global or local tone mapping techniques to apply the rendering adjustments to the video content 720. In some embodiments, at least a portion of the ambient adaptive rendering 700 functionality may be implemented using one or more Graphics Processor Units (GPUs). For example, some embodiments may implement a custom shader that may apply adjustments determined according to the perceptual color model to video content 720. In some embodiments, at least a portion of the ambient adaptive rendering 700 functionality may be implemented in or by other hardware including but not limited to custom hardware. For example, in some embodiments, one or more Image Signal Processor (ISP) color pipes may be used to apply the rendering adjustments to the video content 720.

[0108] In some embodiments, one or more color lookup tables (CLUTs) may be used to apply at least some of the adaptive adjustments to the video content 720. For example, in some embodiments, three 1D (one-dimensional) LUTs may be warped in hardware to apply adaptive adjustments to the video content 720.
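A minimal sketch of the three-1D-LUT approach from paragraph [0108]: one lookup table per color channel, each remapping its channel independently. The example LUT contents (per-channel gains nudging the white point warmer) are assumed values; in the embodiments above the tables would be derived from the perceptual color model and warped in hardware rather than applied in software as here.

```python
import numpy as np

def apply_channel_luts(rgb, lut_r, lut_g, lut_b):
    """Apply one 1D LUT per color channel to an HxWx3 float image in [0, 1]."""
    xs = np.linspace(0.0, 1.0, len(lut_r))
    out = np.empty_like(rgb)
    out[..., 0] = np.interp(rgb[..., 0], xs, lut_r)
    out[..., 1] = np.interp(rgb[..., 1], xs, lut_g)
    out[..., 2] = np.interp(rgb[..., 2], xs, lut_b)
    return out

# Example LUTs: warm the image slightly toward a lower ambient white point
# (the per-channel gains are assumed values, not taken from the patent).
xs = np.linspace(0.0, 1.0, 1024)
lut_r = np.clip(xs * 1.00, 0.0, 1.0)
lut_g = np.clip(xs * 0.97, 0.0, 1.0)
lut_b = np.clip(xs * 0.90, 0.0, 1.0)
frame = np.random.rand(1080, 1920, 3)
adapted = apply_channel_luts(frame, lut_r, lut_g, lut_b)
```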
[0109] Embodiments of an ambient adaptive rendering system 700 may automatically adapt HDR video content to a target display panel based on the display panel's characteristics and capabilities.

[0110] Embodiments of an ambient adaptive rendering system 700 may dynamically adapt video content for display in different viewing environments, which may provide improved viewing in different environments and/or under different ambient conditions. Thus, the ambient adaptive rendering system 700 may provide an improved viewing experience for users of mobile devices by automatically adapting displayed content according to changes in the environment in which the users are viewing the content.

[0111] By dynamically adapting a display panel to different environments and ambient conditions, embodiments of an ambient adaptive rendering system 700 may use less backlight in some viewing environments, which for example may save power on mobile devices. In some embodiments, the backlight can be mapped into the perceptual color model, which may for example allow the ambient adaptive rendering system 700 to make the display act more paper-like when adapting to different environments and ambient conditions. In other words, the ambient adaptive rendering system 700 may be able to match the display to the luminance level of paper in the same environment, as well as track and adjust to or for the white point of the viewer's environment.

[0112] In some embodiments, information collected or generated by the ambient adaptive rendering system 700 may be fed forward (upstream) in a video processing pipeline and used to affect video processing before the video content is processed by the ambient adaptive rendering system 700. For example, referring to FIGS. 1 through 3, ambient adaptive rendering may be implemented in or by a display pipe and/or display backend component of a display pipeline. Display and/or environment information may be fed upstream to one or more components or stages of the display pipeline (e.g., to a decoder, video pipe, and/or frame rate conversion stage, or to a compositing component that composites other digital information such as text with streamed video content) and used to affect video content processing at those upstream components of the display pipeline.

[0113] In some embodiments, referring to FIG. 1, display and/or environment information collected by the display-side ambient adaptive rendering system 700 may be fed back to a server/encoding pipeline and used to affect the server-side processing of video content before the content is streamed to the device that includes the target display panel. For example, in some embodiments, the display and/or environment information may indicate that the capabilities of the target display panel do not support full HDR imaging in an ambient environment. In response, the server/encoding pipeline may process and encode input HDR content into a lower dynamic range that can be displayed by the target display panel under the current conditions. This may, for example, save transmission bandwidth when a target display panel cannot support the full dynamic range that is available in the source video.
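The feedback path described in paragraph [0113] amounts to the display side reporting its effective capabilities and the server clamping its encoding target accordingly. The following sketch shows one assumed shape for that exchange; the message fields and the derating rule are illustrative, not part of the described system.

```python
from dataclasses import dataclass

@dataclass
class DisplayFeedback:
    """Hypothetical display-side report fed back to the server/encoding pipeline."""
    panel_max_nits: float   # panel capability
    ambient_lux: float      # current ambient light level

def choose_encode_max_nits(fb: DisplayFeedback) -> float:
    """Pick the peak luminance to encode for, derating the panel's capability
    as ambient light washes out the darker end of its range (assumed rule)."""
    derate = 1.0 / (1.0 + fb.ambient_lux / 10000.0)
    return max(100.0, fb.panel_max_nits * derate)

# A bright room halves the useful range here, so the server would tone-map
# to ~300 nits instead of streaming full 600-nit HDR, saving bandwidth.
target = choose_encode_max_nits(DisplayFeedback(panel_max_nits=600.0,
                                                ambient_lux=10000.0))
```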
Asindicaed 1904 of FIG. 9, the HDR video content may be displayed t0 the HDR display panel, The elements of FIG. 9are described in more detail with reference to FG. 8 [0117] FIG. & illusires an example devoding/dsplay Pipeline performing SDR-to-HIDR conversion on SDR int Video to generate display video contest adapted to an HDR splay, according to some embodiments. In at least some ‘embodiments, a decoding/ display pipeline 810 may be con- figured to process HDR vidoo input to generate HDR display video 882 for a true display panel 840, However, decoding! display pipeline 810 may instead receive SDR’ video 800 input [0118] As shown in FIG. 8, embodiments ofthe decoding! splay pipeline 810 may leverage content characteristics 820 Slermined from input SDR video 800 content and display characteristics 830 of the display panel 840 to convert the SSD video 800 input to HDR video 832 output for display to ‘an HDR-enabled display panel 849, In some embociments, ‘decoding! display pipeline 810 may include video processing funetions or modules including but not limited to decoder 812, video pipe 814, frame rate conversion 816, and display ‘management #18 funetions or modules. Content characters tics 820 and display charscterstcs 880 may be provid to ‘one or more ofthese mos.les and used in adapting the respec- tive function(s) for eonverting SDR video 800 input to HDR video 832 output [0119] Anencoded SDR video 800 stream (eg. an 1.264) AVC or H.26S/HEVC encoded video stream) may be received ata decoder 812 component ofthe decoding/display pipeline 810, The decoder 312 may decode/decompress the Input video to generate video eontent that is fed 10a video pipe 814. Video pipe 814 may, for example, perfores ose! Antifoet reduction, scaling and sharpening. In some embod US 2015/0245043 Al rents, either the decoder 812 or the video pipe 814 may ‘convert the input SDR video 800 into an HDR-compatible format, for example by converting w a format with an ‘extended bit depth to support HDR imaging. [0120] Frame rate conversion S16 may convert the video ‘output by the video pipe 814 10 a higher frame ate by gener ting intermediate video frame(s) between existing frames. ‘Converting to & higher frame rate may, for example, help 10 ‘compensate for judder that may appear in HDR video. Dis play management 818 may include a display pipe that may Perform video processing tasks including but ot limited 0 Scaling, colors space conversion(S), color gamut adjustment, tnd tone mapping, and display backend that mney perform ‘aditional video processing tasks including but not mit color (chroma) and tone (luma) adjustments, bac ‘adjustments, gamuma correction, white point correction, black, Polat correction. and spatio-temporal dithering to generate HDR display video 832 output toa target HDR-enabled dis- play panel 840. 10121] In embodiments, content characteristics 820 and display characteristics 88 may be provided to one or moreof the modules in the decodingdiplay pipeline 810 and used in ‘adapting the respective function(s) for converting SDR video 800 input to HDR video 832 output. Various enhancements may be performed by the decoding display pipeline 810 based on the characteristics that may improve the display of| the video content when coaverted from SDR to the higher dynamic minge supported hy the display panel 840. The fol- Towing describes examples of enhancements that may be performed when converting SDR video to IDR video, and is ‘ot intended tobe Kimiting. 
10122] Insome embodiments in response to detecting SDR video 800 content, conten characteristics 820 module may ‘analyze the video content to look for areas in video frames ‘with specular highlights. Thus, content characteristics that ‘are detected may inelude specular highlights in the input video frames. The decoding/ display pipeline 810 may reduce the size of at least some of the specular highlights, and/or ‘increase the brightness of at least som ofthe specular high: Tights to make the specular highlights appear more impres- sive when displayed, 10123] In some embodiments, dark or shadow regions in the input SDR video 800 content may be detected and auto- matically processed differently by the decoding/display pipe- Tine 810 forimproved HDR display. For example, the devod- ingdisplay pipeline 810 may apply stronger noise reduction to te detected dark or shadow regions to reduce noise in the darker regions of the video content when displayed to the HDR display panel 840, 10124) As another example, the decodina/dispay pipeline 10 may adjust or sclet tone curves used in tone mapping to ‘despen the shadow areas, The tone curves may be non-linear, {or example S-shaped tone curves, to reduce nose in the dark regions and provide better contrast thanean be obtained using ‘conventional linear scaling. In some embodiments, the tone ‘curves may be dynamically selected based on one or more detected content characteristics andor display characters ties. In some embodiments, one or more metrics about the ambient environment (eg, ambient lighting metrics) may be ‘ected and used in determining the tone curves. In some ‘embodiments, non-near, global tone curve maybe selected {ora video frame or sequence of frames. In some embodi- ments, instead of or in addition to a global tone euve, the Aug. 27, 2015 Video frames may’ be subdivided into multiple regions, and local tone curves may be dynamically selected for each region. [0125] In some embodiments, color tasitions caused by color clipping (e 2. during tone oF gamut mapping on the encoder side) may’be detected, and the decodiny/display pipeline 810 may attemp to reconstruct the correct color(s)¥0 Smooth the colo transitions. [0126] In some embodiments, bit depth extension from SDR to HDR (eg. S+bit SDR to 10-bit HDR) may be per ormed by the decoding/display pipeline 810 using tek iques tha attempt to avoid banding artifats by smoothing the image content when extended nto the lager bit dept. For ‘example, in some embodiments, rather than performing @ Tinear extension into the expanded bit dept, data valves for input pixels may be analyzed to determine slope, and the slope may be used to perform a non-linear extension into the ‘expanded bt depth to produce a smoother rendering of the extended bits than ean be achieved using linear function. Server-Side Adaptive Video Processing [0127] Referring again to FIG. 1, in some embodiments, ‘adaptive video processing fora target display panel 140 may be implemented in or by a serveriencoding pipeline 110. ‘These embodiments may be referred to as server-side adap- tive video processing systems. Embodiments ofa server-side system may, for example, be used iynamic range (HDR) and wide color gamut (CG) video playback to an HDR-enabled display panel in ceases where the display-side video processing pipeline does ‘ot support HDR/WCG imaging, does aot support the fal {ynantc range and color gamut ofthe target display panel, or is otherwise limited. 
Server-Side Adaptive Video Processing

[0127] Referring again to FIG. 1, in some embodiments, adaptive video processing for a target display panel 140 may be implemented in or by a server/encoding pipeline 110. These embodiments may be referred to as server-side adaptive video processing systems. Embodiments of a server-side system may, for example, be used to support high dynamic range (HDR) and wide color gamut (WCG) video playback to an HDR-enabled display panel in cases where the display-side video processing pipeline does not support HDR/WCG imaging, does not support the full dynamic range and color gamut of the target display panel, or is otherwise limited. For example, embodiments of the server-side adaptive video processing system may be used to support HDR and WCG video streaming to small or mobile devices, or to legacy devices, that may have limited display-side video processing capabilities.

[0128] FIG. 11 is a flowchart of a video playback method in which a server-side encoding pipeline generates output video data adapted to a target display panel, according to some embodiments. As indicated at 1100 of FIG. 11, a server/encoding pipeline may obtain video content for a target display panel. For example, the server/encoding pipeline may receive input video from a video source such as a video camera on a device or system that includes the server/encoding pipeline, and may be directed to encode and stream the video content for display on a particular target display panel. The target display panel may be on the same device or system as the server/encoding pipeline, or alternatively may be on a different device or system. The target display panel may support high dynamic range (HDR) and wide color gamut (WCG) imaging.

[0129] While not shown, in some embodiments, the server/encoding pipeline may obtain or determine one or more characteristics of the input video content. For example, in some embodiments, the video content may be analyzed to determine, for example, how wide the dynamic range of the video content is, how much movement there is from frame to frame or scene to scene, color ranges, specular highlights, contrast, bright and dark regions, and so on. This content information may be used along with other information in processing the video content for display on the target display panel.
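The content analysis described in paragraph [0129] can be reduced to a handful of per-stream statistics. The sketch below computes several of the named characteristics (dynamic range width, frame-to-frame movement, bright and dark regions) from normalized luma frames; the thresholds and the use of mean absolute frame difference as a motion proxy are assumptions for illustration.

```python
import numpy as np

def analyze_content(frames):
    """Compute simple content characteristics from a list of normalized
    luma frames (floats in [0, 1])."""
    stack = np.stack(frames)
    return {
        # Width of the used dynamic range, ignoring outliers.
        "dynamic_range_width": float(np.percentile(stack, 99.5)
                                     - np.percentile(stack, 0.5)),
        # Mean absolute frame-to-frame difference as a crude motion measure.
        "motion": float(np.mean(np.abs(np.diff(stack, axis=0))))
                  if len(frames) > 1 else 0.0,
        # Fractions of pixels in bright and dark regions (thresholds assumed).
        "bright_fraction": float(np.mean(stack > 0.9)),
        "dark_fraction": float(np.mean(stack < 0.1)),
    }

frames = [np.random.rand(540, 960) for _ in range(8)]
print(analyze_content(frames))
```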
For example, the dynamic rage of the source data may’be mapped to the bit depth of the target display panel according 10 the display information, As another example, tne curves andlor transfer Janetions used inthe tone mapping technique may be modi- fied oradjusted based upon caisor more mete including but not limited to curent ambient lighting metrics atthe display pane] as indicated by the environment information. In some ‘embodiments, a non-linear global tone curve maybe selected {ora video frame or sequence of frames being processed in the server/encoding pipeline based at feat in part upon the display and/or enviroameat information, Ia some embod ments, instead of or in addition to a global tone eurve, the video frames may be subdivided into multiple regions, and Jocal tne curves may be dynamically selected foreach region based at least in part upon the display and/or environment information 10132] | Asindisted a1 1106 of FIG. 11, theserverieneoding Pipeline may map the video content toa color gamut fo the target display panel according to the obsined information, la some embodiments, the serverencoding pipeline maps the Video content to a color gamut of the target display panel as indicated by the obtained information according to's color amut mapping technique. The color gamut mapping tech- nique may be adjusted according to the obtained information. For example, the color gamut of the source data may be ‘mapped tothe bit depth of the target display panel according tothedisplay information, As anotherexample,curves, ans- {er funetions, andr lookup tables may be selected according ‘othe particular color gamut supported by the display panel as indicated in the display information. As another example, ‘curves, transfer functions, and/or lookup tables used in the “gamut mapping technique may he modified or adjusted based Upon one or more metrics ineluding but aot imited to curent ambient ighting metrics atthe display pancl as indicated by the environmest. Aug. 27, 2015 (0133) Asindicatdat 1108 of FG. 11, thesererencoding Pipeline may’ encode the video conta and send the encoded "ido content oa decoding display pipeline associated with ‘be trae display panel. The vdeo ts may, forexample, be encoded bythe serverlencoing pipeline cording to a.com pressed video Tormat sich sa FL264AVC oF H.268/HEVC Format for delivery to the target display panel The encoded video content may for example, be wsiten to.a memory for access by a deodingdiplay pipeline associated with the target display panel, provided or sueamned to the devoting! display pipeline associated with the target display pane over ‘Wired or wireless network connection, or eerie deli red to the decoding display pipeline associated with the target display panel (0134) As idicated at 1110 of FIG. 11, the decoding play pipeline decades and displays the video content. Since Cisplay pane-specitic tone at coor gamut mapping oth amie range and color gat supporto by the tart ‘ey panel is peronmed on the sevedencoding side the Gocodingidinplay pipeline may not require any changes or ‘modification © sippor HDR andor WCG imaging {0138} Nowe that a serverencoding pipeline may apply @ ‘method a ilosted in FIG. 11 to map the same video con- tenttotwoormore diferent tags display panels according to the particular characteristics andor environments ofthe dis ply panels. 
For example, the sve encoding pipeline may ‘Mbp video processing and encoding functions according to the dispay-specific information to adapt video content to target display panels that suppor diferent bit depts, color spaces, color gamuis, andor dynamie ranges, Also note that {he onde of processing in FIG. 11 andin the ier Howes tun flow diagrams not ntendd oe imitng. Fr example, insomeembodimentsofthe video playback method shown in FIG. 11, clement 1106 (color gamut mapping) may occur before element 1104 (dynamic range mapping). (0136) ‘TheclomensofFIG. 11 are describ inmone detail With efereace 10 FIGS. 1 and 10 {0137} Referringagain'o FIG. inembodimentsofserver- Side adaptive video processing syste, a server encoding Pipeline 110 may map video content obtained from «source 100 fo a tanuet display panel 140, For example, the video content may be HDR and WCG video costent obtained foes ’n image sensor or camera. In some embodiments i map- Ping the video content to 2 target display panel 140, the Servelencoing pipeline 110 maps the video comtent 10 a olor gamit of the target display pal 140 according to a color mot mapping technique, and naps the video content toa dynunie rage forthe trget display panel M0 scoonling toa Tone mapping technique. In peering the mapping the Scrveriencoding pipeline 110 may take into aceon one oF ‘more of video content, capabilities and characteristics ofthe target display pan! 140, and information about the environ ‘ment 190 atthe target display panel 1, inching but not Timitedo lighting 192 nd viewer 180 ifcemation. [0138] _Atleastsome ofthe infomation thatmay beusedhy the server encsing pipeline 110 in mapping Video content to a target display panel 140 may be captured by 8 device or System 120 that inclodes the target display panel 140 and a
