Photography Guide 2025

High Dynamic Range
Stills Photography

A comprehensive modern guide to capturing, processing, and displaying HDR images with unprecedented realism and detail.


Contents

1. Introduction
2. Dynamic Range & Human Perception
3. Use Cases and Benefits
4. Data, Colour Spaces & Bit Depth
5. HDR Displays and Standards
6. File Formats and Metadata
7. Capturing HDR Stills
8. Editing HDR Images
9. Sharing HDR Images
10. Limitations and Challenges
11. Best Practices
12. Conclusion
01

Introduction

High dynamic range (HDR) photography represents a fundamental shift in how we capture, process, and display images. While traditional photography has long been constrained by the ~6-10 stop dynamic range of standard displays, modern HDR workflows can preserve and present 12+ stops of real-world luminance—much closer to what our eyes naturally perceive.

The evolution has been dramatic. Early "HDR" techniques (circa 2005-2015) used tone-mapping to compress multiple exposures into SDR's limited container—often producing those characteristic surreal, over-processed looks. Today's true HDR is different: images are captured with 10-bit or higher colour depth, encoded in formats that preserve extended luminance information (with PQ encoding maintaining absolute values), and displayed on screens achieving 1,000+ nits peak brightness with near-perfect blacks. The result isn't just "better pictures"—it's a viewing experience that can genuinely replicate the sensation of looking through a window.

Making HDR work requires alignment across the entire imaging pipeline: from camera sensors that capture extended dynamic range, through colour spaces and encoding standards that preserve this information (like Rec. 2020 and PQ/HLG), to displays that can actually render these extreme luminance ranges. This guide walks through each stage—the underlying theory, practical capture techniques, RAW processing workflows, and the standards that ensure your HDR images display correctly across different devices and platforms.

02

Dynamic Range and Human Perception

Dynamic range is the ratio between the brightest and darkest detail that can be reproduced. It is measured in stops; each stop represents a doubling of luminance. Human vision can adapt to an enormous range of lighting conditions over time through pupil adjustment and neural adaptation, but the simultaneous contrast range - what we can perceive in a single glance without adaptation - is approximately 10-14 stops, which aligns well with modern camera sensor capabilities.
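Since each stop doubles the luminance, a contrast ratio converts to stops with a base-2 logarithm. A minimal sketch in Python (the nit values are illustrative):

```python
import math

def dynamic_range_stops(max_luminance: float, min_luminance: float) -> float:
    """Dynamic range in stops: each stop represents a doubling of luminance."""
    return math.log2(max_luminance / min_luminance)

# An HDR panel with 1,000-nit peaks and 0.001-nit blacks (1,000,000:1 contrast):
print(round(dynamic_range_stops(1000, 0.001), 1))  # ~19.9 stops of contrast
```

Note that a static contrast ratio expressed in stops is an upper bound; the usable simultaneous range of a real display is smaller.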

SDR Displays

When images with a dynamic range of 14 stops are displayed on SDR monitors, which are designed for about 100 nits peak brightness and 8‑bit colour, much of the highlight and shadow information must be clipped or compressed.

HDR Displays

HDR displays lift this ceiling: consumer HDR TVs deliver 500–1,200 nits peak brightness, and modern smartphone and tablet displays reach peak brightness of 1,600 nits or more.

Combined with 10‑bit colour (1.07 billion colours) and high contrast ratios of 1,000,000:1 or higher, HDR allows photographers to show what the camera truly captured.

03

Use Cases and Benefits

HDR excels when capturing visible light sources, reflections, and high-contrast scenes—preserving both shadow and highlight detail while maintaining the smooth tonal transitions that SDR would compress into banding or posterisation.

🌅

Sunsets & Sunrises

Colour gradients in the sky and intense sunlight can be reproduced without clipping or posterisation.

🌃

Night Cityscapes & Fireworks

Small, bright light sources against a dark background appear vivid and detailed.

🏛️

Interior Architecture

HDR reveals details in both bright windows and dim rooms simultaneously.

👤

Lifelike Portraits

Subtle specular highlights on skin, hair and eyes retain their sparkle without washing out, and shadows remain soft. HDR portraits can look more three‑dimensional and realistic when highlights are carefully controlled.

🚗

Automotive Photography

HDR captures headlights and the nuances of LED accent lighting while preserving the original intent.

✈️

Aviation Photography

HDR faithfully reproduces skies, reflections, lighting and fine details in the aircraft body.

Overall, HDR technology makes for a much more realistic and detailed viewing experience, offering higher peak brightness and detail in highlights with billions of colours.

04

Data, Colour Spaces, Transfer Functions and Bit Depth

The journey from camera sensor to display involves several crucial transformations. Understanding these steps—and the choices made at each stage—is essential for creating and delivering compelling HDR images. While these steps are necessary for both SDR and HDR workflows, the methods differ.

4.1 Scene-Referred vs Display-Referred Data

All digital images begin life as scene-referred data: linear values proportional to the actual light intensities captured by the sensor. These values can far exceed what any display can reproduce - a sunlit scene might contain 100,000 nits while even HDR displays peak at 1,000-4,000 nits.

For maximum editing flexibility, this linear scene data is preserved in RAW formats. However, for distribution to end users, images must be transformed for real-world viewing. While SDR uses purely display-referred encoding, HDR standards like PQ maintain scene-referred values up to 10,000 nits - preserving the relationship to real-world luminance even though displays will tonemap these values to their actual capabilities.

The transformation process involves three critical decisions that define how images will be stored and displayed:

⚡
Transfer Function

Gamma, PQ, or HLG — determines how light values are encoded

🎨
Colour Space

sRGB, P3, Rec.2020 — defines the range of reproducible colours

📊
Bit Depth

8, 10, or 12-bit — sets the precision of the encoding

The fundamental difference between SDR and HDR lies in how these three components work together:

SDR (Display-Referred)

Uses relative encoding where maximum code values mean "display your peak brightness." This gamma-based approach limits dynamic range to about 6 stops, regardless of whether it's paired with narrow gamuts (sRGB/Rec.709) or wider ones (Display P3).

HDR (Display/Scene-Referred)

Encodes absolute luminance values (PQ's scene-referred approach) where code values represent specific nit levels, or maintains relative encoding (HLG's display-referred approach) that adapts to each display's peak brightness levels. Both enable 10-14 stops of dynamic range and expanded colour gamuts.

The following sections explore each of these components - transfer functions, colour spaces, and bit depth - in detail to understand how they enable HDR photography's dramatic improvements in image quality.

4.2 Understanding Transfer Functions (OETFs/EOTFs)

Transfer functions solve a critical problem: how to efficiently encode the enormous range of light values captured by the camera sensor into limited digital code values.

Why We Need Transfer Functions

Linear encoding wastes precious bits. In 8-bit linear, shadow gradients would show severe banding because most code values would be spent on bright areas where our eyes can't perceive small differences. Transfer functions redistribute these limited code values to match human perception - more in shadows where we see fine distinctions, fewer in highlights where we don't.
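To see why, count how many code values each encoding devotes to the darkest tones. A small sketch (8-bit, with gamma 2.2 assumed for illustration):

```python
def codes_below(luminance, bits=8, gamma=None):
    """Count code values whose decoded luminance falls below `luminance` (0..1).

    gamma=None means linear encoding (code/n == luminance);
    otherwise codes decode as (code/n) ** gamma.
    """
    n = 2 ** bits - 1
    if gamma is None:
        signal = luminance                  # linear: code value tracks luminance
    else:
        signal = luminance ** (1 / gamma)   # invert the decode curve
    return int(signal * n)

# Code values devoted to the darkest 1% of the luminance range (8-bit):
print(codes_below(0.01))             # linear: 2 codes -> severe shadow banding
print(codes_below(0.01, gamma=2.2))  # gamma 2.2: 31 codes -> smooth shadows
```

The gamma curve spends over ten times as many code values on the deepest shadows, which is exactly where human vision distinguishes the finest steps.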

The Complete Pipeline

  1. Linear scene light hits the camera sensor
  2. OETF (Opto-Electronic Transfer Function): Applied during encoding, converts linear light values to non-linear code values (for example when a RAW file is saved as JPEG in-camera)
  3. Storage/Transmission: Perceptually-encoded values require less bandwidth
  4. EOTF (Electro-Optical Transfer Function): Applied by the display, converts code values back to linear light output
  5. Result: Display emits linear light → it looks perceptually correct
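The gamma round trip in steps 2 and 4 can be sketched as follows (a simplified pure-gamma model, ignoring the piece-wise segments real standards use):

```python
def oetf(linear: float, gamma: float = 2.2) -> float:
    """Camera-side encoding: linear scene light -> non-linear code value."""
    return linear ** (1 / gamma)

def eotf(code: float, gamma: float = 2.2) -> float:
    """Display-side decoding: code value -> linear light output."""
    return code ** gamma

# A mid-grey patch survives the encode/decode round trip:
scene = 0.18                  # 18% scene reflectance (linear)
code = oetf(scene)            # ~0.46 -> placed near the middle of the code range
display = eotf(code)          # back to ~0.18 linear light
print(round(code, 2), round(display, 2))
```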

The critical difference in HDR systems is which transfer function is standardised - this fundamentally determines whether the system is scene-referred or display-referred.

Traditional SDR (Display-Referred)

  • Standardises both OETF and EOTF as paired functions (e.g., Rec. 709)
  • OETF: gamma ~1/2.2 for encoding
  • EOTF: gamma ~2.2-2.4 for display
  • System gamma: The product of encoding and display gammas (~1.2), providing contrast enhancement for dim viewing compared to the scene
  • OOTF (Opto-Optical Transfer Function): The complete scene-to-display transform, which includes the system gamma PLUS any artistic adjustments baked in during encoding
  • Display-referred because values are relative: code value 255 = "display your maximum brightness"
  • Every display shows different absolute brightness (100-400 nits) but maintains relative relationships
  • Content mastered assuming ~100 nit reference display in dim environment

Modern HDR Transfer Functions

Perceptual Quantizer (PQ/ST.2084) — Scene-Referred

  • Standardises an EOTF (display decode function) specified in ST.2084
  • Content is encoded using the inverse of this EOTF (effectively an OETF)
  • Creates a scene-referred system: code values represent absolute luminance (0-10,000 nits)
  • Code value 58% always means 203 nits, regardless of display capability
  • Based on Barten's model for perceptually uniform steps across the entire range of visible light intensities
  • System gamma: 1.0 (the inverse EOTF for encoding and EOTF for display cancel out)
  • OOTF: May include creative adjustments during mastering
  • Used in HDR10, Dolby Vision, and gain map implementations

Display handling: When content exceeds display capabilities, the display can hard clip at maximum (simpler, but loses detail), tone map to preserve detail (more complex, varies by manufacturer), or use metadata (HDR10+, Dolby Vision) to guide the mapping.
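The PQ curves themselves are published in ST 2084, so they can be written down directly; this sketch implements the EOTF and its inverse from the standard's constants:

```python
# PQ (SMPTE ST 2084) constants
M1 = 2610 / 16384          # 0.1593...
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(nits: float) -> float:
    """Absolute luminance (0..10,000 nits) -> PQ code value (0..1)."""
    y = (nits / 10_000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def pq_eotf(code: float) -> float:
    """PQ code value (0..1) -> absolute luminance in nits."""
    e = code ** (1 / M2)
    y = max(e - C1, 0) / (C2 - C3 * e)
    return 10_000 * y ** (1 / M1)

# The ~58% code value pins SDR reference white at 203 nits on any PQ display:
print(round(pq_inverse_eotf(203), 2))         # ~0.58
print(round(pq_eotf(pq_inverse_eotf(203))))   # 203
```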

Hybrid Log-Gamma (HLG) — Display-Referred

  • Standardises an OETF (camera/encoding function) specified in ITU-R BT.2100
  • The EOTF is derived from this OETF and includes a system gamma adjustment
  • Display-referred: code values are relative, with 100% = display maximum
  • Bottom half uses a square-root curve (gamma 0.5, close to SDR's encoding gamma); top half is logarithmic for extended highlights
  • System gamma: Variable, typically 1.2, built into the HLG OOTF design
  • OOTF: Automatically adjusts based on display peak brightness (no tone mapping needed)
  • No metadata required - each display interprets relative to its own capabilities
  • Backward compatible: SDR displays show acceptable image using the gamma portion
  • A 600-nit and 2,000-nit display both show 100% signal at their respective peaks
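The HLG OETF defined in ITU-R BT.2100 is compact enough to sketch directly:

```python
import math

# HLG constants from ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e: float) -> float:
    """Normalised linear scene light (0..1) -> HLG signal (0..1)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)              # square-root segment for lower tones
    return A * math.log(12 * e - B) + C      # logarithmic segment for highlights

print(round(hlg_oetf(1 / 12), 3))  # 0.5 -> lower scene range fills half the signal
print(round(hlg_oetf(1.0), 3))     # ~1.0 at peak scene light
```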

Why This Distinction Matters

The PQ scene-referred approach provides consistent absolute brightness across displays until they reach their physical limits. When content exceeds a display's capabilities, the display either clips or applies tone mapping to compress the range. This approach works particularly well for mastered content where creators want precise control over the final look. Metadata can guide displays in handling out-of-range content optimally.

In contrast, the HLG display-referred approach automatically adapts to each display's capabilities without requiring tone mapping or metadata. Each display simply interprets the signal relative to its own peak brightness, making implementation simpler. This approach excels for broadcast content where the viewing devices are unknown and varied.

Transfer Function Comparison

Luminance output (nits) versus normalised input code value for the gamma, PQ and HLG curves.

In Practice — Why Photography Uses PQ with Gain Maps

HDR photography faces a unique challenge: images must look stunning across an enormous range of display capabilities—from 400-nit laptops to 1,600-nit smartphones - while adapting to varying ambient lighting conditions. This is why modern HDR photo formats (JPEG with gain maps, HEIF with Adaptive HDR) combine an SDR base image with a PQ-encoded gain map rather than using pure PQ or HLG encoding.

Pure PQ encoding would require every viewing device to tone map from absolute values, leading to inconsistent results as each manufacturer's tone mapping algorithm differs. A sunset mastered at 4,000 nits might be beautifully compressed on one phone but harshly clipped on another. Meanwhile, HLG's relative approach would lose the precise tonal relationships that make HDR photos compelling—shadow details and highlight gradations would vary depending on each display's peak brightness.

The gain map approach using PQ provides the best of both worlds: PQ's perceptually uniform encoding ensures consistent tonal relationships in the HDR reconstruction, while the gain map architecture allows graceful adaptation to display capabilities. The SDR base image guarantees backward compatibility, and the PQ-encoded gain map precisely specifies how to extend into HDR ranges when the display headroom is available. This is why Apple's Adaptive HDR, Google's Ultra HDR, and Adobe's Gain Map implementations all rely on PQ encoding—it provides a mathematically rigorous, perceptually optimised reference for reconstructing HDR images across the diverse ecosystem of modern displays.

Why Not HLG for Still Photography?

While HLG's display-adaptive design seems appealing—automatically adjusting to each screen's capabilities without explicit tone mapping—this adaptiveness proves problematic for still photography. HLG's system gamma changes with display peak brightness, meaning the same image has different mid-tone rendering on every screen. A portrait edited on a 1,000-nit display will have noticeably different contrast when viewed on a 4,000-nit display, not just in the highlights but throughout the tonal range.

This variability makes HLG unsuitable for photography workflows where precise tonal relationships are paramount. Photographers need predictable results: when adjusting shadow detail or highlight rolloff, they must know how these edits will appear across different displays. HLG's relative encoding provides no fixed reference point for editing decisions. Additionally, HLG's bit allocation—optimised for video streaming efficiency—is less perceptually uniform than PQ across the extreme luminance ranges common in still photography, where a single frame might contain both deep shadows and specular highlights that would typically be spread across many frames in video content.

The gain map approach using PQ encoding emerged as the superior solution, providing display adaptation through an explicit, controlled mechanism rather than HLG's implicit system gamma variations.

4.3 Understanding Colour Spaces and Gamuts

A colour space defines how colours are represented by specifying the chromaticity coordinates of the red, green, and blue primaries, as well as a reference white point. Different colour spaces are characterised by different sets of primaries; wider-spaced primaries enclose a larger portion of the visible spectrum, enabling more accurate and realistic reproduction of saturated colours.

The colour gamut, by contrast, describes the actual range of colours a device (such as a monitor, projector, or printer) can reproduce within that colour space. In other words, the gamut is the practical output of colour reproduction, limited by the hardware.

Wide-gamut colour spaces allow more saturated and lifelike colours to be represented, but encoding them smoothly usually requires higher bit depth to avoid visible banding.

CIE 1931 xy Chromaticity Diagram

Colour space gamut comparison — showing coverage of visible spectrum

sRGB / Rec.709 (~35%)
Adobe RGB (≈52%)
Display P3 (≈54%)
Rec.2020 (≈76%)

4.4 Bit Depth and Banding

Bit depth determines how finely the luminance and colour values are quantised. An 8‑bit channel can represent 256 levels, producing 16.7 million colours (256³). A 10‑bit channel provides 1,024 levels, yielding 1.07 billion colours. The difference is crucial: 8‑bit images often show banding in smooth gradients in HDR workflows. HDR standards therefore require at least 10‑bit encoding.
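The level and colour counts follow directly from the bit depth:

```python
def levels(bits: int) -> int:
    """Discrete levels per channel at a given bit depth."""
    return 2 ** bits

def colours(bits: int) -> int:
    """Total RGB colours: per-channel levels cubed."""
    return levels(bits) ** 3

print(levels(8), colours(8))    # 256 levels -> 16,777,216 colours
print(levels(10), colours(10))  # 1,024 levels -> 1,073,741,824 colours
```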

Marketing Warning: Some display makers market 8‑bit panels as "HDR," but this is misleading; true HDR needs 10‑bit to avoid banding. 8‑bit is adequate for SDR's limited dynamic range, but it lacks enough steps to smoothly span the brighter highlights of HDR.

What makes HDR compelling is not simply more hues but brighter whites and deeper blacks. Wide‑gamut spaces like DCI‑P3 provide about 25–30% more colour than sRGB and often use 10‑bit encoding, but perceived real‑world realism comes largely from higher brightness and contrast. SDR displays peak at roughly 100 nits with about 1,200:1 contrast; HDR displays in living‑room TVs offer 500–1,200 nits and modern smartphones and tablets exceed 1,000 nits at sustained brightness, achieving contrast ratios of 1,000,000:1 or more. These dramatic brightness differences make highlights sparkle, gradients smooth and shadows inky, conveying realism even when colour gamut improvements are subtle.

For professional HDR still photography, adopt a workflow that uses 10‑bit (or higher) per channel from capture to export. Higher bit depths like 12‑bit, 16‑bit or 32-bit float provide headroom for editing but may not be supported on all displays or file formats.

Bit Depth & Banding Visualization

Compare gradient smoothness across bit depths — notice banding in lower depths

1-bit: 2 levels
2-bit: 4 levels
3-bit: 8 levels
4-bit: 16 levels
6-bit: 64 levels
8-bit (SDR): 256 levels
10-bit (HDR): 1,024 levels

Finally, remember that HDR photos are increasingly viewed on consumer devices, with modern smartphones leading the way. Their bright, wide-gamut displays are pushing HDR into the mainstream, making advanced image reproduction available to millions of users every day. This shift represents an opportunity: by mastering in wide gamuts and high bit depths, your images can take full advantage of these capabilities and stand out on the very devices driving HDR adoption. Always preview your exports on the kinds of displays your audience uses to optimise the experience.

Colour Space Reference Table

Coverage percentages are calculated from the triangular area within the CIE 1931 xy chromaticity diagram, the standard reference method for comparing colour space gamuts.

Colour Space | Coverage of Visible Colours (CIE 1931 xy) | Typical EOTFs | Notes
sRGB / Rec.709 | ~35% | sRGB TRC (piece-wise, ~2.2); BT.1886 (γ≈2.4) for SDR TV | Web/SDR standard; Rec.709 shares (essentially) the same primaries; BT.1886 is the reference SDR-TV EOTF.
Adobe RGB (1998) | ≈52.1% | Gamma ≈2.2 | Photography/print workflows; D65 white.
Display-P3 | ≈53.6% | sRGB TRC (SDR on Apple/web) | Primaries match DCI-P3; D65 white + sRGB tone curve.
DCI‑P3 (Cinema) | ≈53.6% | Gamma 2.6 (digital cinema) | Theatrical mastering; ~6300 K "DCI white."
P3-D65 (HDR, consumer) | ≈53.6% | PQ (ST 2084) commonly; sometimes HLG | Same primaries as P3 but D65 white; some deliverables (e.g., Netflix) use P3-D65 + PQ.
Rec.2020 (BT.2020) | ≈75.8% | PQ or HLG (per Rec.2100); SDR can use BT.1886 | The reference gamut for HDR video and stills; future‑proof but under‑realised by current displays (which reach 60–80% coverage).
ProPhoto RGB | (not meaningful as a % of CIE 1931 xy) | Gamma 1.8 (encoding); often linear in RAW pipelines | Suitable for RAW editing but must be converted to deliverable formats.
05

HDR Displays and Standards

5.1 What Makes a Convincing HDR Display?

For an HDR image to look spectacular, the display must have sufficient brightness, contrast and colour depth. A convincing HDR display should deliver at least 1000 nits of sustained brightness, 10‑bit colour and a 1,000,000:1 contrast ratio so that bright highlights and deep shadows can be reproduced simultaneously. SDR monitors, by comparison, peak at around 100 nits and 1200:1 contrast.

Today's high‑end consumer devices meet or exceed these thresholds: Apple's MacBook Pro (Liquid Retina XDR) maintains 1,000 nits sustained brightness, 1,600 nits peak and a 1,000,000:1 contrast ratio; the iPad Pro (2024) reaches 1,000 nits full‑screen with 1,600 nits peak and a 2,000,000:1 contrast ratio. Even consumer smartphones now offer impressive HDR viewing. Apple's iPhone Pro models, for example, deliver typical peak brightness around 1,600 nits and reach up to ~2,000 nits in HDR and sunlight conditions — more than enough to deliver a compelling HDR experience.

✨

Minimum HDR Display Requirements

☀️
Brightness: ≥ 1,000 nits sustained

Higher peaks allow specular highlights to truly shine.

🌑
Contrast Ratio: ≥ 1,000,000:1

Maintains deep blacks. OLED panels achieve near‑infinite contrast because they turn pixels off; mini‑LED LCDs require many local‑dimming zones to approach this level.

🔢
Bit Depth: 10‑bit or higher per channel

Avoids banding and reproduces smooth gradients.

5.2 Display Standards and Certification

The DisplayHDR programme by VESA defines performance tiers for HDR monitors (400, 600, 1000, 1400, and True Black variants). Each tier specifies minimum peak brightness, colour gamut and contrast ratio. As a rule, photographers should choose monitors certified at DisplayHDR 1000 or above for meaningful HDR editing, or opt for the Apple ecosystem with XDR displays. SDR monitors or low‑tier HDR‑400 displays lack the brightness and contrast needed for accurate HDR representation.

5.3 Colour Gamut Coverage

The extent of a monitor's gamut determines how many colours it can show. To fully exploit HDR stills, aim for displays that cover at least P3; ideally, they should target Rec.2020 even though current panels only reach 60–80% of this space. High‑end monitors that exceed 90% P3 and support 10‑bit are well‑suited for HDR editing.

06

HDR File Formats and Metadata

With HDR still an emerging image technology, many formats are in active development, and support for each format varies across platforms, devices and operating systems. Sharing HDR images is a particular challenge: most images are reprocessed when shared over WhatsApp, iMessage, iCloud Shared Albums, social media platforms, and so on, which strips the gain map channels and metadata the image needs and poses a significant compatibility issue.

Below is an overview of the most common image formats that either use gain map technology or are encoded directly in PQ.

6.1 JPEG with Gain Maps (ISO 21496‑1)

Standard JPEG is limited to 8‑bit and SDR. Gain map technology extends JPEG by storing a base image plus a secondary gain map and metadata. The gain map encodes multiplicative scaling factors that reconstruct a wider dynamic range; SDR devices read only the base image, while HDR‑aware devices apply the gain map. Adobe explains that gain maps allow dynamic adaptation to display capabilities and preserve a consistent artistic vision across devices.

An international standard, ISO 21496‑1, was ratified in 2025; it harmonises the gain map metadata used by Apple, Google (Ultra HDR) and Adobe. The standard allows the gain map to be compressed relative to the base image: ISO 21496‑1 supports downscaling the gain map and even storing it as a single channel at a quarter of the base image's resolution. JPEG + gain map files are backward compatible and increasingly supported by social platforms and by browsers such as Safari (macOS 26 onwards) and Chrome. The latest camera models, such as the Hasselblad X2D II 100C, can shoot Ultra HDR in camera.

Gain map technology enables mobile displays to adaptively adjust HDR headroom in real-time. The gain map provides scaling factors that allow the display to dynamically allocate its available brightness range—whether that's 600 nits or 1,600 nits—to best reproduce the image based on current viewing conditions. This single-file solution ensures maximum compatibility: the same image file automatically adapts from a dim laptop screen to a bright smartphone in sunlight, with each display using its capabilities optimally rather than requiring device-specific versions or fixed tone mapping curves.
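The reconstruction idea can be sketched with a deliberately simplified model (the scaling-weight formula below is an illustration, not the exact ISO 21496‑1 math, and all numbers are assumptions):

```python
def reconstruct(sdr, gain_log2, headroom_stops, map_max_stops=4.0):
    """Simplified gain-map reconstruction sketch.

    sdr:            SDR base pixel, linear 0..1
    gain_log2:      per-pixel gain from the map, in stops (log2 units)
    headroom_stops: stops above SDR white the display can show right now
    map_max_stops:  the gain map's full range at mastering time (assumed)
    """
    # Scale the boost to the display's currently available headroom:
    weight = min(headroom_stops / map_max_stops, 1.0)
    return sdr * 2 ** (gain_log2 * weight)

sdr_pixel = 0.8   # a bright SDR pixel
gain = 3.0        # mastered 3 stops above its SDR value
print(reconstruct(sdr_pixel, gain, headroom_stops=4.0))  # full boost: 0.8 * 8 = 6.4
print(reconstruct(sdr_pixel, gain, headroom_stops=1.0))  # dim display: partial boost
```

The same file thus yields a full-strength HDR rendition when headroom is plentiful and a gracefully reduced one when it is not, with the SDR base as the floor.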

6.2 HEIF/HEIC

The High‑Efficiency Image File (HEIF) container, commonly stored as HEIC, supports 10‑bit and 12‑bit colour depth and can embed PQ or HLG transfer functions. New cameras such as the Hasselblad X2D II 100C can capture true HDR files in HEIF in camera. HEIF files have a similar file size to JPEG but preserve more tonal range and are widely supported on smartphones and modern operating systems.

Apple has also introduced Adaptive HDR for photos: HEIF files captured on recent iPhones include a gain map and metadata that allow the image to adaptively scale highlights depending on the display's headroom. This ensures that the same HEIF looks natural on a 600‑nit phone and spectacular on a 1,600‑nit tablet. HEIF is also widely supported on Threads and Instagram when posting HDR images.

6.3 AVIF

AVIF (AV1 Image File Format) is derived from the AV1 video codec and offers significantly higher compression efficiency than JPEG. It supports 8-, 10-, 12- and even 16‑bit colour depths and can encode HDR content natively. AVIF is open and royalty‑free and is supported by major browsers. Because of its efficient compression, AVIF images can be 40–90% smaller than equivalent JPEGs. AVIF HDR images do not use gain-map technology; they are encoded directly in PQ.

6.4 JPEG XL (JXL)

JPEG XL is a modern format designed to replace legacy JPEG. According to the JPEG XL specification, it supports ultra‑high‑resolution images (up to 1 terapixel), sample precision up to 32 bits for HDR content, multiple channels (including alpha, depth and thermal data) and lossless or lossy compression. JPEG XL also has built‑in support for wide colour gamuts and high dynamic range, including Rec. 2100 colour primaries with PQ or HLG transfer functions. Although browser support is still limited, JPEG XL is royalty‑free and extensible and may become important for archival and professional workflows.

6.5 RAW

Raw formats (e.g., CR3, NEF, ARW, DNG) store unprocessed sensor data at 12‑16 bit precision and capture the full dynamic range of the sensor. No colour space is assigned until conversion. When creating HDR stills, process the RAW file into an HDR format (e.g., HEIF PQ or JPEG with gain map) using software that preserves 10‑bit or higher data.

6.6 TIFF and Floating‑Point Formats

The Tagged Image File Format (TIFF) is a flexible container that can store images at 8‑, 16‑ or 32‑bit precision. 32‑bit floating‑point TIFFs are used in professional workflows to hold scene‑linear data or tone‑mapped HDR results. Unlike integer formats, a 32‑bit float can represent extremely bright values above 1.0, making it suitable for storing intermediate HDR composites and exporting to other applications. Because TIFF files are large and lack adaptive metadata, the format is best reserved for archiving and interchange rather than web delivery. Because of its high precision, it is also used as the input format when preparing HDR HEIF files via ShareHDR.

07

Capturing HDR Stills

7.1 Which Camera Should I Choose?

Most modern mirrorless and DSLR cameras from the past 5 years capture sufficient dynamic range (12-15 stops) for HDR photography when shooting RAW. The sensor's bit depth matters more than megapixels—look for cameras offering 14-bit RAW files rather than 12-bit, as this provides finer tonal gradations crucial for smooth HDR transitions.

Full-frame sensors generally deliver better dynamic range and lower noise than crop sensors, making them preferable for HDR work. Among current options, cameras like the Canon R5, Sony α7R V, Nikon Z9, and medium format Hasselblad X2D II 100C excel at capturing wide dynamic range. The Hasselblad X2D II 100C stands out as the first camera to shoot native HDR HEIF files with a 1,400-nit HDR display for accurate field preview—though this comes at a premium price.

For most photographers, any recent full-frame mirrorless camera paired with proper RAW processing will produce excellent HDR results.

Smartphones like the iPhone 15/16 Pro and Google Pixel 8/9 Pro also capture compelling HDR images using computational photography, making them viable for casual HDR photography when paired with apps that export gain map JPEGs or HEIF files.

The key is less about having the "perfect" camera and more about understanding how to expose properly for HDR—protecting highlights while capturing shadow detail—and processing the files through an HDR-aware workflow as opposed to shooting lossy formats like JPEG in-camera.

7.2 Exposure Strategy and Dynamic Range

Although modern sensors record 12–15 stops of dynamic range, high‑contrast scenes such as sunsets or interiors with bright windows may exceed this. Exposure bracketing is a proven technique: shoot multiple images at different exposure levels and merge them later. A common practice is to bracket around the base exposure using ±2 EV or, for more complex scenes, ±2 and ±4 EV. Photography Life recommends a five‑frame bracket of –2, –1, 0, +1, +2 stops to capture details across the range. Always shoot in RAW to maximise dynamic range and tonal precision.
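Because each EV step doubles or halves the exposure, the bracket's shutter speeds follow directly from the base exposure. A small sketch (the base shutter speed is illustrative):

```python
def bracket_shutter_speeds(base, offsets_ev):
    """Shutter times for an exposure bracket: +1 EV doubles the exposure time."""
    return [base * 2 ** ev for ev in offsets_ev]

# Five-frame bracket around a 1/125 s base exposure (-2..+2 EV):
base = 1 / 125
for ev, t in zip([-2, -1, 0, 1, 2], bracket_shutter_speeds(base, [-2, -1, 0, 1, 2])):
    print(f"{ev:+d} EV -> 1/{round(1 / t)} s")
```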

When merging exposures, ensure your tripod is steady and avoid moving subjects. Some modern cameras and smartphones can capture "HDR" photos in one shot by using multi‑frame processing; however, bracketing offers more control and better noise reduction in challenging scenes.

Critical Rule: The most important rule when shooting HDR images is to never over‑expose your highlights. Clipped highlights are extremely distracting in HDR and often cannot be salvaged in post; the larger dynamic range reduces the ability to push exposure without revealing noise and lens flare. While blown highlights can appear "dreamy" in SDR, they stand out like glaring holes on an HDR display, completely ruining the viewing experience. Protect the brightest parts of your scene by exposing for the highlights and letting shadows fall where they may, or by bracketing to cover both extremes. Many cameras can display a warning overlay while shooting to flag overexposed areas.
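A hypothetical highlight-warning check is simple to express: count the fraction of pixel values at or near clipping (the threshold and sample values below are illustrative):

```python
def clipped_fraction(pixels, threshold=0.99):
    """Fraction of pixel values at or above the clipping threshold (0..1 scale)."""
    return sum(p >= threshold for p in pixels) / len(pixels)

# A warning overlay might trigger above, say, 0.5% clipped pixels:
frame = [0.2, 0.5, 0.995, 1.0, 0.7, 1.0, 0.3, 0.6]
print(clipped_fraction(frame))  # 0.375 -> far too many blown highlights
```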

7.3 Camera Settings and Bit Depth

Set your camera to record RAW. Use your camera's base ISO to maximise dynamic range, and adjust shutter speed or aperture to control exposure. Avoid clipping highlights; once blown, highlight detail cannot be recovered. While blown highlights might be a stylistic choice in SDR photography, they simply do not work in HDR.

Since dynamic range is fundamentally limited by sensor hardware, proper exposure technique matters more than chasing marginally higher bit depths. While 14-bit RAW files provide more tonal precision than 12-bit, the critical factor is protecting highlights—no amount of bit depth can recover blown detail. Most cameras record RAW at 12 or 14 bits, which is sufficient for HDR processing when exposed correctly.

The Hasselblad X2D II 100C stands apart as currently the only dedicated stills camera that captures native HDR HEIF files in-camera using PQ encoding. More remarkably, it's the sole camera featuring a built-in 1,400-nit HDR display that shows the actual HDR image while shooting—not just a tone-mapped SDR preview. This real-time HDR preview is revolutionary for field work, allowing photographers to judge highlight detail and tonal relationships accurately before processing. While other cameras may add HDR HEIF capture in future firmware updates, for now the Hasselblad offers a uniquely integrated HDR workflow from capture through preview, albeit at a premium price point.

08

Editing HDR Images

HDR editing requires applications that understand wide‑gamut colour and 12–14 bit RAW pipelines, running on displays capable of showing HDR content. Not all photo editors are HDR‑aware: Capture One, Affinity Photo and most versions of Adobe Photoshop still treat HDR images as SDR and will not display them correctly. In contrast, Lightroom (Classic and Cloud) and Apple's Photomator fully support importing and exporting HDR files. These apps let you toggle HDR editing on and off and provide a useful visualisation of the extended dynamic range.

Working in a wide‑gamut colour space such as Rec. 2020 or DCI‑P3 preserves saturated colours and future‑proofs your image for more capable displays.

Important: Adaptive Display Headroom

Note that HDR editing software on Macs with XDR displays, or on iPads/iPhones with HDR‑capable screens, adjusts the available headroom according to screen brightness and ambient lighting. Lightroom can currently display up to 4 stops above SDR, but this headroom shrinks as the device's screen brightness increases. While HDR highlights then appear less pronounced, no detail is lost: the file still contains the full 4 additional stops. They simply cannot be shown, because a display running at full brightness has less headroom left for them.
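The relationship between headroom and brightness is logarithmic: headroom in stops is the base-2 log of the ratio between the panel's peak luminance and the level it renders SDR white at. A minimal sketch with illustrative numbers (the nit values below are assumptions for demonstration, not specifications of any particular display):

```python
import math

def headroom_stops(peak_nits: float, sdr_white_nits: float) -> float:
    """HDR headroom in stops above SDR reference white."""
    return math.log2(peak_nits / sdr_white_nits)

# A hypothetical 1,600-nit panel rendering SDR white at 100 nits:
print(round(headroom_stops(1600, 100), 2))   # 4.0 stops of headroom
# Raising overall screen brightness so SDR white sits at 500 nits:
print(round(headroom_stops(1600, 500), 2))   # ~1.68 stops left
```

This is why the same file shows stronger highlight separation in a dim room than at full brightness outdoors.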

Export HDR images as PQ-encoded TIFF or AVIF, as HEIF with adaptive gain-map options, or as gain-map JPEG, and be mindful of where the image will be shown.

09

Sharing HDR Images

For web and social media, HDR stills attract attention because they appear more lifelike. Smartphones already support HDR capture and display, and platforms such as Instagram and YouTube are beginning to honour HDR uploads. As operating systems and browsers incorporate HDR colour spaces and the ISO gain map standard, HDR still images are expected to become the default for premium content.

JPG + Gain Map

Great for web distribution as well as social media, with broad compatibility across modern browsers.
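The gain-map idea can be sketched in a few lines. Conceptually (simplified from the ISO gain map approach; real maps carry per-channel metadata, offsets and min/max headroom), each pixel's SDR value is boosted by a per-pixel gain, scaled to however much headroom the viewing display actually has:

```python
# Conceptual sketch only: how one gain-map file adapts to different displays.
def apply_gain(sdr_value: float, gain: float, display_headroom_stops: float) -> float:
    """Boost an SDR pixel by its gain-map value, limited by display headroom."""
    return sdr_value * 2 ** (gain * display_headroom_stops)

sdr_pixel, gain = 0.8, 1.0               # a highlight flagged for full boost
print(apply_gain(sdr_pixel, gain, 0))    # 0.8 — SDR display: base image unchanged
print(apply_gain(sdr_pixel, gain, 2))    # 3.2 — display with 2 stops of headroom
```

This graceful fallback, with the base SDR image shown untouched on displays with no headroom, is what makes gain-map JPEG so broadly compatible.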

HEIF

When encoded correctly, HEIF survives Apple's internal image pipelines when sharing across iMessage and iCloud Shared Albums. It also works well on social media platforms that support HDR, namely Threads and Instagram.

10

Limitations and Challenges

Device Variability

Not all HDR‑labelled devices meet the minimum requirements. Many consumer screens labelled "HDR‑ready" provide only 400 nits and 8‑bit panels, which deliver little improvement over SDR. Always verify brightness, contrast ratio and bit depth.
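A quick sanity check against the thresholds this guide uses can be expressed as a small function (the example spec values are hypothetical):

```python
# Check a display's claimed specs against the criteria used in this guide:
# >= 1,000 nits peak, >= 1,000,000:1 contrast, >= 10-bit panel.
def meets_hdr_criteria(peak_nits: int, contrast_ratio: int, bit_depth: int) -> bool:
    return peak_nits >= 1000 and contrast_ratio >= 1_000_000 and bit_depth >= 10

print(meets_hdr_criteria(1600, 1_000_000, 10))  # True  — genuine HDR panel
print(meets_hdr_criteria(400, 3000, 8))         # False — "HDR-ready" in name only
```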

File Size and Storage

HDR files (especially RAW and 32‑bit formats) are larger than SDR JPEGs. Gain‑map JPEGs increase file size modestly but still require more storage and bandwidth than conventional images.

Editing Complexity

HDR editing demands knowledge of colourspaces and careful colour management. Work in a wide‑gamut space (Rec.2020 or P3) to avoid clipping colours. Understand how different displays map HDR tones. Poorly managed HDR can result in oversaturated colours, crushed blacks or blown highlights.

Metadata Preservation

Many social media sites and messaging platforms strip embedded metadata when images are uploaded, including copyright information and the gain‑map data required for HDR. To ensure your HDR photos display correctly and retain attribution, share them via services that support HDR and maintain metadata, or provide a direct download link.

Sharing Challenges

Sharing HDR images remains challenging: results are inconsistent, metadata and gain maps can be lost, and EOTFs are sometimes misinterpreted. All formats are under active development and compatibility is improving across the board, so staying up to date with the newest developments matters. At present, HEIF and gain-map JPEG prove the most compatible.

Environmental Limitations

HDR displays are most impressive in controlled lighting. In bright viewing conditions (e.g., sunlight), even 1,000 nits may not appear dazzling. Judge exposure with the histogram in your editing software rather than by eye alone.

Platform Warnings: Keep in mind that scheduling platforms such as Sprout strip all HDR metadata. The same happens when uploading HDR images to Instagram or Threads from a desktop browser rather than a mobile device. Wix and Squarespace also re-process uploaded images, resulting in loss of HDR metadata.

11

Best Practices for HDR Photography

Capture clean, well‑exposed images – Use RAW format, bracket exposures for high‑contrast scenes and avoid blown highlights.

Adopt a high bit‑depth workflow – Edit in 16‑bit ProPhoto or Rec.2020 and export using a 10‑bit or higher format (TIFF, HEIF, AVIF, gain‑map JPEG).

Use wide‑gamut, high‑brightness monitors – Ensure your monitor meets the brightness (≥1,000 nits), contrast (≥1,000,000:1) and 10‑bit criteria for accurate editing.

Consider delivery – Think about where and how the image will be shared.

Test on real devices – Check the exported image at different brightness levels on your mobile device.

Stay informed – Keep abreast of new standards (e.g., updates to Rec.2100, DisplayHDR tiers) and software updates that improve HDR support.

12

Conclusion

HDR still photography represents a significant leap in image realism. By combining high‑bit‑depth encoding, wide‑gamut colour spaces and advanced transfer functions with displays capable of 1,000 nits sustained brightness, million‑to‑one contrast and 10‑bit colour, photographers can present scenes much as the eye perceived them at capture.

Although HDR workflows require more care in capture, editing and delivery, the resulting images offer luminous highlights, deep shadows and rich colour gradations that stand out on modern screens.


The Future is HDR

As HDR support becomes ubiquitous across devices and the web, embracing these techniques will position photographers at the forefront of visual storytelling.

HDR Photography Guide

A comprehensive resource for modern image creation


Florian Thess

hello@florianthess.com