The Rise of Lossless Media: A Compression Tale

By Tom Kehn, VP, Solutions Consulting
Introduction

Compression has been crucial in managing the storage and transmission of large media files. However, as technological advancements continue, the role of compression is evolving. This article delves into the history of media compression, differentiates its role in post-production and broadcast consumption, and explores the future of lossless media. We also discuss the evolution of bandwidth, streaming platforms, and wireless technologies driving this transformation. As we move towards a future where terabytes per second of data transfer speeds and petabytes of storage become commonplace, lossy compression may become a relic of the past, giving way to a new era of lossless, high-fidelity media.

Fun Fact: Claude Shannon, known as the father of information theory, developed the first theoretical model of data compression in 1948. His groundbreaking work laid the foundation for all modern data compression techniques.
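
Shannon's source coding theorem still sets the floor for every lossless codec: on average, no encoder can represent a source in fewer bits per symbol than its entropy. A minimal sketch of that calculation in Python (the sample string is purely illustrative):

    from collections import Counter
    from math import log2

    def shannon_entropy(data: str) -> float:
        """Average bits per symbol that any lossless code needs for this source."""
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    text = "compression is about removing statistical redundancy"
    print(f"~{shannon_entropy(text):.2f} bits/symbol vs. 8 bits/symbol as plain ASCII")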

The Genesis of Media Compression

Compression techniques were developed to address the limitations of early digital storage and transmission technologies, enabling the efficient handling of large media files.

  • Audio Compression: The MP3 format, introduced in the early 1990s, significantly reduced audio file sizes by removing inaudible frequencies, revolutionizing music distribution and storage.
  • Image Compression: JPEG compression, developed around the same time, reduced image file sizes by exploiting human visual limitations, impacting digital photography and web development.
  • Video Compression: Standards like MPEG-1, MPEG-2, and H.264 were created to reduce video data requirements while maintaining visual quality, facilitating efficient video streaming and storage.
  • Editing Formats Compression: Early editing systems like CineWave and Media 100 used proprietary codecs to enable real-time video editing and playback, providing a foundation for the development of modern high-efficiency editing formats. Later, formats like Avid DNxHD were developed to balance high quality and manageable file sizes, allowing for smoother editing workflows by reducing the strain on storage and processing power. Following this, codecs such as Apple ProRes emerged, further enhancing editing efficiency while preserving much of the original quality. These advancements set the stage for the use of proxy workflows, where lower-resolution copies of high-resolution files are used during the editing process to improve performance and reduce system demands. A minimal proxy-generation sketch follows this list.
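
As a rough illustration of that proxy step, the sketch below drives FFmpeg from Python (assuming a build with the prores_ks encoder is on the PATH; the file names are placeholders) to create a quarter-resolution ProRes Proxy copy that stands in for the camera original during editing:

    import subprocess

    SOURCE = "camera_original.mov"          # hypothetical high-resolution master
    PROXY = "camera_original_proxy.mov"

    # Scale down and transcode to ProRes Proxy (profile 0) as a lightweight edit copy;
    # the full-quality master is relinked only for the final conform and grade.
    subprocess.run(
        [
            "ffmpeg", "-i", SOURCE,
            "-vf", "scale=iw/4:ih/4",       # quarter resolution
            "-c:v", "prores_ks", "-profile:v", "0",
            "-c:a", "copy",
            PROXY,
        ],
        check=True,
    )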

Honoring the Codec Pioneers

These early codecs and non-linear editing (NLE) systems, despite their limitations, were essential in the development of digital video technology. They enabled the first steps towards online video streaming, multimedia content distribution, and advanced video editing workflows. While many of these codecs and systems have since fallen out of use, they paved the way for the advanced compression technologies and editing capabilities we rely on today.

1970s

  • CMX 600 (1971): Developed by CMX Systems, the CMX 600 was one of the first computerized video editing systems. It stored footage on removable magnetic disk packs rather than tape, allowing random access and basic non-linear editing capabilities.

1980s

  • Ampex VideoFile (1982): One of the first commercial non-linear editing systems, VideoFile used digital storage for editing purposes, laying the groundwork for future NLE systems.
  • Lucasfilm EditDroid (1984): Developed by Lucasfilm, EditDroid used laserdiscs to store video footage, offering more flexibility than tape-based systems.
  • Cinepak (1989): One of the earliest video codecs, Cinepak was used extensively in the early days of digital video, particularly within Apple’s QuickTime and Microsoft’s Video for Windows platforms. It offered low compression efficiency but widespread compatibility.
  • Avid Media Composer (1989): One of the first widely adopted NLE systems, Avid Media Composer revolutionized video editing by allowing editors to manipulate digital video with great flexibility and precision.

1990s

  • Microsoft AVI Codecs (Early 1990s): The Audio Video Interleave (AVI) format supported a variety of codecs such as Intel Indeo, Cinepak, and Microsoft Video 1, enabling early digital video playback and editing.
  • QuickTime (1991): Apple’s multimedia framework included support for various codecs like Sorenson Video and Cinepak, becoming a popular format for video playback on both Mac and Windows platforms.
  • JPEG (1992): The JPEG standard for compressing still images reduced file sizes by exploiting human visual limitations, making it crucial for digital photography and web images.
  • MP3 (1993): The MPEG-1 Audio Layer III, or MP3, became the standard for audio compression, significantly reducing file sizes and revolutionizing music distribution.
  • Media 100 (1993): An early digital non-linear editing system, Media 100 used proprietary codecs to enable high-quality video editing and playback on standard desktop computers.
  • RealVideo (1997): Developed by RealNetworks, RealVideo was one of the first codecs designed specifically for streaming video over the internet. RealPlayer became popular for watching video clips online despite the relatively low quality compared to today’s standards.
  • DivX (1998): Initially based on a hacked Microsoft MPEG-4 Part 2 codec, DivX offered high-quality video at reduced file sizes, becoming popular for DVD-ripping and internet distribution.
  • Final Cut Pro (1999): Originally developed at Macromedia and acquired by Apple before its 1999 release, Final Cut Pro became a major player in the professional editing market, known for its user-friendly interface and powerful features.

2000s

  • VP3 (2000): Developed by On2 Technologies, VP3 was an early open-source video codec that evolved into VP6 and VP7, used in Adobe Flash video. VP3 laid the groundwork for the VP8 and VP9 codecs later used by Google.
  • Sorenson Video (Early 2000s): Used primarily in QuickTime files, Sorenson Video provided good quality at relatively low bitrates, facilitating early internet video streaming.
  • Xvid (2001): An open-source alternative to DivX, Xvid was based on the MPEG-4 Part 2 codec and gained popularity for its ability to compress video files without significant loss of quality.
  • H.264 (2003): Also known as AVC (Advanced Video Coding), H.264 became the standard for video compression, offering high-quality video at lower bitrates and being widely adopted for streaming, broadcasting, and Blu-ray discs.
  • Avid DNxHD (2004): Developed for high-definition video editing, DNxHD provided high quality and manageable file sizes, reducing the strain on storage and processing power.
  • Apple ProRes (2007): An intermediate codec developed by Apple, ProRes balanced high image quality with manageable file sizes, becoming a standard in professional video production.

2010s

  • VP8 (2010): Developed by On2 Technologies and open-sourced by Google after its acquisition of On2, VP8 was used in the WebM format for web video, offering a royalty-free alternative to H.264.
  • H.265/HEVC (2013): High Efficiency Video Coding (HEVC) provided improved compression efficiency over H.264, reducing bitrates by about 50% while maintaining the same quality. It was crucial for 4K video streaming and broadcasting.

Diverging Paths: Post-Production vs. Broadcast Consumption

The future of media compression can be divided into two distinct areas: post-production and broadcast consumption. Each has unique requirements and challenges as we move towards a world with less reliance on compression.

Post-Production: Towards Lossless Workflows

In the realm of post-production, the trend is unmistakably moving towards lossless and uncompressed media. This shift is driven by the pursuit of maintaining the highest possible quality throughout the editing process. Here’s why this evolution is taking place:

Quality Preservation: In post-production, maintaining the highest possible quality is paramount. Compression artifacts can interfere with editing, color grading, and special effects, ultimately compromising the final output. By working with uncompressed media, filmmakers and editors can ensure that the integrity of their footage is preserved from start to finish.

Storage Solutions: The rapid advancement in storage technology has made it feasible to handle vast amounts of lossless media. High-speed NVMe SSDs and large-capacity HDDs provide the necessary space and access speeds for handling these large files efficiently. Additionally, cloud storage solutions offer virtually unlimited space, further reducing the dependency on compression.

High-Resolution Content: The increasing demand for 4K, 8K, and even higher resolution content requires lossless files to preserve every detail and maintain dynamic range. As viewing standards continue to rise, the need for pristine, high-quality footage becomes even more critical.

Raw and Lossless Formats for Popular Cameras:
  • REDCODE RAW (2007): Used by RED cameras, REDCODE RAW offers high-quality, lossless or lightly compressed video suitable for post-production workflows, maintaining high dynamic range and color fidelity.
  • ARRIRAW (2010): The uncompressed, unencrypted format used by ARRI cameras, ARRIRAW provides maximum image quality and flexibility in post-production, capturing the full sensor data for precise color grading and effects work.
  • KineRAW (2012): Employed by Kinefinity cameras, KineRAW offers uncompressed or lightly compressed RAW video, ensuring high image quality and flexibility for color grading and visual effects.
  • DJI RAW (2015): Found in DJI’s professional aerial and handheld cameras, DJI RAW offers high-quality, uncompressed or lightly compressed video, capturing detailed image data for robust post-production workflows.
  • Sony X-OCN (eXtended Original Camera Negative) (2016): Used in Sony’s high-end cinema cameras, X-OCN offers high-quality, lightly compressed video, balancing file size and image quality for extended recording times and efficient post-production workflows.
  • Canon Cinema RAW Light (2017): A lightly compressed RAW format used in Canon’s cinema cameras, Cinema RAW Light balances quality and file size, capturing extensive image data for detailed post-production work.
  • Apple ProRes RAW (2018): Widely used in professional video production, Apple ProRes RAW combines high-quality video with efficient compression, compatible with various cameras and editing software. It allows for flexible adjustments in post-production.
  • Blackmagic RAW (BRAW) (2018): An efficient codec from Blackmagic Design, BRAW offers high-quality, lightly compressed video with flexible post-production options. It includes metadata for enhanced editing capabilities and preserves sensor data for high dynamic range.
  • ZRAW (2018): Used by Z CAM cameras, ZRAW is a lightly compressed RAW format that maintains high image quality and provides flexibility in post-production, allowing for extensive color correction and grading.
  • Panasonic V-RAW (2019): Utilized by Panasonic’s high-end cameras, V-RAW provides high-quality, lightly compressed footage, preserving the sensor’s dynamic range and color depth for detailed post-production adjustments.

These RAW and uncompressed formats are essential for professional video production, providing filmmakers with the flexibility and quality needed to achieve the best possible results in post-production. The move towards lossless workflows signifies a commitment to excellence and the pursuit of the highest visual standards in the industry.

Editing in RAW Format with NLEs

Modern NLE systems have advanced to support the editing of RAW formats, providing filmmakers and editors with unparalleled flexibility and control over their footage. NLEs such as Adobe Premiere Pro, Final Cut Pro, DaVinci Resolve, and Avid Media Composer are equipped to handle various RAW formats like REDCODE RAW, Apple ProRes RAW, ARRIRAW, Blackmagic RAW, and more. These systems enable real-time editing and color grading of RAW footage, allowing editors to leverage the full dynamic range and color depth captured by high-end cameras. By preserving the original sensor data, NLEs offer extensive post-production capabilities, including non-destructive adjustments to exposure, white balance, and other critical image parameters, ensuring the highest quality output for professional film and video projects.
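
Conceptually, those non-destructive adjustments are just parameters stored alongside the untouched source data and applied at render time. The sketch below is a simplified, hypothetical illustration of that idea (not any NLE's actual API), using a linear RGB frame as a stand-in for decoded RAW sensor data:

    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class GradeSettings:
        """Adjustments kept as metadata; the original pixels are never modified."""
        exposure_stops: float = 0.0            # +1.0 doubles brightness
        wb_gains: tuple = (1.0, 1.0, 1.0)      # per-channel white-balance multipliers

    def render(original: np.ndarray, grade: GradeSettings) -> np.ndarray:
        """Apply the grade to a copy of the linear RGB frame at render time."""
        out = original.astype(np.float32) * (2.0 ** grade.exposure_stops)
        out *= np.asarray(grade.wb_gains, dtype=np.float32)
        return np.clip(out, 0.0, 1.0)

    frame = np.random.rand(1080, 1920, 3).astype(np.float32)   # stand-in for decoded RAW
    graded = render(frame, GradeSettings(exposure_stops=0.5, wb_gains=(1.1, 1.0, 0.9)))
    # `frame` is untouched; changing GradeSettings later simply re-renders from the same source.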

Broadcast Consumption: The Push for Lossless Media

On the consumption side, the trend towards losslessly compressed media is gaining significant momentum, although the challenges here are different from those in post-production.

Bandwidth Expansion: The rollout of 5G and the expansion of fiber optic networks promise dramatically increased internet speeds. This advancement makes it feasible to stream high-quality, lossless media to end-users, reducing the need for traditional lossy compression techniques. With these higher speeds, consumers can enjoy pristine audio and video quality that was previously unattainable due to bandwidth limitations.

Streaming Platforms: Services like Apple Music, Amazon Music HD, and Tidal have been offering lossless audio streaming for some time, providing users with a higher quality listening experience. This trend is likely to extend to video streaming, with platforms like Netflix and Disney+ exploring ways to deliver losslessly compressed 4K and HDR content. As these services push the envelope, they will set new standards for media quality in the streaming industry.

Wireless Technologies: Advances in wireless technology, including Wi-Fi 6, Wi-Fi 7, and future iterations, will support higher data rates and more reliable connections. These improvements will facilitate the streaming of lossless media, making it more accessible to a broader audience. With these advancements, users can expect seamless streaming experiences with minimal buffering and superior quality, regardless of their location.

As the infrastructure for high-speed internet and advanced wireless technologies continues to grow, the consumption of losslessly compressed media will become more widespread. This shift not only enhances the user experience but also pushes the industry towards a new standard of quality, reflecting the full potential of modern digital media technologies.

Emerging Formats and Technologies

Several modern video codecs and technologies are emerging that offer significant improvements in compression efficiency and quality, with some poised to support lossless video capabilities. Additionally, advancements in storage and transmission technologies will facilitate the handling of large lossless media files.

Video Codecs

  • AV1 (AOMedia Video 1) – 2018: Developed by the Alliance for Open Media, AV1 is a royalty-free, open-source codec designed for internet video streaming. It offers superior compression efficiency compared to older codecs like H.264 and H.265/HEVC. Major companies like Google, Netflix, and Amazon are backing AV1, and Apple’s endorsement, adding AV1 hardware decoding to the iPhone 15 Pro (2023), is expected to accelerate its adoption. A simple encode-and-compare sketch follows this list.
  • Versatile Video Coding (VVC or H.266) – 2020: VVC aims to provide significant improvements in compression efficiency over its predecessor, HEVC. It can reduce bitrates by about 50% compared to HEVC while maintaining the same quality, which is particularly beneficial for 4K and 8K video streaming. VVC is starting to be integrated into new hardware and smart TVs, with broader adoption expected as more devices gain support.
  • Low Complexity Enhancement Video Coding (LCEVC) – 2020: LCEVC is an enhancement codec that works in conjunction with existing codecs like AVC, HEVC, VP9, and AV1 to improve compression efficiency and reduce computational load. It is designed to be lightweight, allowing it to run on devices without dedicated hardware support, making it suitable for mobile and browser-based applications.
  • Essential Video Coding (EVC) – 2020: EVC was developed with a focus on providing both a baseline profile that is license-free and a main profile that offers higher efficiency with some associated licensing costs. It aims to balance performance and cost, making it a flexible option for various use cases.
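
To make those efficiency claims concrete, one rough way to compare codecs is to encode the same short clip at the same target bitrate and then score the results with a perceptual metric. The sketch below assumes an FFmpeg build with the libx264, libx265, and libaom-av1 encoders available; the file names are placeholders:

    import subprocess

    SOURCE = "test_clip.mp4"   # hypothetical short reference clip

    # Encode the same source at the same target bitrate with three generations of codecs;
    # at a fixed bitrate the interesting difference is visual quality, not file size.
    for codec, suffix in [("libx264", "h264"), ("libx265", "hevc"), ("libaom-av1", "av1")]:
        subprocess.run(
            ["ffmpeg", "-y", "-i", SOURCE, "-c:v", codec, "-b:v", "2M", "-an",
             f"test_clip_{suffix}.mkv"],
            check=True,
        )
    # Each encode can then be scored against the source with a metric such as VMAF
    # (FFmpeg's libvmaf filter) to quantify how much quality each codec preserves at 2 Mbps.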

AI and Compression: AI is increasingly being used to develop smarter compression algorithms. For example, Google’s RAISR (Rapid and Accurate Image Super-Resolution) uses machine learning to upscale lower-resolution images on the receiving device, allowing smaller files to be transmitted while preserving perceived quality.

Storage and Transmission Technologies

  • Holographic Storage – 2030s (Projected): Innovations in holographic storage will revolutionize how we store large amounts of uncompressed data by providing high-density storage solutions. This technology uses laser beams to store data in three dimensions, offering significantly higher storage capacities.
  • DNA Data Storage – 2030s (Projected): DNA data storage offers a futuristic approach to storing massive amounts of data in a very compact form, potentially transforming how we archive uncompressed media. By encoding data into synthetic DNA, this technology promises unparalleled density and durability.
  • Quantum Internet – 2040s (Projected): On the transmission side, the quantum internet promises fundamentally new networking capabilities. While entanglement cannot carry classical data faster than light, quantum networking combined with continued advances in optical transmission could enable ultra-secure, extremely high-capacity links, facilitating the rapid transmission of large, uncompressed media files.
  • 5G and Beyond – 2020s and Beyond: The rollout of 5G and future wireless technologies will support higher data rates and more reliable connections, enabling seamless streaming of high-quality, lossless media. Future generations like 6G are expected to further enhance these capabilities, making real-time, high-fidelity media streaming ubiquitous.

These emerging formats and technologies are set to transform the landscape of media production, storage, and consumption, driving us towards a future where uncompressed and lossless media become the norm.

The Bandwidth Paradox: Rising Demand

Just as Moore’s Law predicts the doubling of transistors on a chip every two years, Nielsen’s Law of Internet Bandwidth states that high-end user connection speeds grow by 50% per year. As bandwidth increases, so too does the demand for new technologies that consume it. This phenomenon is often referred to as the “bandwidth paradox.” Despite advancements that provide higher speeds and greater capacity, emerging technologies continually push the limits of available bandwidth.
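
A quick projection shows how that growth compounds. The sketch below starts from an illustrative 1 Gbps high-end connection and applies Nielsen's 50% annual growth (both the starting point and the horizons are assumptions for illustration):

    start_gbps = 1.0        # assumed high-end consumer connection today
    growth = 1.5            # Nielsen's Law: +50% per year

    for years in (5, 10, 15, 20):
        projected = start_gbps * growth ** years
        print(f"In {years:2d} years: ~{projected:,.0f} Gbps")
    # After roughly 20 years of 50% annual growth, 1 Gbps becomes about 3,300 Gbps (3.3 Tbps),
    # the scale at which streaming lossless video stops looking exotic.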

Virtual Reality (VR) and Augmented Reality (AR)

  • VR and AR Technologies: Virtual reality and augmented reality are at the forefront of the next generation of immersive experiences. These technologies require high-resolution, low-latency streaming to create convincing and responsive environments. For VR, a fully immersive experience typically requires video resolutions of at least 4K per eye and frame rates of 90 to 120 frames per second. AR, which overlays digital content onto the real world, also demands significant bandwidth for real-time processing and high-quality visuals.
  • Bandwidth Requirements: Current VR and AR applications already require substantial bandwidth, and as these technologies evolve, the demand will only increase. Advanced VR and AR setups may require 50-100 Mbps of sustained bandwidth to ensure smooth, lag-free experiences. This requirement can strain even the most robust networks, especially when multiple users are accessing the same services simultaneously. A rough calculation of the underlying data rates follows this list.
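
A back-of-the-envelope calculation makes clear why those figures still depend on heavy compression. Taking the VR target above (4K per eye, 90 fps, 24-bit color; all values illustrative):

    width, height = 3840, 2160          # 4K per eye
    eyes, fps, bits_per_pixel = 2, 90, 24

    raw_bps = width * height * eyes * fps * bits_per_pixel
    print(f"Uncompressed: ~{raw_bps / 1e9:.1f} Gbps")                      # about 35.8 Gbps
    print(f"Compression vs. a 100 Mbps stream: ~{raw_bps / 100e6:.0f}:1")  # about 358:1
    # Even a generous 100 Mbps VR stream implies compression on the order of 350:1 today,
    # which is why the bandwidth advances discussed in this article matter so much.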

Advanced Immersive Recording Devices

  • 360-Degree Cameras and Volumetric Capture: Modern recording devices like 360-degree cameras and volumetric capture systems create highly detailed and interactive content. These devices capture vast amounts of data to produce immersive videos and holograms, which can be used for everything from virtual tours to interactive educational content.
  • Data Intensity: The data generated by these devices is immense. For example, a single minute of 360-degree 4K video can consume several gigabytes of storage. When this content is streamed, it requires equally substantial bandwidth to ensure that the end-user experience is seamless and high quality. The short calculation after this list shows where those gigabytes come from.
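
As an illustration, the sketch below converts a few representative in-camera recording bitrates (assumed figures) into storage consumed per minute of footage:

    for mbps in (100, 200, 300):        # assumed in-camera recording bitrates
        gb_per_minute = mbps * 1e6 * 60 / 8 / 1e9
        print(f"{mbps} Mbps -> {gb_per_minute:.2f} GB per minute of footage")
    # 100 Mbps already produces 0.75 GB per minute; at 300 Mbps a single minute exceeds 2 GB,
    # and stereoscopic or higher-resolution capture pushes the totals into "several gigabytes."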

Cloud Gaming and Interactive Streaming

  • Cloud Gaming Services: Services like NVIDIA GeForce Now and Microsoft’s Xbox Cloud Gaming (formerly Project xCloud), along with the now-discontinued Google Stadia, deliver high-quality gaming experiences over the internet. These services render games on powerful cloud servers and stream the video output to users’ devices.
  • Bandwidth Requirements: Cloud gaming requires low latency and high bandwidth to deliver responsive and immersive gameplay. For a 1080p stream at 60 frames per second, the required bandwidth can range from 15 to 25 Mbps. As 4K gaming becomes more prevalent, the bandwidth requirements can skyrocket to 35 Mbps or more.

The Growing Demand for High-Quality Streaming

  • 4K and 8K Streaming: As consumer demand for high-definition content grows, streaming services like Netflix, Amazon Prime Video, and Disney+ are shifting towards 4K and even 8K video resolutions. While 4K streaming requires approximately 25 Mbps, 8K streaming can demand upwards of 100 Mbps, depending on the compression technologies used.
  • Interactive and Live Streaming: Live streaming platforms like Twitch and YouTube Live are increasingly popular. High-quality, interactive live streams, particularly those involving multiple camera angles or real-time audience interaction, require substantial bandwidth to maintain quality and responsiveness.

Contradiction: Chattanooga, TN, already offers 25 Gbps home internet, yet adoption of even 1 Gbps tiers remains low, highlighting the ongoing challenges in achieving widespread high-speed internet saturation.

Conclusion

As we stand on the brink of a new era in digital media, the concept of compression as we know it is poised to become a relic of the past. The relentless march of technological advancement in storage and bandwidth promises a future where lossless or uncompressed, high-fidelity media becomes the norm. Imagine a world where terabytes per second of data transfer speeds and petabytes of storage are commonplace, even on devices as ubiquitous as smartphones. Just twenty years ago, in 2004, typical consumer hard drives had capacities ranging from 40 GB to 160 GB—considered impressive at the time. This impending reality will usher in unprecedented levels of quality and immediacy in media consumption and production. The shift towards uncompressed workflows in post-production, driven by the need for maximal quality, coupled with the exponential growth in streaming capabilities through 5G, fiber optics, and beyond, sets the stage for a future where the limitations of today are no more. As these technologies mature, the cumbersome processes of compression and decompression will fade into history, making way for a seamless digital experience that reflects the true potential of human creativity and technological innovation.

References

  • (2024). AV1 Codec Overview.
  • (2024). The Future of Video Compression with VVC.
  • Streaming Media Magazine. (2023). LCEVC: Enhancing Video Compression Efficiency.
  • Streaming Media Magazine. (2023). Essential Video Coding (EVC): Balancing Performance and Cost.
  • Cisco Systems. (2021). Cisco Visual Networking Index: Forecast and Trends, 2018–2023.
  • International Telecommunication Union. (2020). The State of Broadband 2020: Tackling Digital Inequalities.
  • Seagate Technology. (2021). The Data Age 2025: The Digital World.
  • Future Storage Innovations: Holographic Storage and DNA Data Storage. (2030s).
  • Quantum Internet: The Next Frontier in Data Transmission. (2040s).
  • Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal.
  • Fraunhofer Institute. (1993). Development of the MP3 Audio Compression Format.
  • ITU-T. (2003). Recommendation H.264: Advanced Video Coding for Generic Audiovisual Services.
  • Alliance for Open Media. (2018). AV1 Video Codec Specification.
  • Google AI Blog. (2017). RAISR: Rapid and Accurate Image Super-Resolution.
  • Lucasfilm Ltd. (1984). Introduction of EditDroid.
  • RED Digital Cinema. (2007). REDCODE RAW Technical Specifications.
  • ARRI Group. (2010). ARRIRAW Technology Overview.
  • (2012). KineRAW: A New Era of Raw Video.
  • (2015). DJI RAW: High-Quality Aerial Footage.
  • Sony Corporation. (2016). X-OCN: Extended Original Camera Negative.
  • Canon Inc. (2017). Cinema RAW Light: Balancing Quality and File Size.
  • Apple Inc. (2018). ProRes RAW: Professional Video Production.
  • Blackmagic Design. (2018). Blackmagic RAW: The Next Generation Codec.
  • Z CAM. (2018). ZRAW: Flexibility in Post-Production.
  • Panasonic Corporation. (2019). V-RAW: High-Quality Video Capture.
  • On2 Technologies. (2000). VP3: The Early Days of Video Compression.
  • Google. (2010). Acquisition of VP8 and WebM Project.
