Categories
Technology

AI Virtual Actors: Revolutionizing Hollywood and Resurrecting Legends

Introduction

AI is reshaping the future of film and TV production in unprecedented ways. One of its most fascinating developments is the rise of AI-generated actors—digital creations that mimic the appearance, voice, and mannerisms of real people, living or deceased. These virtual actors are taking on more roles in Hollywood, not just augmenting human performers but, in some cases, replacing them entirely. With AI now powerful enough to resurrect long-dead celebrities like James Dean for new films, the technology raises important questions about creativity, ethics, and the future of acting in a digital world.

The Rise of AI Virtual Actors

AI virtual actors are digitally created entities that can perform in movies, television shows, and commercials. They are generated using advanced techniques like deep learning, CGI, and motion capture. While CGI characters have been part of Hollywood for decades, AI has taken these virtual actors to a whole new level. AI not only makes them more lifelike but also enables them to perform autonomously, using algorithms to learn and imitate human behavior, expressions, and voice patterns.

A major turning point came with James Dean’s digital resurrection. Nearly 70 years after his death, Dean is set to star in the upcoming sci-fi film Back to Eden, thanks to AI technology that uses old footage, audio, and photos to digitally clone the iconic actor. Dean’s AI-powered clone will interact with real actors on-screen, raising profound questions about what it means to perform in a world where the dead can “come back to life”.

This development echoes earlier breakthroughs in CGI. For instance, Carrie Fisher, Paul Walker, and Harold Ramis were all digitally resurrected for posthumous appearances in films like Star Wars: The Rise of Skywalker and Ghostbusters: Afterlife. But AI goes beyond merely pasting an old face onto a new body. The technology now allows for more seamless, believable performances where the virtual actor can speak, move, and respond in ways that blur the line between human and machine.

A Historical Timeline of Virtual and Digital Actors

The concept of digital or virtual actors has a long history. As technology has evolved, so too has the ambition to create lifelike performers. Here’s a look at how virtual actors have developed over time:

1930s: The First Virtual Performers – Mechanical Mannequins

While not digitally created, early forms of “virtual” performers date back to the 1930s with mechanical mannequins like Cynthia, a life-sized mannequin that became a celebrity in her own right. Cynthia was used in fashion and entertainment, becoming one of the earliest examples of non-human entities marketed as performers.

1950s: Animated Performers – Alvin and the Chipmunks

In 1958, Alvin and the Chipmunks entered pop culture, marketed as real performers despite being animated. Their music career and cartoon series became cultural phenomena, setting the stage for virtual characters to engage audiences as entertainers.

1980s: The Birth of Virtual Actors – Max Headroom

Max Headroom, introduced in 1985, was the first computer-generated TV personality. Though partially portrayed by a human actor, the character was a breakthrough in the integration of CGI and live-action, foreshadowing the future of virtual actors.

2001: The First Digital Lead – Final Fantasy: The Spirits Within

In 2001, the movie Final Fantasy: The Spirits Within became the first film to feature a fully CGI lead character, Dr. Aki Ross. This was a significant leap forward, demonstrating how digital characters could act as lifelike performers, paving the way for more sophisticated AI-driven actors in the future.

2010s: Digital Resurrection of Deceased Actors

The 2010s saw the return of deceased actors through digital means. Peter Cushing was digitally resurrected to reprise his role as Grand Moff Tarkin in Rogue One: A Star Wars Story. Additionally, Carrie Fisher and Paul Walker were also digitally recreated for final film appearances after their deaths, marking a new era of posthumous digital performances.

2020s: AI-Generated Actors

Today, AI-generated actors like the digital James Dean of Back to Eden are becoming increasingly common. These actors are no longer just CGI models controlled by human puppeteers but are powered by AI algorithms that allow them to perform autonomously, learning human behaviors and expressions.

How AI Virtual Actors Work

The creation of AI actors involves combining several advanced technologies. CGI is used to recreate the physical appearance of the actor, while AI algorithms control their speech, facial expressions, and movements. Motion capture data from real actors can also be used to give AI characters a lifelike performance. This technology allows AI actors to “learn” how to mimic real humans, down to the smallest gestures or intonations in their voice.

One notable example of this is the Star Wars franchise, where both Carrie Fisher and Peter Cushing were digitally brought back to life. AI enabled filmmakers to create realistic performances from actors who had passed away or were unavailable. The result was virtual actors that not only looked like their real-life counterparts but also moved and spoke as convincingly as any living performer.

The Benefits of AI Virtual Actors
1. Flexibility and Creative Control

For filmmakers, AI virtual actors offer several advantages. First, they provide greater flexibility. AI actors don’t have schedules, they don’t age, and they can be “cast” in roles long after the real actor has passed away. This allows for the return of beloved characters or the casting of actors who otherwise wouldn’t be available. AI actors also present no risks when performing dangerous stunts, reducing the need for human stunt doubles.

Additionally, AI offers unparalleled creative control. Directors can manipulate every aspect of the actor’s performance, ensuring consistency and precision. This is particularly valuable in big-budget productions where time and cost efficiency are crucial. With AI, filmmakers can have their digital actors perform tirelessly, take direction without question, and deliver perfect performances on command.

2. Cost and Time Efficiency

Using AI actors can also lower production costs. Traditional actors require salaries, travel expenses, and accommodations, and they need time off for rest. AI actors, however, do not have these demands. Once the digital model is created, the actor can be used repeatedly across different scenes or even films without additional costs. In an industry where budgets are often tight, this level of efficiency can be game-changing.

Ethical Implications of AI Actors
1. Creativity Versus Profit

The rise of AI in Hollywood has sparked debates about the balance between creativity and profitability. Actors’ unions, including the Screen Actors Guild, have raised concerns about the potential for AI to replace human actors, reducing job opportunities in an already competitive field. AI actors could monopolize certain roles, especially for voice-over or background characters, eliminating opportunities for real performers to showcase their talent.

Actors like Susan Sarandon have expressed concern about the creative limitations AI may impose. Sarandon warned of a future where AI could make her “say and do things I have no choice about”. This scenario could lead to actors losing control over their own image, with AI manipulating their likeness without their consent.

2. Resurrecting the Dead: Who Owns an Actor’s Image?

Another ethical dilemma arises with the digital resurrection of deceased actors. With AI capable of creating lifelike performances, actors who have long since passed away can now “star” in new films. But who owns the rights to their digital likeness? James Dean’s appearance in Back to Eden was only possible with permission from his estate. However, the broader question remains—what rights do actors, or their estates, have over their likeness once they’ve died?

There’s also the issue of creative integrity. Would James Dean have wanted to appear in a sci-fi film had he been alive? What if an actor’s AI likeness was used in a film or genre they would have never agreed to? These are questions that the film industry will need to address as AI continues to blur the lines between the living and the digital.

The Future of AI in Hollywood

AI is poised to play an even bigger role in the future of Hollywood, especially as the technology continues to evolve. We may soon see fully AI-generated actors starring in their own films, without any connection to a real-life counterpart. These actors could take on any role, in any genre, and even adapt their performance based on audience feedback or input from directors in real time.

Some experts predict that AI-generated actors could dominate the industry, especially in genres like science fiction or animation where CGI already plays a major role. However, there is still likely to be a demand for human actors, particularly in roles that require emotional depth and genuine human connection.

Conclusion

AI virtual actors are transforming Hollywood, offering unprecedented flexibility, creative control, and cost efficiency. While the resurrection of legends like James Dean and Carrie Fisher has captured public attention, it also raises serious ethical questions about ownership, consent, and the future of human performers in an industry increasingly dominated by technology. As AI continues to advance, it will undoubtedly shape the future of filmmaking, blurring the line between reality and the digital world. However, the challenge will be ensuring that creativity and human expression remain at the heart of storytelling in cinema.


AI Musicians: Virtual Voices, Resurrected Legends, and the Future of Music

Introduction

AI is fundamentally transforming the music industry, doing much more than helping musicians compose tracks or experiment with new sounds. AI is creating entire virtual musicians, some of whom never existed in the real world, and resurrecting long-deceased artists through sophisticated algorithms and deep learning techniques. This fascinating frontier raises questions about creativity, authenticity, and the future of music. How are fans embracing these virtual creations? And what does the rise of AI musicians mean for the future of the industry?

This article will explore the world of AI-generated musicians, the digital resurrection of legends, and the industry’s complex reaction to these technological advancements.

Virtual Musicians: AI Voices That Never Existed

In the world of AI-generated music, the boundary between human artistry and machine-made creation is becoming increasingly indistinct. Today, AI is capable of generating entire musical personas that are indistinguishable from those created by humans. AI-generated musicians can compose and perform songs, appear in virtual concerts, and even interact with fans, offering new experiences that stretch the limits of creativity.

One remarkable example is the AI-generated band Aisis, a virtual homage to the iconic Britpop group Oasis. Using sophisticated machine learning models trained on Liam Gallagher’s voice and style, Aisis released songs that captured the essence of the original band. Fans were amazed by how accurately AI was able to recreate the sound, prompting widespread curiosity about the future of AI in music. This experiment demonstrated the potential of AI not only to mimic but to evolve existing musical styles.

Similarly, the pseudonymous producer Ghostwriter used AI to generate convincing “collaborations” between artists like Drake, The Weeknd, and Bad Bunny. While these tracks stirred controversy, sparking legal and ethical debates, they also showcased the growing interest in AI-generated music that mimics well-known artists without their involvement.

The Virtual Idol Scene in Japan

Japan has long embraced the concept of virtual idols—computer-generated personas who perform in concerts, release albums, and interact with fans online. Leading the charge is Hatsune Miku, a digital pop star who performs at sold-out holographic concerts worldwide. Created by Crypton Future Media, Miku is one of Japan’s most beloved virtual influencers, with a loyal fan base that continues to grow. Virtual idols like Miku not only dominate the music scene in Japan but are increasingly popular across the globe.

Alongside Miku, other virtual stars like Kizuna AI and Liam Nikuro are reshaping what it means to be a musical artist. These digital idols have thriving social media profiles, produce hit songs, and collaborate with major brands—all without human intervention. Their influence is so significant that they are often seen as a new class of musicians, one that merges music, technology, and digital culture seamlessly.

Resurrecting Music Legends with AI

Perhaps the most controversial use of AI in music is the resurrection of deceased artists. AI has the potential to analyze recordings, performances, and even interviews of late musicians, recreating their voices and styles with stunning accuracy. This capability allows fans to hear “new” music from long-deceased legends, raising both excitement and ethical concerns.

In 2023, AI played a crucial role in the release of a new song by The Beatles, isolating John Lennon’s voice from an old demo tape and allowing it to be featured on a new track. This collaboration between AI and the remaining band members resulted in a pristine, posthumous performance from Lennon, creating both wonder and unease about the future of music.

Similarly, the estate of Steve Marriott, the late lead singer of Small Faces and Humble Pie, has discussed using AI to generate new recordings. By analyzing Marriott’s past performances and vocal style, AI could produce entirely new music that aligns with his original work. This kind of technological resurrection points toward a future where music legends could continue creating well after their deaths.

A Threat to Artistic Integrity?

While some see AI as a valuable creative tool, many musicians view it as a significant threat to the authenticity and integrity of music. In April 2024, more than 200 prominent artists, including Billie Eilish, Katy Perry, Smokey Robinson, and Nicki Minaj, signed an open letter urging AI developers to stop using their voices and likenesses without permission. The letter, organized by the Artist Rights Alliance (ARA), warned that AI is “sabotaging creativity” and undermining artists’ rights by allowing anyone to replicate their voices without consent.

These concerns highlight the broader issue of intellectual property in the age of AI. As AI systems become more sophisticated, the lines between human and machine-made music blur, raising fears that AI could replace human musicians, lead to job losses, and diminish the authenticity of artistic expression. Steve Grantley, drummer for Stiff Little Fingers, expressed concern that AI could dehumanize music entirely, envisioning a future where fans may not even know if their favorite songs were composed by humans or machines.

AI as a Creative Tool: Enhancement, Not Replacement

Despite these fears, many artists believe that AI has the potential to enhance creativity rather than replace it. Platforms like Amper Music and BandLab enable musicians to generate chord progressions, melodies, and beats quickly, providing inspiration and allowing artists to focus on more complex aspects of music-making.
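
To make the idea of machine-assisted composition concrete, here is a minimal sketch of how a tool might suggest chord progressions. This is not the Amper Music or BandLab API—those are commercial products with their own interfaces—just an illustrative first-order transition table with hand-written (assumed) weights, showing the general technique such tools build on.

```python
import random

# Common chord transitions in a major key. The table is a hypothetical,
# hand-written approximation, not trained on any real corpus.
TRANSITIONS = {
    "I":  ["IV", "V", "vi", "ii"],
    "ii": ["V", "IV"],
    "IV": ["V", "I", "ii"],
    "V":  ["I", "vi"],
    "vi": ["IV", "ii"],
}

def generate_progression(length=4, start="I", seed=None):
    """Walk the transition table to produce a chord progression."""
    rng = random.Random(seed)          # seed makes suggestions repeatable
    progression = [start]
    while len(progression) < length:
        progression.append(rng.choice(TRANSITIONS[progression[-1]]))
    return progression

print(generate_progression(8, seed=42))
```

A musician would treat output like this as raw material—a starting point to reharmonize, reorder, or discard—which is exactly the "enhancement, not replacement" role described above.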

Tina Fagnani, drummer for Frightwig, acknowledges that while AI offers new ideas and perspectives, it cannot replace the emotional and spiritual depth of human-generated music. For many, AI represents a powerful tool for experimentation and collaboration, but it lacks the “soul” that defines great music.

AI’s role as an assistant to musicians may ultimately be its most effective application. By automating tedious tasks like mixing, mastering, and generating ideas for new tracks, AI frees up artists to focus on the more nuanced, emotional aspects of music creation. This AI-human collaboration could push the boundaries of musical experimentation, resulting in sounds and styles that would have been impossible to achieve with human creativity alone.

New Generations Embrace AI Music

Interestingly, younger generations of fans are more likely to embrace AI-generated music. As digital culture becomes increasingly pervasive, AI musicians feel like a natural extension of online life. AI-generated songs and virtual artists have a growing presence on platforms like TikTok, where novel AI-human collaborations often go viral.

Virtual K-pop groups like Aespa have successfully combined real members with AI-generated avatars, appealing to fans who are as interested in the technology behind the performance as they are in the music itself. These groups showcase how the future of music could seamlessly blend human and virtual performers, creating immersive experiences that push the boundaries of live and recorded entertainment.

Virtual idols like Hatsune Miku and Kizuna AI are also gaining a foothold among international audiences. These idols perform in live concerts as holograms, release AI-generated music, and even engage with fans via social media. The appeal of these digital performers lies in their flawless, carefully curated personas, which are immune to scandals or personal issues that might affect human artists.

Ethical and Creative Implications of AI Music

Despite the excitement surrounding AI music, it raises major ethical questions. Who owns the rights to AI-generated music that imitates deceased artists? How should the royalties from these creations be distributed? More fundamentally, can AI ever truly replicate the emotional depth of human-generated music?

Music has always been deeply personal, reflecting the artist’s experience of love, loss, joy, and pain. While AI can mimic human voices with technical precision, it lacks the life experience that gives music its emotional power. For now, AI excels at recreating sounds and styles but struggles to match the emotional authenticity of human composers.

These questions will only grow more urgent as AI continues to evolve, with more estates considering the use of AI to resurrect deceased artists for new releases. Balancing technological innovation with the preservation of human creativity will be one of the defining challenges for the future of the music industry.

The Future of AI in Music: Collaboration or Competition?

The most likely future for AI in music may lie in collaboration rather than competition. AI offers immense potential for generating new sounds, experimenting with structures, and blending genres in ways humans may never have imagined. Musicians can use these AI-generated compositions as a foundation, adding their emotional depth, creativity, and personal touch to create something entirely unique.

However, the challenge will be to ensure that AI complements, rather than replaces, human artistry. The future of music will depend on how well artists, technologists, and policymakers can balance the creative possibilities of AI with the need to protect the authenticity and rights of human musicians.

Conclusion: Embracing AI, but Protecting Creativity

AI-generated musicians are a fascinating glimpse into the future of music, offering both exciting opportunities and significant challenges. From creating virtual artists like Aisis to resurrecting deceased musicians, AI is reshaping the way music is made, performed, and consumed. However, while younger generations may embrace these digital creations, the music industry must carefully navigate the ethical and creative implications of AI-generated music.

As AI technology continues to evolve, the line between human and machine-made music will blur. But at its core, music remains an emotional, personal experience that AI alone cannot replicate. The future of music lies in collaboration—where AI serves as a tool for innovation, and human musicians provide the heart and soul that makes music truly resonate.

Categories
Digital Asset Management

Understanding C2PA: Enhancing Digital Content Provenance and Authenticity

Overview of C2PA

The Coalition for Content Provenance and Authenticity (C2PA) is a groundbreaking initiative aimed at combating digital misinformation by providing a framework for verifying the authenticity and provenance of digital content. Formed by a consortium of major technology companies, media organizations, and industry stakeholders, C2PA’s mission is to develop open standards for content provenance and authenticity. These standards enable content creators, publishers, and consumers to trace the origins and modifications of digital media, ensuring its reliability and trustworthiness.

C2PA’s framework is designed to be globally adopted and integrated across various digital platforms and media types. By offering a standardized approach to content verification, C2PA aims to build a more transparent and trustworthy digital ecosystem.

Importance of Provenance and Authenticity

In today’s digital age, misinformation and manipulated media are pervasive challenges that undermine trust in digital content. The ability to verify the provenance and authenticity of media is crucial for combating these issues. Provenance refers to the history and origin of a digital asset, while authenticity ensures that the content has not been tampered with or altered in any unauthorized way.

C2PA addresses these challenges by providing a robust system for tracking and verifying the origins and modifications of digital content. This system allows consumers to make informed decisions about the media they consume, enhancing trust and accountability in digital communications. By establishing a reliable method for verifying content authenticity, C2PA helps to mitigate the spread of misinformation and fosters a healthier digital information environment.

CHESA’s Commitment to C2PA
Embracing C2PA Standards

CHESA, now officially a Contributing Member, fully embraces the tenets of C2PA and is poised to assist in implementing these standards in our clients’ workflows. By integrating C2PA’s framework, CHESA ensures that our clients can maintain the highest levels of content integrity and trust.

Customized Solutions for Clients

CHESA offers customized solutions that align with C2PA’s principles, helping clients incorporate content provenance and authenticity into their digital asset management systems. Our expertise ensures a seamless adoption process, enhancing the credibility and reliability of our clients’ digital content.

Technical Specifications of C2PA
Architecture and Design

The C2PA framework is built on a set of core components designed to ensure the secure and reliable verification of digital content. The architecture includes the following key elements:

  1. Provenance Model: Defines how provenance information is structured and stored, enabling the tracking of content history from creation to dissemination.
  2. Trust Model: Establishes the mechanisms for verifying the identity of content creators and publishers, ensuring that provenance information is reliable and trustworthy.
  3. Claim Model: Describes the types of claims that can be made about content (e.g., creation date, creator identity) and how these claims are managed and verified.
  4. Binding Techniques: Ensures that provenance information is cryptographically bound to the content, preventing unauthorized alterations and ensuring the integrity of the provenance data.

These components work together to provide a comprehensive solution for content provenance and authenticity, facilitating the adoption of C2PA standards across various digital media platforms.
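
One way to picture how these components fit together is as a single manifest record carried alongside the asset. The sketch below is a deliberately simplified stand-in—field and class names are assumptions chosen for readability, not the actual C2PA schema, which serializes manifests as JUMBF boxes with CBOR payloads.

```python
import hashlib
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Assertion:            # Claim model: one statement about the content
    label: str              # e.g. "c2pa.created" (illustrative label)
    data: dict

@dataclass
class Signature:            # Trust model: who signed, under which certificate
    signer: str
    certificate_id: str
    value: bytes

@dataclass
class Manifest:             # Provenance model: the full history record
    assertions: List[Assertion] = field(default_factory=list)
    content_hash: str = ""  # Binding: cryptographic link to the asset bytes
    signature: Optional[Signature] = None

# A manifest for a hypothetical asset: assertions describe it, the hash
# binds the record to these exact bytes, and the signature (unset here)
# would be added by the creator's signing credentials.
manifest = Manifest(
    assertions=[Assertion("c2pa.created", {"when": "2024-05-01"})],
    content_hash="sha256:" + hashlib.sha256(b"example asset bytes").hexdigest(),
)
```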

Establishing Trust

Central to the C2PA framework is the establishment of trust in digital content. The trust model involves the use of cryptographic signatures to verify the identity of content creators and the integrity of their contributions. When a piece of content is created or modified, a digital signature is generated using the creator’s unique cryptographic credentials. This signature is then included in the provenance data, providing a verifiable link between the content and its creator.

To ensure the credibility of these signatures, C2PA relies on Certification Authorities (CAs) that perform real-world due diligence to verify the identities of content creators. These CAs issue digital certificates that authenticate the identity of the creator, adding an additional layer of trust to the provenance data. This system enables consumers to confidently verify the authenticity of digital content and trust the information provided in the provenance data.

Claims and Assertions

Claims and assertions are fundamental concepts in the C2PA framework. A claim is a statement about a piece of content, such as its origin, creator, or the modifications it has undergone. These claims are cryptographically signed by the entity making the claim, ensuring their integrity and authenticity. Assertions are collections of claims bound to a specific piece of content, forming the provenance data.

The process of creating and managing claims involves several steps:

  1. Creation: Content creators generate claims about their content, such as metadata, creation date, and location.
  2. Signing: These claims are digitally signed using the creator’s cryptographic credentials, ensuring their authenticity.
  3. Binding: The signed claims are then bound to the content, forming a tamper-evident link between the content and its provenance data.
  4. Verification: Consumers and applications can verify the claims by checking the digital signatures and ensuring the provenance data has not been altered.

This structured approach to managing claims and assertions ensures that the provenance data remains reliable and verifiable throughout the content’s lifecycle.

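
The four steps above can be sketched in a few lines of code. Note the loud caveat: a real C2PA implementation signs with asymmetric keys backed by X.509 certificates; the HMAC shared secret below is a stand-in used purely to keep this illustration dependency-free.

```python
import hashlib
import hmac
import json

SECRET = b"creator-signing-key"   # stand-in for a real private key

def make_claim(content: bytes, metadata: dict) -> dict:
    # 1. Creation: the claim records metadata about the content.
    claim = {"metadata": metadata,
             # 3. Binding: a hash ties the claim to these exact bytes.
             "content_hash": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    # 2. Signing: the claim payload is signed by the creator.
    claim["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(content: bytes, claim: dict) -> bool:
    # 4. Verification: recompute both the signature and the content hash.
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and claim["content_hash"] == hashlib.sha256(content).hexdigest())

photo = b"raw image bytes"
claim = make_claim(photo, {"creator": "Jane Doe", "created": "2024-05-01"})
assert verify_claim(photo, claim)               # untouched content passes
assert not verify_claim(photo + b"!", claim)    # any alteration is detected
```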
Binding to Content

Binding provenance data to content is a critical aspect of the C2PA framework. This binding ensures that any changes to the content are detectable, preserving the integrity of the provenance data. There are two main types of bindings used in C2PA: hard bindings and soft bindings.

  1. Hard Bindings: These create a cryptographic link between the content and its provenance data, making any alterations to the content or data immediately detectable. Hard bindings are highly secure and are used for content where integrity is paramount.
  2. Soft Bindings: These are less stringent and allow for some modifications to the content without invalidating the provenance data. Soft bindings are useful for content that may undergo minor, non-substantive changes after its initial creation.

Both binding types play a crucial role in maintaining the integrity and reliability of provenance data, ensuring that consumers can trust the content they encounter.
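
The difference between the two binding styles can be shown with a toy example. The hard binding covers the exact bytes; the "soft" binding here fingerprints whitespace-normalized text as a crude stand-in for the robust or perceptual hashes real soft bindings use for images, audio, and video.

```python
import hashlib
import re

def hard_binding(content: bytes) -> str:
    # Hard binding: hash of the exact bytes; any change breaks the link.
    return hashlib.sha256(content).hexdigest()

def soft_binding(text: str) -> str:
    # Soft binding (toy version): hash of normalized content, so cosmetic
    # edits like casing or whitespace do not invalidate the provenance.
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode()).hexdigest()

original = "Breaking News:  storm hits the coast"
reflowed = "breaking news: storm hits the coast"   # cosmetic reformatting

# Hard binding: any byte-level change invalidates the link.
assert hard_binding(original.encode()) != hard_binding(reflowed.encode())
# Soft binding: non-substantive changes keep the provenance link intact.
assert soft_binding(original) == soft_binding(reflowed)
```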

Guiding Principles of C2PA
Privacy and Control

C2PA is designed with a strong emphasis on privacy and user control. The framework allows content creators and publishers to control what provenance data is included with their content, ensuring that sensitive information can be protected. Users have the option to include or redact certain assertions, providing flexibility in how provenance data is managed.

Key principles guiding privacy and control include:

  1. User Consent: Content creators must consent to the inclusion of their provenance data.
  2. Data Minimization: Only the necessary provenance data is included to maintain privacy.
  3. Redaction: Users can redact specific claims to protect sensitive information without invalidating the remaining provenance data.

These principles ensure that the C2PA framework respects user privacy while maintaining the integrity and reliability of the provenance data.
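
One plausible way to support redaction without invalidating the remaining provenance data is to hash each assertion individually and sign only the list of hashes, so a sensitive assertion can be dropped while everything else still verifies. The sketch below illustrates that Merkle-style idea; the structure and names are assumptions for this example, and HMAC again stands in for a real asymmetric signature.

```python
import hashlib
import hmac
import json

SECRET = b"creator-signing-key"   # stand-in for a real private key

def assertion_hash(assertion: dict) -> str:
    return hashlib.sha256(json.dumps(assertion, sort_keys=True).encode()).hexdigest()

def sign_assertions(assertions: list) -> dict:
    # The signature covers only the per-assertion hashes, not the bodies.
    hashes = [assertion_hash(a) for a in assertions]
    sig = hmac.new(SECRET, json.dumps(hashes).encode(), hashlib.sha256).hexdigest()
    return {"assertions": list(assertions), "hashes": hashes, "signature": sig}

def redact(bundle: dict, index: int) -> dict:
    # Drop one assertion body; its hash (and the signature) stay intact.
    redacted = dict(bundle)
    redacted["assertions"] = [a for i, a in enumerate(bundle["assertions"]) if i != index]
    return redacted

def verify(bundle: dict) -> bool:
    expected = hmac.new(SECRET, json.dumps(bundle["hashes"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, bundle["signature"]):
        return False
    # Every surviving assertion must still match one of the signed hashes.
    return all(assertion_hash(a) in bundle["hashes"] for a in bundle["assertions"])

bundle = sign_assertions([{"creator": "Jane Doe"}, {"gps": "38.9, -77.0"}])
private = redact(bundle, 1)                 # drop the sensitive GPS assertion
assert verify(bundle) and verify(private)   # both still verify
```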

Addressing Potential Misuse

To prevent misuse and abuse of the C2PA framework, a comprehensive harms, misuse, and abuse assessment has been integrated into the design process. This assessment identifies potential risks and provides strategies to mitigate them, ensuring the ethical use of C2PA technology.

Key aspects of this assessment include:

  1. Identification of Potential Harms: Analyzing how the framework might negatively impact users and stakeholders.
  2. Mitigation Strategies: Developing guidelines and best practices to prevent misuse and abuse.
  3. Ongoing Monitoring: Continuously assessing the impact of the framework and updating mitigation strategies as needed.

By addressing potential misuse proactively, C2PA aims to create a safe and ethical environment for digital content verification.

Security Considerations

Security is a paramount concern in the C2PA framework. The framework incorporates a range of security features to protect the integrity of provenance data and ensure the trustworthiness of digital content.

These features include:

  1. Provenance Model: Ensures that provenance information is securely stored and managed.
  2. Trust Model: Utilizes cryptographic signatures and certification authorities to verify identities.
  3. Claim Signatures: Cryptographically signs all claims to prevent tampering.
  4. Content Bindings: Uses hard and soft bindings to detect unauthorized changes.
  5. Validation: Provides mechanisms for consumers to verify the authenticity of provenance data.
  6. Protection of Personal Information: Ensures that personal data is handled in compliance with privacy regulations.

These security features work together to create a robust system for verifying the authenticity and provenance of digital content, protecting both content creators and consumers from potential threats.

Practical Applications of C2PA
Use in Journalism

One of the most significant applications of C2PA is in journalism, where the integrity and authenticity of content are paramount. By using C2PA-enabled devices and software, journalists can ensure that their work is verifiable and tamper-evident. This enhances the credibility of journalistic content and helps combat the spread of misinformation.

Real-world examples include photojournalists using C2PA-enabled cameras to capture images and videos that are then cryptographically signed. These assets can be edited and published while retaining their provenance data, allowing consumers to verify their authenticity. This process increases transparency and trust in journalistic work.

Consumer Benefits

C2PA provides numerous benefits for consumers by enabling them to verify the authenticity and provenance of the digital content they encounter. With C2PA-enabled applications, consumers can check the history of a piece of content, including its creator, modifications, and source. This empowers consumers to make informed decisions about the media they consume, reducing the risk of falling victim to misinformation.

Tools and applications developed for end-users can seamlessly integrate with C2PA standards, providing easy access to provenance data and verification features. This accessibility ensures that consumers can confidently trust the content they interact with daily.

Corporate and Legal Applications

Beyond journalism and consumer use, C2PA has significant applications in corporate and legal contexts. Corporations can use C2PA to protect their brand by ensuring that all published content is verifiable and tamper-evident. This is particularly important for marketing materials, official statements, and other critical communications.

In the legal realm, C2PA can enhance the evidentiary value of digital assets. For example, in cases where digital evidence is presented in court, the use of C2PA can help establish the authenticity and integrity of the evidence, making it more likely to be admissible. This application is vital for legal proceedings that rely heavily on digital media.

Application in Media and Entertainment
Enhancing Content Integrity

In the media and entertainment (M&E) industry, content integrity is crucial. C2PA’s standards ensure that digital media, including videos, images, and audio files, retain their authenticity and provenance data throughout their lifecycle. This is essential for maintaining audience trust and protecting intellectual property.

Streamlining Workflow

CHESA’s integration of C2PA into client workflows will help streamline the process of content creation, editing, and distribution. By automating provenance and authenticity checks, media companies can focus on creating high-quality content without worrying about the integrity of their digital assets.

Protecting Intellectual Property

For media companies, protecting intellectual property is a top priority. C2PA’s framework provides robust mechanisms for verifying content ownership and tracking modifications, ensuring that original creators receive proper credit and protection against unauthorized use.

Implementation and Adoption
Global Adoption Strategies

C2PA aims to achieve global, opt-in adoption by fostering a supportive ecosystem for content provenance and authenticity. This involves collaboration with various stakeholders, including technology companies, media organizations, and governments, to promote the benefits and importance of adopting C2PA standards.

Strategies to encourage global adoption include:

  1. Education and Outreach: Raising awareness about the importance of content provenance and authenticity through educational initiatives and outreach programs.
  2. Partnerships: Building partnerships with key industry players to drive the adoption and implementation of C2PA standards.
  3. Incentives: Offering incentives for early adopters and providing resources to facilitate the integration of C2PA into existing workflows.

By implementing these strategies, C2PA aims to create a robust and diverse ecosystem that supports the widespread use of content provenance and authenticity standards.

Implementation Guidance

To ensure consistent and effective implementation, C2PA provides comprehensive guidance for developers and implementers. This guidance includes best practices for integrating C2PA standards into digital platforms, ensuring that provenance data is securely managed and verified.

Key recommendations for implementation include:

  • Integration with Existing Systems: Leveraging existing technologies and platforms to integrate C2PA standards seamlessly.
  • User-Friendly Interfaces: Designing user-friendly interfaces that make it easy for content creators and consumers to interact with provenance data.
  • Compliance and Security: Ensuring compliance with relevant privacy and security regulations to protect personal information and maintain data integrity.

By following these recommendations, developers and implementers can create reliable and user-friendly applications that adhere to C2PA standards.

Future Developments

C2PA is committed to ongoing maintenance and updates to its framework to address emerging challenges and incorporate new technological advancements. Future developments will focus on enhancing the robustness and usability of the framework, expanding its applications, and fostering a diverse and inclusive ecosystem.

Key goals for future developments include:

  1. Continuous Improvement: Regularly updating the framework to address new security threats and technological advancements.
  2. Expanded Applications: Exploring new use cases and applications for C2PA standards in various industries.
  3. Community Engagement: Engaging with a broad range of stakeholders to ensure the framework meets the needs of diverse user groups.

By focusing on these goals, C2PA aims to maintain its relevance and effectiveness in promoting content provenance and authenticity in the digital age.

Conclusion

The Coalition for Content Provenance and Authenticity (C2PA) represents a significant step forward in the fight against digital misinformation and the promotion of trustworthy digital content. By providing a comprehensive framework for verifying the authenticity and provenance of digital media, C2PA enhances transparency and trust in digital communications.

Through its robust technical specifications, guiding principles, and practical applications, C2PA offers a reliable solution for content creators, publishers, and consumers. The framework’s emphasis on privacy, security, and ethical use ensures that it can be adopted globally, fostering a healthier digital information environment.

As C2PA continues to evolve and expand, its impact on the digital landscape will only grow, helping to build a more transparent, trustworthy, and informed digital world.

Categories
Digital Asset Management digital media storage Technology

Strategies for Effective Data and Content Management

Discover essential strategies for effective data and content management, including indexing, storage solutions, toolsets, and cost optimization from an experienced media manager and Senior Solutions Architect. 

Introduction 

Data and content management is a critical concern for organizations of all sizes. Implementing effective strategies can significantly optimize storage capacities, reduce costs, and ensure seamless access to valuable media. Drawing from my experience as a media manager and a Senior Solutions Architect, this article will explore best practices for data and content management, offering insights and practical solutions to enhance your organization’s efficiency. 

Itemizing Your Indexes 

The first step in data or media management involves identifying the locations of your content and the appropriate tools for indexing and management. A common approach is to manage a subset of media or content through an asset management system, which typically covers roughly 40% of your total data, whether structured or unstructured. To begin organizing your full data set, consider these questions:

  • What storage solutions are you using?
  • What are the capacities and the organizational structure of these storages (e.g., volumes, shares, and directories)? How are they utilized?
  • What are the costs associated with each storage per terabyte?
  • What tools are currently in place for managing the data?
  • How is content transferred and moved within your system?
  • What retention policies are in place, and are they automated?
  • What content is not managed by the Asset Management platform?

Answering these questions will set you on the right path toward effective management and cost optimization. Additionally, implementing measures like checksums during content indexing can help media managers quickly identify duplicate content in the storage, enhancing efficiency. 
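As a sketch of the checksum idea, the following walks a storage tree, hashes every file, and groups paths that share identical bytes. The function names and the choice of SHA-256 are illustrative; a production media index would persist these hashes rather than recompute them on every pass.

```python
import hashlib
import os
from collections import defaultdict

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-gigabyte media never loads into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(root: str) -> dict:
    """Map each checksum to the list of paths sharing identical content."""
    by_hash = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            by_hash[sha256_of(path)].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

Run against a storage volume, each entry in the result is a set of byte-identical copies, candidates for deduplication or cleanup.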

Saving Your Toolsets 

Media management toolsets can vary significantly in their interfaces, ranging from Command Line Interfaces (CLI) to more visual interfaces like Finder or Asset Management UIs. Each interface offers a unique way to interact with and manage media effectively. 

Most Media Asset Management (MAM), Production Asset Management (PAM), and Digital Asset Management (DAM) systems feature Web UIs that support saved searches. These saved searches enable consistent content management across different teams and facilitate the sharing of management strategies. Implementing routine searches—whether daily, weekly, or monthly—is considered best practice in media management. For instance, during my time at a news broadcasting company in NYC, we used the term “Kill Kill Kill” to tag content for rapid removal. This industry-specific term signaled to everyone in production that the content was no longer in use. Although the word “Kill” might appear in a news headline or tagging field, it was distinctive in this triple format, making it a straightforward target for search-based content removal. This method efficiently reclaimed production and editorial storage space. 

Searches could also be organized by creation dates or hold dates to manage content systematically. Content older than three months was typically archived or deleted, and anything past its “hold” date by a week was also removed. 

For content like auto-saves and auto-renders in editorial projects, specific searches through a “finder”-like application were vital. A well-organized storage system meant we knew exactly where to look for this content. Content that remained on physical storage but was no longer referenced in the MAM (“orphaned” content) could be identified by its modified date.
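A hypothetical sketch of that orphan check: given the set of file paths the MAM still references (here an in-memory set; a real system would export this list from the MAM database), flag anything on disk that is unreferenced and has not been modified recently.

```python
import os
import time

def find_orphans(storage_root: str, mam_paths: set, older_than_days: int = 30) -> list:
    """List files on disk that the MAM no longer references and whose
    modified date is older than the cutoff (candidates for review)."""
    cutoff = time.time() - older_than_days * 86400
    orphans = []
    for dirpath, _dirs, files in os.walk(storage_root):
        for name in files:
            path = os.path.join(dirpath, name)
            if path not in mam_paths and os.path.getmtime(path) < cutoff:
                orphans.append(path)
    return orphans
```

The age cutoff matters: a file missing from the MAM but modified yesterday may simply not be ingested yet, so only stale unreferenced content is flagged.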

Using a CLI for content management is generally more complex and unforgiving, often reserved for content that was not deleted using other methods. This process should be handled solely by an administrator with the appropriate storage credentials. Preparing a list of CLI commands beforehand can significantly streamline the use of this interface. 
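The same “prepare first, act second” discipline can be sketched outside the shell. This hypothetical Python helper mirrors a prepared pair of find commands (`-print` to review, then `-delete`): run it with `dry_run=True`, inspect the list, and only then rerun destructively. The path pattern and age threshold are illustrative.

```python
import os
import time

def cleanup(root: str, suffix: str, older_than_days: int, dry_run: bool = True) -> list:
    """Match files by suffix and age; delete them only when dry_run is False.
    Always review the dry-run output before the destructive pass."""
    cutoff = time.time() - older_than_days * 86400
    matched = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if name.endswith(suffix) and os.path.getmtime(path) < cutoff:
                matched.append(path)
                if not dry_run:
                    os.remove(path)
    return matched
```

As with a raw CLI session, this should only be run by an administrator with the appropriate storage credentials, and the dry-run list should be reviewed every time.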

Maximizing Storage Efficiency and Minimizing Costs 

Just as nearly everyone has a junk drawer at home, organizations typically have their equivalent where users casually store content and documents, often forgetting about them. This leads to the gradual accumulation of small files that consume significant storage capacity. 

Assigning Storage Volumes 

To address this, organizations can benefit from assigning storage volumes or shares for specific uses rather than allowing open access, which helps prevent wasted space. For example, ensuring that only editorial content resides on the “Editing Share” simplifies the identification and management of caching and temporary files. 

Implementing Storage Tiering Policies 

Implementing a storage tiering policy for data at rest can also optimize production costs. By relocating less active projects to nearline storage, space is freed up for active projects. Many organizations differentiate between high-cost, high-performance Tier 1 storage (production) and lower-cost tiers such as Tier 2 nearline and Tier 3 archive. Data that is not actively in use but should not yet be archived can remain costly if kept on Tier 1 storage due to its higher per-terabyte cost. For instance, if Tier 1 storage costs $30 per terabyte and Tier 2 costs $6 per terabyte, maintaining dormant data on Tier 1 is unnecessarily expensive: $24 more per terabyte. This cost differential becomes especially significant in cloud storage, where monthly fees can quickly accumulate. Choosing a cloud provider that offers free egress also helps keep costs controlled and predictable.
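The arithmetic is worth making concrete. Using the example rates above ($30/TB for Tier 1, $6/TB for Tier 2) and a hypothetical 100 TB of dormant data:

```python
def monthly_storage_cost(tb: float, rate_per_tb: float) -> float:
    """Simple linear cost model: capacity times per-terabyte monthly rate."""
    return tb * rate_per_tb

dormant_tb = 100  # hypothetical amount of dormant, not-yet-archived data
tier1 = monthly_storage_cost(dormant_tb, 30)  # Tier 1 at $30/TB
tier2 = monthly_storage_cost(dormant_tb, 6)   # Tier 2 at $6/TB
print(f"Tier 1: ${tier1:,.0f}/mo  Tier 2: ${tier2:,.0f}/mo  "
      f"Savings: ${tier1 - tier2:,.0f}/mo")
# 100 TB of dormant data: $3,000 vs. $600 per month, a $2,400 monthly saving
```

At this scale, re-tiering dormant data pays for itself almost immediately, which is why tiering policies are usually automated rather than left to ad hoc cleanup.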

Additionally, configuring alerts to notify when storage capacities are nearing their limits can help media managers prioritize their processes more effectively. These notifications also aid in reducing or eliminating overage fees charged by cloud providers when limits are exceeded. 
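A minimal capacity check along these lines can be scripted with Python's standard library. The 85% threshold and the idea of running it on a schedule (cron, then email or chat notification) are assumptions for the sketch, not features of any particular storage platform.

```python
import shutil
from typing import Optional

def capacity_alert(path: str, threshold_pct: float = 85.0) -> Optional[str]:
    """Return a warning string once the volume at `path` crosses the
    threshold, or None while usage is still below it."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    if used_pct >= threshold_pct:
        return f"{path}: {used_pct:.1f}% used (threshold {threshold_pct}%)"
    return None
```

Scheduled a few times a day across production volumes, a check like this gives media managers the early warning needed to re-tier content before overage fees apply.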

Refreshing the Evergreen 

“Evergreen content” refers to materials that are frequently used and never become obsolete, thus exempt from archiving. This includes assets like lower thirds, wipes, banners, intros, outros, and animations—items that are continually in demand. Such content benefits from being stored on nearline for swift access or on Tier 1 production storage, where it can be effectively managed with an optimized codec and bitrate to reduce its storage footprint while maintaining quality. The choice of codec is crucial here; graphic content that is originally rendered as lossless and uncompressed can be compressed before distribution to enhance efficiency and speed up access. 

Additionally, evergreen “beauty shots” such as videos of building exteriors or well-known landmarks should also be stored on nearline rather than archived. This placement allows for easy updating or replacement as soon as the content becomes dated, ensuring that it remains current and useful. Systems that support proxy editing should adopt a similar strategy, keeping non-essential or evergreen content on Tier 2 nearline storage so that it is housed in a cost-effective and accessible space.

Optimized Cloud Costs 

Cloud costs are a critical consideration in media management, especially with egress fees associated with restoring archived content, which can quickly accumulate if not carefully managed. Media managers can significantly reduce these costs with strategic planning. When content is anticipated to be frequently used by production teams, fully restoring a file is advisable. This will prevent multiple users from partially restoring similar content with mismatching timecodes. Additionally, carefully selecting a representative set of assets on a given topic and communicating this selection to production staff can streamline processes and reduce costs. 

For example, in the context of news, when a story about a well-known celebrity emerges, a media manager might choose to restore a complete set of widely recognized assets related to that celebrity. Providing a well-chosen, easily accessible set of assets on a specific topic prevents multiple users from restoring parts of the same content with different timecodes, and keeps production teams from unnecessarily restoring a large volume of content that ultimately goes unused.

Conclusion 

Each organization has unique production and data management needs. By strategically planning, defining, and organizing content lifecycles, they can streamline access to frequently used assets and minimize unnecessary expenses. Effective data and content management are essential for optimizing storage capacities, reducing costs, and ensuring unrestricted access to valuable media. Implementing diverse media management toolsets and defined retention policies facilitates organized archiving and retrieval, enhancing team collaboration and storage space optimization. By adopting these approaches and strategies, organizations can maintain a well-organized, cost-effective, and highly accessible data storage system that supports both current and future needs, ensuring seamless content management and operational efficiency. 

Categories
Technology Video

The Rise of Lossless Media: A Compression Tale

Introduction

Compression has been crucial in managing the storage and transmission of large media files. However, as technological advancements continue, the role of compression is evolving. This article delves into the history of media compression, differentiates its role in post-production and broadcast consumption, and explores the future of lossless media. We also discuss the evolution of bandwidth, streaming platforms, and wireless technologies driving this transformation. As we move towards a future where terabytes per second of data transfer speeds and petabytes of storage become commonplace, lossy compression may become a relic of the past, giving way to a new era of lossless, high-fidelity media.

Fun Fact: Claude Shannon, known as the father of information theory, developed the first theoretical model of data compression in 1948. His groundbreaking work laid the foundation for all modern data compression techniques.

The Genesis of Media Compression

Compression techniques were developed to address the limitations of early digital storage and transmission technologies, enabling the efficient handling of large media files.

  • Audio Compression: The MP3 format, introduced in the early 1990s, significantly reduced audio file sizes by removing inaudible frequencies, revolutionizing music distribution and storage.
  • Image Compression: JPEG compression, developed around the same time, reduced image file sizes by exploiting human visual limitations, impacting digital photography and web development.
  • Video Compression: Standards like MPEG-1, MPEG-2, and H.264 were created to reduce video data requirements while maintaining visual quality, facilitating efficient video streaming and storage.
  • Editing Formats Compression: Early editing systems like CineWave and Media 100 used their proprietary codecs to enable real-time video editing and playback, providing a foundation for the development of modern high-efficiency editing formats. Later, formats like Avid DNxHD were developed to balance high quality and manageable file sizes, allowing for smoother editing workflows by reducing the strain on storage and processing power. Following this, codecs such as Apple ProRes emerged, further enhancing editing efficiency while preserving much of the original quality. These advancements set the stage for the use of proxy workflows, where lower-resolution copies of high-resolution files are used during the editing process to improve performance and reduce system demands.

Honoring the Codec Pioneers

These early codecs and non-linear editing (NLE) systems, despite their limitations, were essential in the development of digital video technology. They enabled the first steps towards online video streaming, multimedia content distribution, and advanced video editing workflows. While many of these codecs and systems have since fallen out of use, they paved the way for the advanced compression technologies and editing capabilities we rely on today.

1970s

  • CMX 600 (1971): Developed by CMX Systems, the CMX 600 was one of the first computerized video editing systems. It used magnetic tape to store data and allowed for basic non-linear editing capabilities.

1980s

  • Ampex VideoFile (1982): One of the first commercial non-linear editing systems, VideoFile used digital storage for editing purposes, laying the groundwork for future NLE systems.
  • Lucasfilm EditDroid (1984): Developed by Lucasfilm, EditDroid used laserdiscs to store video footage, offering more flexibility than tape-based systems.
  • Cinepak (1989): One of the earliest video codecs, Cinepak was used extensively in the early days of digital video, particularly within Apple’s QuickTime and Microsoft’s Video for Windows platforms. It offered low compression efficiency but widespread compatibility.

1990s

  • Avid Media Composer (1989): One of the first widely adopted NLE systems, Avid Media Composer revolutionized video editing by allowing editors to manipulate digital video with great flexibility and precision.
  • Microsoft AVI Codecs (Early 1990s): The Audio Video Interleave (AVI) format supported a variety of codecs such as Intel Indeo, Cinepak, and Microsoft Video 1, enabling early digital video playback and editing.
  • QuickTime (1991): Apple’s multimedia framework included support for various codecs like Sorenson Video and Cinepak, becoming a popular format for video playback on both Mac and Windows platforms.
  • JPEG (1992): The JPEG standard for compressing still images reduced file sizes by exploiting human visual limitations, making it crucial for digital photography and web images.
  • MP3 (1993): The MPEG-1 Audio Layer III, or MP3, became the standard for audio compression, significantly reducing file sizes and revolutionizing music distribution.
  • Media 100 (1993): An early digital non-linear editing system, Media 100 used proprietary codecs to enable high-quality video editing and playback on standard desktop computers.
  • RealVideo (1997): Developed by RealNetworks, RealVideo was one of the first codecs designed specifically for streaming video over the internet. RealPlayer became popular for watching video clips online despite the relatively low quality compared to today’s standards.
  • DivX (1998): Initially based on a hacked Microsoft MPEG-4 Part 2 codec, DivX offered high-quality video at reduced file sizes, becoming popular for DVD-ripping and internet distribution.
  • Final Cut Pro (1999): Developed by Macromedia and later acquired by Apple, Final Cut Pro became a major player in the professional editing market, known for its user-friendly interface and powerful features.

2000s

  • VP3 (2000): Developed by On2 Technologies, VP3 was an early open-source video codec that evolved into VP6 and VP7, used in Adobe Flash video. VP3 laid the groundwork for the VP8 and VP9 codecs later used by Google.
  • Sorenson Video (Early 2000s): Used primarily in QuickTime files, Sorenson Video provided good quality at relatively low bitrates, facilitating early internet video streaming.
  • Xvid (2001): An open-source alternative to DivX, Xvid was based on the MPEG-4 Part 2 codec and gained popularity for its ability to compress video files without significant loss of quality.
  • H.264 (2003): Also known as AVC (Advanced Video Coding), H.264 became the standard for video compression, offering high-quality video at lower bitrates and being widely adopted for streaming, broadcasting, and Blu-ray discs.
  • Avid DNxHD (2004): Developed for high-definition video editing, DNxHD provided high quality and manageable file sizes, reducing the strain on storage and processing power.
  • Apple ProRes (2007): An intermediate codec developed by Apple, ProRes balanced high quality and low compression, becoming a standard in professional video production.

2010s

  • VP8 (2010): Developed by On2 Technologies and acquired by Google in 2010, VP8 was used in the WebM format for web video, offering a royalty-free alternative to H.264.
  • H.265/HEVC (2013): High Efficiency Video Coding (HEVC) provided improved compression efficiency over H.264, reducing bitrates by about 50% while maintaining the same quality. It was crucial for 4K video streaming and broadcasting.

Diverging Paths: Post-Production vs. Broadcast Consumption

The future of media compression can be divided into two distinct areas: post-production and broadcast consumption. Each has unique requirements and challenges as we move towards a world with less reliance on compression.

Post-Production: Towards Lossless Workflows

In the realm of post-production, the trend is unmistakably moving towards lossless and uncompressed media. This shift is driven by the pursuit of maintaining the highest possible quality throughout the editing process. Here’s why this evolution is taking place:

Quality Preservation: In post-production, maintaining the highest possible quality is paramount. Compression artifacts can interfere with editing, color grading, and special effects, ultimately compromising the final output. By working with uncompressed media, filmmakers and editors can ensure that the integrity of their footage is preserved from start to finish.

Storage Solutions: The rapid advancement in storage technology has made it feasible to handle vast amounts of lossless media. High-speed NVMe SSDs and large-capacity HDDs provide the necessary space and access speeds for handling these large files efficiently. Additionally, cloud storage solutions offer virtually unlimited space, further reducing the dependency on compression.

High-Resolution Content: The increasing demand for 4K, 8K, and even higher resolution content requires lossless files to preserve every detail and maintain dynamic range. As viewing standards continue to rise, the need for pristine, high-quality footage becomes even more critical.

Raw and Lossless Formats for Popular Cameras:
  • REDCODE RAW (2007): Used by RED cameras, REDCODE RAW offers high-quality, lossless or lightly compressed video suitable for post-production workflows, maintaining high dynamic range and color fidelity.
  • ARRIRAW (2010): The uncompressed, unencrypted format used by ARRI cameras, ARRIRAW provides maximum image quality and flexibility in post-production, capturing the full sensor data for precise color grading and effects work.
  • KineRAW (2012): Employed by Kinefinity cameras, KineRAW offers uncompressed or lightly compressed RAW video, ensuring high image quality and flexibility for color grading and visual effects.
  • DJI RAW (2015): Found in DJI’s professional aerial and handheld cameras, DJI RAW offers high-quality, uncompressed or lightly compressed video, capturing detailed image data for robust post-production workflows.
  • Sony X-OCN (eXtended Original Camera Negative) (2016): Used in Sony’s high-end cinema cameras, X-OCN offers high-quality, lightly compressed video, balancing file size and image quality for extended recording times and efficient post-production workflows.
  • Canon Cinema RAW Light (2017): A lightly compressed RAW format used in Canon’s cinema cameras, Cinema RAW Light balances quality and file size, capturing extensive image data for detailed post-production work.
  • Apple ProRes RAW (2018): Widely used in professional video production, Apple ProRes RAW combines high-quality video with efficient compression, compatible with various cameras and editing software. It allows for flexible adjustments in post-production.
  • Blackmagic RAW (BRAW) (2018): An efficient codec from Blackmagic Design, BRAW offers high-quality, lightly compressed video with flexible post-production options. It includes metadata for enhanced editing capabilities and preserves sensor data for high dynamic range.
  • ZRAW (2018): Used by Z CAM cameras, ZRAW is a lightly compressed RAW format that maintains high image quality and provides flexibility in post-production, allowing for extensive color correction and grading.
  • Panasonic V-RAW (2019): Utilized by Panasonic’s high-end cameras, V-RAW provides high-quality, lightly compressed footage, preserving the sensor’s dynamic range and color depth for detailed post-production adjustments.

These RAW and uncompressed formats are essential for professional video production, providing filmmakers with the flexibility and quality needed to achieve the best possible results in post-production. The move towards lossless workflows signifies a commitment to excellence and the pursuit of the highest visual standards in the industry.

Editing in RAW Format with NLEs

Modern NLE systems have advanced to support the editing of RAW formats, providing filmmakers and editors with unparalleled flexibility and control over their footage. NLEs such as Adobe Premiere Pro, Final Cut Pro, DaVinci Resolve, and Avid Media Composer are equipped to handle various RAW formats like REDCODE RAW, Apple ProRes RAW, ARRIRAW, Blackmagic RAW, and more. These systems enable real-time editing and color grading of RAW footage, allowing editors to leverage the full dynamic range and color depth captured by high-end cameras. By preserving the original sensor data, NLEs offer extensive post-production capabilities, including non-destructive adjustments to exposure, white balance, and other critical image parameters, ensuring the highest quality output for professional film and video projects.

Broadcast Consumption: The Push for Lossless Media

On the consumption side, the trend towards losslessly compressed media is gaining significant momentum, although the challenges here are different from those in post-production.

Bandwidth Expansion: The rollout of 5G and the expansion of fiber optic networks promise dramatically increased internet speeds. This advancement makes it feasible to stream high-quality, lossless media to end-users, reducing the need for traditional lossy compression techniques. With these higher speeds, consumers can enjoy pristine audio and video quality that was previously unattainable due to bandwidth limitations.

Streaming Platforms: Services like Apple Music, Amazon Music HD, and Tidal have been offering lossless audio streaming for some time, providing users with a higher quality listening experience. This trend is likely to extend to video streaming, with platforms like Netflix and Disney+ exploring ways to deliver losslessly compressed 4K and HDR content. As these services push the envelope, they will set new standards for media quality in the streaming industry.

Wireless Technologies: Advances in wireless technology, including Wi-Fi 6, Wi-Fi 7, and future iterations, will support higher data rates and more reliable connections. These improvements will facilitate the streaming of lossless media, making it more accessible to a broader audience. With these advancements, users can expect seamless streaming experiences with minimal buffering and superior quality, regardless of their location.

As the infrastructure for high-speed internet and advanced wireless technologies continues to grow, the consumption of losslessly compressed media will become more widespread. This shift not only enhances the user experience but also pushes the industry towards a new standard of quality, reflecting the full potential of modern digital media technologies.

Emerging Formats and Technologies

Several modern video codecs and technologies are emerging that offer significant improvements in compression efficiency and quality, with some poised to support lossless video capabilities. Additionally, advancements in storage and transmission technologies will facilitate the handling of large lossless media files.

Video Codecs

  • AV1 (AOMedia Video 1) – 2018: Developed by the Alliance for Open Media, AV1 is a royalty-free, open-source codec designed specifically for video streaming. It offers superior compression efficiency compared to older codecs like H.264 and H.265/HEVC. Major companies like Google, Netflix, and Amazon are backing AV1, and Apple’s endorsement, adding AV1 decoding support to the iPhone 15 Pro (2023), is expected to accelerate its adoption.
  • Versatile Video Coding (VVC or H.266) – 2020: VVC aims to provide significant improvements in compression efficiency over its predecessor, HEVC. It can reduce bitrates by about 50% compared to HEVC while maintaining the same quality, which is particularly beneficial for 4K and 8K video streaming. VVC is starting to be integrated into new hardware and smart TVs, with broader adoption expected as more devices gain support.
  • Low Complexity Enhancement Video Coding (LCEVC) – 2020: LCEVC is an enhancement codec that works in conjunction with existing codecs like AVC, HEVC, VP9, and AV1 to improve compression efficiency and reduce computational load. It is designed to be lightweight, allowing it to run on devices without dedicated hardware support, making it suitable for mobile and browser-based applications.
  • Essential Video Coding (EVC) – 2020: EVC was developed with a focus on providing both a baseline profile that is license-free and a main profile that offers higher efficiency with some associated licensing costs. It aims to balance performance and cost, making it a flexible option for various use cases.

AI and Compression: AI is increasingly being used to develop smarter compression algorithms. For example, Google’s AI compression system, RAISR, uses machine learning to enhance images after compression, reducing file sizes while maintaining quality.

Storage and Transmission Technologies

  • Holographic Storage – 2030s (Projected): Innovations in holographic storage will revolutionize how we store large amounts of uncompressed data by providing high-density storage solutions. This technology uses laser beams to store data in three dimensions, offering significantly higher storage capacities.
  • DNA Data Storage – 2030s (Projected): DNA data storage offers a futuristic approach to storing massive amounts of data in a very compact form, potentially transforming how we archive uncompressed media. By encoding data into synthetic DNA, this technology promises unparalleled density and durability.
  • Quantum Internet – 2040s (Projected): On the transmission side, the quantum internet promises unprecedented data transfer speeds, which could facilitate the rapid transmission of large, uncompressed media files. Quantum entanglement could enable instant data transfer over long distances, revolutionizing data communication.
  • 5G and Beyond – 2020s and Beyond: The rollout of 5G and future wireless technologies will support higher data rates and more reliable connections, enabling seamless streaming of high-quality, lossless media. Future generations like 6G are expected to further enhance these capabilities, making real-time, high-fidelity media streaming ubiquitous.

These emerging formats and technologies are set to transform the landscape of media production, storage, and consumption, driving us towards a future where uncompressed and lossless media become the norm.

The Bandwidth Paradox: Rising Demand

Just as Moore’s Law predicts the doubling of transistors on a chip every two years, Nielsen’s Law of Internet Bandwidth states that high-end user connection speeds grow by 50% per year. As bandwidth increases, so too does the demand for new technologies that consume it. This phenomenon is often referred to as the “bandwidth paradox.” Despite advancements that provide higher speeds and greater capacity, emerging technologies continually push the limits of available bandwidth.
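Nielsen’s 50%-per-year growth compounds quickly, which a small sketch makes concrete (the 1 Gbps starting point is an illustrative assumption):

```python
def nielsen_projection(current_mbps, years, annual_growth=0.5):
    """Project a high-end connection speed under Nielsen's Law (+50% per year)."""
    return current_mbps * (1 + annual_growth) ** years

# Starting from a 1 Gbps (1000 Mbps) high-end connection:
for years in (1, 5, 10):
    print(years, round(nielsen_projection(1000, years)), "Mbps")
# 1 year ~1500 Mbps, 5 years ~7594 Mbps, 10 years ~57665 Mbps
```

A decade of compounding turns 1 Gbps into roughly 58 Gbps, which is why emerging applications can reliably count on consuming whatever headroom appears.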

Virtual Reality (VR) and Augmented Reality (AR)

  • VR and AR Technologies: Virtual reality and augmented reality are at the forefront of the next generation of immersive experiences. These technologies require high-resolution, low-latency streaming to create convincing and responsive environments. For VR, a fully immersive experience typically requires video resolutions of at least 4K per eye and frame rates of 90 to 120 frames per second. AR, which overlays digital content onto the real world, also demands significant bandwidth for real-time processing and high-quality visuals.
  • Bandwidth Requirements: Current VR and AR applications already require substantial bandwidth, and as these technologies evolve, the demand will only increase. Advanced VR and AR setups may require 50-100 Mbps of sustained bandwidth to ensure smooth, lag-free experiences. This requirement can strain even the most robust networks, especially when multiple users are accessing the same services simultaneously.
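The figures above can be sanity-checked with simple arithmetic. A rough sketch, assuming 24-bit colour and ignoring container overhead, shows why aggressive compression is mandatory for VR:

```python
def raw_bitrate_gbps(width, height, fps, bits_per_pixel=24, eyes=2):
    """Uncompressed stereo video bitrate in Gbps (24-bit colour assumed)."""
    return width * height * fps * bits_per_pixel * eyes / 1e9

# 4K (3840x2160) per eye at 90 fps:
raw = raw_bitrate_gbps(3840, 2160, 90)
print(round(raw, 1))                 # ~35.8 Gbps uncompressed

# A ~500:1 compression ratio (an assumed figure) brings this into the
# quoted 50-100 Mbps delivery range:
print(round(raw * 1e9 / 500 / 1e6))  # ~72 Mbps
```

In other words, the uncompressed signal is hundreds of times larger than what networks can realistically carry, so codec efficiency directly determines whether immersive VR is streamable at all.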

Advanced Immersive Recording Devices

  • 360-Degree Cameras and Volumetric Capture: Modern recording devices like 360-degree cameras and volumetric capture systems create highly detailed and interactive content. These devices capture vast amounts of data to produce immersive videos and holograms, which can be used for everything from virtual tours to interactive educational content.
  • Data Intensity: The data generated by these devices is immense. For example, a single minute of 360-degree 4K video can consume several gigabytes of storage. When this content is streamed, it requires equally substantial bandwidth to ensure that the end-user experience is seamless and high quality.
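A quick back-of-the-envelope check of that claim, assuming typical encode bitrates (the 60 Mbps and 400 Mbps figures below are illustrative assumptions, not vendor specifications):

```python
def storage_gb_per_minute(bitrate_mbps):
    """Storage consumed per minute of video at a given bitrate, in GB."""
    return bitrate_mbps * 1e6 * 60 / 8 / 1e9

# A modest 360-degree 4K delivery encode:
print(round(storage_gb_per_minute(60), 2))   # 0.45 GB per minute
# A high-bitrate acquisition format:
print(round(storage_gb_per_minute(400), 1))  # 3.0 GB per minute
```

At acquisition bitrates, “several gigabytes per minute” is entirely plausible, and volumetric capture rigs with many cameras multiply these numbers further.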

Cloud Gaming and Interactive Streaming

  • Cloud Gaming Services: Services like NVIDIA GeForce Now, Microsoft’s Xbox Cloud Gaming (formerly Project xCloud), and the now-discontinued Google Stadia deliver high-quality gaming experiences over the internet. These services render games on powerful cloud servers and stream the video output to users’ devices.
  • Bandwidth Requirements: Cloud gaming requires low latency and high bandwidth to deliver responsive and immersive gameplay. For a 1080p stream at 60 frames per second, the required bandwidth can range from 15 to 25 Mbps. As 4K gaming becomes more prevalent, the bandwidth requirements can skyrocket to 35 Mbps or more.

The Growing Demand for High-Quality Streaming

  • 4K and 8K Streaming: As consumer demand for high-definition content grows, streaming services like Netflix, Amazon Prime Video, and Disney+ are shifting towards 4K and even 8K video resolutions. While 4K streaming requires approximately 25 Mbps, 8K streaming can demand upwards of 100 Mbps, depending on the compression technologies used.
  • Interactive and Live Streaming: Live streaming platforms like Twitch and YouTube Live are increasingly popular. High-quality, interactive live streams, particularly those involving multiple camera angles or real-time audience interaction, require substantial bandwidth to maintain quality and responsiveness.
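Taken together, these per-stream bitrates determine how many simultaneous streams a household connection can carry. A minimal sketch using the approximate figures quoted above:

```python
# Approximate per-stream bandwidth needs quoted in this article (Mbps).
STREAM_MBPS = {"1080p60 cloud gaming": 20, "4K streaming": 25, "8K streaming": 100}

def max_concurrent(connection_mbps, stream):
    """How many simultaneous streams of a given type a connection supports."""
    return connection_mbps // STREAM_MBPS[stream]

# On a 100 Mbps home connection (a common tier today):
print(max_concurrent(100, "4K streaming"))  # 4
print(max_concurrent(100, "8K streaming"))  # 1
```

A connection that comfortably serves a 4K household today is saturated by a single 8K stream, which illustrates the bandwidth paradox in miniature.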

A telling contradiction: Chattanooga, TN, already offers 25 Gbps home internet, yet adoption of even 1 Gbps plans remains low, highlighting that raw availability alone does not drive widespread high-speed internet saturation.

Conclusion

As we stand on the brink of a new era in digital media, the concept of compression as we know it is poised to become a relic of the past. The relentless march of technological advancement in storage and bandwidth promises a future where lossless or uncompressed, high-fidelity media becomes the norm. Imagine a world where terabytes per second of data transfer speeds and petabytes of storage are commonplace, even on devices as ubiquitous as smartphones. Just twenty years ago, in 2004, typical consumer hard drives had capacities ranging from 40 GB to 160 GB—considered impressive at the time.

This impending reality will usher in unprecedented levels of quality and immediacy in media consumption and production. The shift towards uncompressed workflows in post-production, driven by the need for maximal quality, coupled with the exponential growth in streaming capabilities through 5G, fiber optics, and beyond, sets the stage for a future where the limitations of today are no more. As these technologies mature, the cumbersome processes of compression and decompression will fade into history, making way for a seamless digital experience that reflects the true potential of human creativity and technological innovation.

References

  • (2024). AV1 Codec Overview.
  • (2024). The Future of Video Compression with VVC.
  • Streaming Media Magazine. (2023). LCEVC: Enhancing Video Compression Efficiency.
  • Streaming Media Magazine. (2023). Essential Video Coding (EVC): Balancing Performance and Cost.
  • Cisco Systems. (2021). Cisco Visual Networking Index: Forecast and Trends, 2018–2023.
  • International Telecommunication Union. (2020). The State of Broadband 2020: Tackling Digital Inequalities.
  • Seagate Technology. (2021). The Data Age 2025: The Digital World.
  • Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal.
  • Fraunhofer Institute. (1993). Development of the MP3 Audio Compression Format.
  • ITU-T. (2003). Recommendation H.264: Advanced Video Coding for Generic Audiovisual Services.
  • Alliance for Open Media. (2018). AV1 Video Codec Specification.
  • Google AI Blog. (2017). RAISR: Rapid and Accurate Image Super-Resolution.
  • Lucasfilm Ltd. (1984). Introduction of EditDroid.
  • RED Digital Cinema. (2007). REDCODE RAW Technical Specifications.
  • ARRI Group. (2010). ARRIRAW Technology Overview.
  • (2012). KineRAW: A New Era of Raw Video.
  • (2015). DJI RAW: High-Quality Aerial Footage.
  • Sony Corporation. (2016). X-OCN: Extended Original Camera Negative.
  • Canon Inc. (2017). Cinema RAW Light: Balancing Quality and File Size.
  • Apple Inc. (2018). ProRes RAW: Professional Video Production.
  • Blackmagic Design. (2018). Blackmagic RAW: The Next Generation Codec.
  • Z CAM. (2018). ZRAW: Flexibility in Post-Production.
  • Panasonic Corporation. (2019). V-RAW: High-Quality Video Capture.
  • On2 Technologies. (2000). VP3: The Early Days of Video Compression.
  • Google. (2010). Acquisition of VP8 and WebM Project.

Categories
Digital Asset Management Technology

Blockchain Storage Demystified: Transforming Media Production

Introduction

Blockchain technology is revolutionizing various industries, and media production is among the most promising beneficiaries. Blockchain storage, in particular, offers a novel approach to managing vast amounts of data securely and efficiently. This comprehensive guide explores how blockchain storage works, its benefits, challenges, and specific applications within the media and entertainment (M&E) industry. We will also look at current vendors, use cases, and future trends.

What is Blockchain Storage?

Blockchain storage refers to the use of blockchain technology to manage and store data across a decentralized network. Unlike traditional centralized storage systems where data is stored on a single server or a group of servers, blockchain storage distributes data across multiple nodes in a network. Each piece of data is encrypted, time-stamped, and linked to the previous and subsequent data entries, forming a secure chain.

How Does Blockchain Storage Work?
  1. Data Segmentation and Encryption:
    1. Data is divided into smaller segments.
    2. Each segment is encrypted for security.
  2. Distribution Across Nodes:
    1. Encrypted data segments are distributed across various nodes in the blockchain network.
    2. This ensures redundancy and availability even if some nodes fail.
  3. Consensus Mechanism:
    1. Nodes in the network use consensus mechanisms like Proof of Work (PoW) or Proof of Stake (PoS) to validate and agree on the data being stored.
    2. This process ensures that the data is accurate and tamper-proof.
  4. Immutable Ledger:
    1. Once data is validated, it is added to the blockchain, creating an immutable ledger.
    2. Any attempt to alter the data would require changing all subsequent blocks, making tampering virtually impossible.
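The four steps above can be sketched in a few lines. This is a deliberately simplified illustration — real systems encrypt each segment and run a consensus protocol across many nodes, both omitted here — showing only segmentation and the hash-linked, append-only ledger:

```python
import hashlib
import json

def segment(data: bytes, size: int):
    """Step 1: split data into fixed-size segments (encryption omitted)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def add_block(chain, payload: bytes):
    """Step 4: append an immutable ledger entry linked to the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev_hash, "payload_hash": hashlib.sha256(payload).hexdigest()}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = []
for seg in segment(b"raw camera footage ...", 8):
    add_block(chain, seg)

# Each block's "prev" field must equal the previous block's hash, so altering
# any payload invalidates every block that follows it.
print(len(chain), chain[1]["prev"] == chain[0]["hash"])
```

Because each block commits to the hash of its predecessor, rewriting one segment forces an attacker to recompute every subsequent block, which is the property that makes the ledger tamper-evident.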
Benefits of Blockchain Storage
  1. Enhanced Security:
    1. Data is encrypted and distributed, reducing the risk of hacks and data breaches.
    2. The decentralized nature makes it difficult for malicious actors to compromise the system.
  2. Transparency and Traceability:
    1. Every transaction and data entry is recorded on the blockchain, providing a transparent and traceable history.
    2. This is particularly useful for audit trails and regulatory compliance.
  3. Data Integrity and Immutability:
    1. Once data is added to the blockchain, it cannot be altered or deleted.
    2. This ensures the integrity and authenticity of the stored data.
  4. Decentralization:
    1. Eliminates the need for a central authority or intermediary.
    2. Users have more control over their data and how it is managed.
  5. Reduced Costs:
    1. By removing intermediaries and relying on peer-to-peer networks, blockchain storage can reduce costs associated with data management and storage.
Challenges and Limitations
  1. Scalability:
    1. Blockchain networks can face scalability issues as the size of the blockchain grows.
    2. Solutions like sharding and layer-2 protocols are being developed to address these challenges.
  2. Energy Consumption:
    1. Some consensus mechanisms, particularly Proof of Work, require significant computational power, leading to high energy consumption.
    2. More energy-efficient consensus mechanisms like Proof of Stake are being explored.
  3. Regulatory Uncertainty:
    1. The regulatory landscape for blockchain technology is still evolving.
    2. Organizations need to navigate varying regulations across different jurisdictions.
  4. Data Privacy:
    1. While blockchain ensures data integrity and security, privacy remains a concern.
    2. Solutions like zero-knowledge proofs and private blockchains are being developed to enhance data privacy.
Applications of Blockchain Storage in Media Production
  1. Enhanced Security and IP Protection:
    1. Blockchain storage can significantly improve the security of media assets, protecting intellectual property from piracy and unauthorized distribution.
    2. Smart contracts can automate and enforce licensing agreements, ensuring that creators are fairly compensated for their work.
  2. Improved Collaboration:
    1. Decentralized storage allows multiple stakeholders, such as producers, editors, and special effects teams, to access and work on the same files securely and efficiently.
    2. Blockchain can facilitate real-time collaboration across different geographical locations, streamlining the production process.
  3. Cost Efficiency:
    1. By reducing the need for intermediaries and enhancing data security, blockchain storage can lower operational costs in media production.
    2. Efficient data management and distribution can lead to cost savings in storage infrastructure and bandwidth usage.
  4. Transparency and Accountability:
    1. Blockchain’s transparent nature ensures a verifiable and traceable record of all data transactions and modifications.
    2. This accountability is crucial for compliance with industry regulations and maintaining the integrity of media content.
Case Studies
  1. Storj:
    1. Storj is a decentralized cloud storage platform that leverages blockchain technology.
    2. It allows users to rent out their unused storage space, creating a peer-to-peer network.
    3. Data is encrypted, segmented, and distributed across multiple nodes, ensuring security and redundancy.
  2.  Filecoin:
    1. Filecoin is a decentralized storage network that incentivizes users to provide storage space.
    2. Users can store and retrieve data in a secure and efficient manner.
    3. The network uses a combination of Proof of Replication and Proof of Space-Time to ensure data integrity and availability.
  3.  Siacoin:
    1. Siacoin offers decentralized cloud storage services.
    2. It uses smart contracts to manage storage agreements between users and hosts.
    3. Data is encrypted and distributed across multiple nodes, providing security and redundancy.
  4.  MovieCoin:
    1. MovieCoin is leveraging blockchain technology to transform film financing and distribution.
    2. By using blockchain for transparent and secure transactions, MovieCoin aims to streamline the production process and enhance revenue sharing among stakeholders.
  5.  Videocoin:
    1. Videocoin is a decentralized video encoding, storage, and distribution network.
    2. It utilizes blockchain technology to create a peer-to-peer network for media processing, reducing costs and improving efficiency.
Competing Technologies: What Are the Big Three Doing?

Traditional cloud storage solutions offered by industry giants like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure are significant competitors to blockchain storage. These services provide highly scalable and efficient storage without the complexities of blockchain technology.

However, the big three are not resting on their laurels. They are actively exploring and integrating advanced technologies to enhance their offerings:

  1. Hybrid Storage Solutions:
    1. AWS, Google Cloud, and Microsoft Azure are developing hybrid storage solutions that combine traditional cloud storage with blockchain elements. These hybrid solutions aim to leverage the best of both worlds— the scalability and efficiency of cloud storage with the security and transparency of blockchain.
  2. Distributed File Systems:
    1. Technologies like the InterPlanetary File System (IPFS) offer decentralized file storage that competes with blockchain by providing a peer-to-peer method of storing and sharing hypermedia in a distributed file system. While not blockchain-based, IPFS shares the decentralized ethos and provides an alternative to traditional cloud storage.
  3. New Data Storage Innovations:
    1. Continuous innovation in data storage technologies is another factor. For example, advances in quantum storage and next-generation data compression techniques are being researched and developed by the big three, offering potential future alternatives to both traditional and blockchain storage.

The Big Three’s Response to Blockchain Storage:

  • Amazon Web Services (AWS): AWS is exploring blockchain through its managed blockchain services, which allow users to set up and manage scalable blockchain networks using popular open-source frameworks. AWS also offers storage services that integrate with blockchain for enhanced security and transparency.
  • Google Cloud: Google Cloud is investing in blockchain through its blockchain-as-a-service (BaaS) offerings, partnering with leading blockchain companies to provide secure and scalable blockchain solutions. Google Cloud’s hybrid solutions enable integration with existing cloud services, enhancing data management capabilities.
  • Microsoft Azure: Microsoft formerly promoted its Azure Blockchain Service, which helped businesses build and manage blockchain networks before its retirement in 2021; Azure now directs customers toward partner-delivered, enterprise-grade blockchain solutions that integrate seamlessly with its cloud infrastructure, providing robust and scalable storage options.

In summary, while traditional cloud storage remains a strong competitor to blockchain storage, the big three—AWS, Google Cloud, and Microsoft Azure—are not only maintaining their current offerings but also innovating and integrating blockchain technologies into their services. This proactive approach ensures they stay competitive in the evolving landscape of data storage solutions.

Future Trends in Blockchain Storage for Media Production
  1. Advanced Cryptographic Techniques:
    1. Development of zero-knowledge proofs and homomorphic encryption to enhance data privacy without compromising security.
    2. These techniques can make blockchain storage more suitable for handling sensitive media content.
  2. Interoperability:
    1. Efforts to enhance interoperability between different blockchain networks and traditional storage systems.
    2. This will enable seamless data sharing and collaboration across various platforms and technologies.
  3. AI and Machine Learning Integration:
    1. Combining blockchain with AI and machine learning to automate and optimize data management processes.
    2. AI can help in efficient data segmentation, encryption, and distribution across the blockchain network.
  4. Regulatory Developments:
    1. As blockchain technology matures, regulatory frameworks will evolve to address the specific needs of blockchain storage.
    2. Clear regulations will provide guidance and certainty for media companies looking to adopt blockchain solutions.
Conclusion

Blockchain storage holds significant promise for managing the large data sets used in M&E. Its security, transparency, and immutability can revolutionize how media assets are stored and managed. While challenges like scalability and regulatory uncertainty need to be addressed, ongoing innovations and advancements are paving the way for a more robust and sustainable future for blockchain storage. As the technology evolves, it is poised to become an integral part of media production, enhancing security, efficiency, and collaboration.

Expanded FAQs
  1. Can blockchain storage handle petabytes of data for media production?
    1. While current blockchain networks face scalability challenges, innovative solutions like layer-2 protocols and sharding are being developed to handle large data sets efficiently. For instance, sharding can break down a blockchain into smaller, more manageable pieces, while layer-2 protocols can handle transactions off the main chain to reduce congestion and improve speed. These advancements suggest that blockchain storage could eventually handle petabytes of data effectively, though widespread adoption in media production is still on the horizon.
  2. How far away are we from seeing its use in production as the norm? Is it inevitable?
    1. The use of blockchain storage in media production as the norm is still a few years away. While pilot projects and small-scale implementations are underway, widespread adoption will depend on overcoming scalability, energy consumption, and regulatory challenges. However, the benefits of enhanced security, transparency, and cost efficiency make it likely that blockchain storage will become more prevalent in the future. As technology evolves and matures, it seems inevitable that blockchain will play a significant role in data storage solutions.
  3. What are the benefits of blockchain storage for media production?
    1. The benefits of blockchain storage for media production include enhanced security through encryption and decentralization, transparency and traceability of data transactions, data integrity and immutability, decentralization reducing reliance on central authorities, and cost efficiency by eliminating intermediaries. These advantages can significantly improve the management and protection of media assets, streamline production processes, and reduce operational costs.
  4. What challenges does blockchain storage face in handling large data sets?
    1. The main challenges include scalability, network congestion, storage efficiency, and regulatory uncertainty. Scalability is crucial as the blockchain network grows in size, potentially leading to slower transaction speeds and higher costs. Network congestion can further exacerbate these issues. Ensuring efficient storage and retrieval of large data sets is another technical hurdle. Additionally, navigating the evolving regulatory landscape and ensuring compliance with data protection laws are significant challenges.
  5. What is the future of blockchain storage in the M&E industry?
    1. The future of blockchain storage in the media and entertainment industry includes advanced cryptographic techniques for enhanced data privacy, improved interoperability between blockchain networks and traditional storage systems, integration with AI and machine learning for optimized data management, and evolving regulatory frameworks to provide clearer guidelines. These trends suggest a growing adoption of blockchain storage, driven by its potential to enhance security, efficiency, and collaboration in media production.
Categories
Technology

SDI – The Backbone of Broadcast

Welcome to Our “Future of Broadcast Infrastructure Technology” Series

Dive into the heart of innovation with us as we embark on a journey through the evolving world of broadcast infrastructure technology. This series is a window into the dynamic shifts shaping the industry’s future, whether you’re a seasoned professional or a curious enthusiast.

A Journey Through Time: The Evolution of Broadcast Technology

Imagine a world where the magic of broadcasting was a novel marvel — that’s where our story begins. Guglielmo Marconi’s pioneering radio transmission in 1895 set the stage for a revolution in communication. Fast forward from the fuzzy black-and-white imagery to today’s ultra-sharp high-definition videos. The milestones have been nothing short of extraordinary. Remember the times of meticulously cutting analog sync cables? Contrast that with today’s systems, which are nearing a self-timing brilliance. The leap from analog to digital has been a game-changer, enhancing the quality and reach of broadcast content. Now, as we edge closer to IP-based systems and other emerging tech, we’re witnessing the dawn of a new era. But where does this leave the trusty SDI?

Demystifying Serial Digital Interface (SDI)

For years, SDI has been the backbone of broadcast facilities around the globe. But let’s break it down: What is SDI, really? Birthed by the SMPTE 259M standard in 1989, SDI is the reliable workhorse for transmitting pristine digital video over coaxial cable with low latency, lossless quality, and rock-solid signal integrity. Evolving over the decades, SDI now supports 4K workflows thanks to SMPTE ST 2082 (12G-SDI), which carries 12 Gbps signals for 2160p resolution at 60 fps. Yet, the real question is whether SDI can keep pace with the industry’s insatiable appetite for growth and innovation.
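The 12 Gbps figure is easy to sanity-check: the active picture of 10-bit 4:2:2 UHD at 60 fps needs roughly 10 Gbps, with blanking and ancillary data accounting for much of the remaining link capacity. A rough sketch of the arithmetic (active picture only, ignoring blanking and line structure):

```python
def sdi_payload_gbps(width, height, fps, bit_depth=10, samples_per_pixel=2):
    """Active-picture bitrate for 4:2:2 YCbCr video, in Gbps.

    samples_per_pixel=2 reflects 4:2:2 subsampling: one luma sample plus
    one chroma sample (Cb or Cr, alternating) per pixel.
    """
    return width * height * fps * bit_depth * samples_per_pixel / 1e9

# UHD 2160p60, 10-bit 4:2:2 -- the payload a 12G-SDI (SMPTE ST 2082) link carries:
print(round(sdi_payload_gbps(3840, 2160, 60), 2))  # ~9.95 Gbps of active video
```

About 9.95 Gbps of picture on a nominally 12 Gbps link leaves just enough headroom for blanking, audio, and ancillary data, which is why 2160p60 is effectively the ceiling for a single 12G-SDI cable.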

SDI: The Past, Present, and Future in Broadcasting

SDI’s legacy of reliability and quality is undisputed. Its simplicity has made high-quality broadcasting an achievable standard. However, the relentless march of progress doesn’t play favorites, and SDI has little room to evolve beyond its current capabilities without significant technological breakthroughs. While transitioning to IP-based or cloud-based workflows becomes increasingly common, SDI’s relevance remains strong. But with scalability as its Achilles’ heel, SDI’s future is a hot topic of debate. Considering the economics of cabling, from coaxial to CAT6A to fiber, we’re at a crossroads where cost and technology intersect, guiding us to what’s next.

On the Horizon: What’s Coming Next

This conversation is just the beginning. In the next installments, we’ll delve into the promise of IP-based systems like ST 2110, the transformative role of NDI in live production, and the groundbreaking potential of technologies like 4K/8K, HDR, and cloud workflows.

We’ve only started peeling back the layers of the broadcasting world’s future. Join us as we navigate the technologies carving out the path forward, explore their implications for the industry, and consider what these changes could mean for you. Look out for our next installment in April, and engage with us. Your insights, inquiries, and perspectives are the pulse of this exploration.

Join the Dialogue

Your voice is integral to our series. Share your thoughts, spark a discussion, or simply ask questions. We’re here to delve into the future together. Follow our journey, contribute to the narrative, and let’s decode the complexities of broadcast infrastructure technology as one.

Categories
Uncategorized

Sports Broadcast Learnings from 2020 Inform the Future. And it’s Grand.

Monday Morning Quarterback Sports Video Survey – The Future Looks Bright 

They say hindsight is 2020. As I reflect on the past year and our world of sports, I won’t soon forget the cancellation of games, the job/family/community bubbles we created, and of course how we watched sports. As fans, we generally only think about the play on the fields or courts. We should also remember all of the men and women at the venues who tell the stories, with amazing graphics and videos, that get us excited. What happened to them during the pandemic?  

I had a lot of questions. How did they navigate these new waters we all found ourselves in? And what would a post-pandemic environment need to look like for them to thrive in their jobs? So, I called friends in the team and venue space to discuss what they learned from the challenges of working through the COVID shutdowns of 2020 and working in a bubble.

Sports Teams During Covid-19

Overall, what I learned from these discussions is that most teams were not ready for any kind of work-from-home scenario. A handful of teams had been working with a Media Asset Management (MAM) system, but not necessarily in the cloud, and certainly not as a remote solution. Others had nothing prepared for a work-from-home (WFH) scenario and relied completely on their IT departments. Unfortunately, most of these IT departments were also not equipped to sustain a WFH media production environment. Needless to say, most of us were caught off guard by the rapid changes that were needed to adapt to our new “normal”.

So, how did these teams continue to deliver? A lot of creative thinking, intelligent workarounds, and perhaps some unapproved, but socially distanced, hard drive exchanges. Many decided to use TeamViewer or remote desktop because they were already on those systems. Others went distinctly old school and used the public internet to exchange files through Dropbox, Google Drive, or OneDrive. Not efficient…but cheap. Hey, it was an emergency.

I spoke to a few teams who luckily had both centralized storage and a MAM system, which they turned into their own private “cloud” systems that allowed them to continue working effectively at home. They could access their systems through either a VPN or an RGS login that gave them full access to their entire catalog of assets, NLE, storage, and music. By utilizing their private cloud, they didn’t have to worry about ingress or egress fees from a public cloud partner.

Fan Engagement More Important Than Ever

Even though sports wasn’t happening, teams still had to keep their social media, marketing, and community engagement going. The fans, man! But with no one in the office, it was much more difficult for those without a MAM system in place to find the assets they needed to give that fan base the fix they craved. It was even worse for those who did not have a centralized storage system in place. Their only option was to Slack their co-workers with questions of “who has the blah, blah, blah shot?” I don’t wish that pain on anyone.

The overwhelming feedback I heard from the teams I spoke with was how essential their IT departments were to getting their work done. Something I’m sure IT has been trying to tell them for years. Whether it was accessing the corporate network remotely, setting up TeamViewer credentials, or simply troubleshooting connectivity issues, the media department and IT teams found themselves collaborating in new ways throughout 2020.

When I questioned how working from home affected the creative process, the overwhelming response was, “it sucked!” There was no more walking next door to ask someone to take a quick look at a timeline for instant feedback, or bouncing ideas off each other in a weekly creative session. The in-person camaraderie they had with their colleagues had been snatched away, and programs like Teams, Zoom, or Google Hangouts became the norm. More than a few found the process less than productive and missed the interaction of in-person meetings. Incredibly, this didn’t stop these creatives from turning out unbelievable pieces of content that would normally be shown in stadiums or arenas on massive video boards, but now had to be focused on their Facebook, Instagram, and Twitter accounts. Big shout out to team storytellers.

The Future…

So, what does a Post Pandemic workflow look like for some of our favorite sports teams moving forward? To be honest, the plan is different for each team as has always been the case. Some are planning to bring everyone from the team’s facilities back, in person, to work as soon as possible. Others are going to implement a more flexible environment where WFH still exists, allowing for more work-life balance.   

How were these teams impacted financially due to COVID? Budgets are always tough in sports media production and 2020 brought some new challenges to the table. I asked each team how they handled budgeting during the pandemic. Every single one said that their upper management was amazing and asked what they needed to help get the job done. Keep in mind, no one knew how long this would last so no team completely overhauled their environments. Many pushed off upgrades or planned projects due to state and local restrictions of allowing anyone into the facilities. Most expect to make do with current systems through the upcoming 2021 seasons in hopes that 2022 may bring new technology solutions. 

All agreed that not only have they learned a lot personally, but they also learned a lot professionally. 95% of the teams I spoke to are now looking at a longer-term solution for a better remote working environment, either through a managed service via their IT department or an outside vendor who can take on the day-to-day support of such systems, allowing them to focus on being creative and telling us the story. And we do love the story, don’t we?

None of us can wait for our favorite sport to start playing again. The folks who make sure us FANS enjoy our time at their venues cannot wait either. So the next time you go to a game and watch one of those funny videos or amazing player intros, remember the people behind the production. They figured out creative ways for us to stay engaged with our teams and our communities. They helped us forget, just for a few minutes, that we were all stuck at home during the darkest days of 2020, and allowed us to have a bit of good ole sports entertainment.

 

About the Author:

Doug Price is an award-winning video editor and sales professional. He has spent more than 25 years in the media industry with a focus on sports creative content and media departments. For the past 10 years, Doug has worked directly with sports teams, leagues, venues, and broadcasters across North America to help develop media production efficiency through technology solutions.

Categories
Uncategorized

Introducing the Women of CHESA

We are in a new year with new challenges and new aspirations. After closing out 2020, I think we are all in search of a bit of inspiration. I feel incredibly lucky that this inspiration came in the form of two incredible individuals who also happen to be my colleagues.

I’m proud to introduce Marina Blandino and Ashley Williams. Ashley and Marina are the co-founders of Women of CHESA – a community of women empowering women in the media and entertainment (M&E) industry.

Marina has been with CHESA for over 5 years and is our Director of Support Services and Customer Success. She’s also the first woman of color to hold a Director role at CHESA. Ashley has been with us nearly 2 years and is one of our amazing Project Managers.

What’s interesting is that they’ve only met in person briefly, and for the most part their jobs don’t intersect. So, how did this spark of creativity lead us to where we are today?

The Beginning:

As fellow women at CHESA (Chesapeake Systems), Ashley and Marina were seeking a way to connect with each other and with other women in the tech space, while in lock-down. This resulted in submitting requests to attend a virtual conference for women in technology. CHESA’s CEO, Jason Paquin, encouraged them to invite all female employees to attend, regardless of their role (sales, engineering, accounting, etc.). Nearly all said yes.

Ashley and Marina quickly realized they could do more. They realized they wanted to do more. And, their teammates wanted more.

With management’s full support, the Women of CHESA was born.

They started by launching monthly brunches with all their female coworkers. No work talk allowed – this was a time to just connect with each other. We started to share some of our personal likes and dislikes, our success stories and even some of the struggles we encounter. This was a way for all of us to come together and create those crucial supportive relationships that being remote was making more challenging.

CHESA hired two women during COVID. Women of CHESA was a way for them to quickly feel welcome and create allies without ever meeting anyone in person. The group was already bearing fruit.

This is a journey that we are going on together. And it’s so exciting because we are creating something so positive. A reason why I took my promotion is because I want to make sure I’m on that leadership team making those changes. I have a voice to say we need more women.

Marina Blandino, Co-Founder, Women of CHESA

Currently, all of Women of CHESA’s members are women, but the group has many male allies.

Big Picture Goals:

    • Target college campuses and help drive awareness of the various careers available in the M&E industry. This will ideally create opportunities to hire a more diverse work pool.
    • Partner with other industry organizations that are working towards similar goals. Eventually they want to launch their own mentoring program, but for now plan to support Rise’s North America mentoring program this year.
    • Create a paid internship program to bring more women into our industry and give them the chance to see what a career in M&E could look like.
    • Create a scholarship program to target women before entering university or deciding on a major.

2020 proved that we can be productive while remote. Ideally, we will still have face-to-face time, but for the moments in between, Women of CHESA will continue to find ways to connect with each other and maintain that crucial support system.


IT for the Creative Professional – Is NVMe Right for You Right Now?

At CHESA, we like to say that our passion for the bleeding edge of technology helps to keep our clients on the cutting edge. That motto fuels the primary purpose of our work – to help you use technology to your advantage. Sometimes that means you need the latest and greatest products in the marketplace. Just as often, you don’t.

A current trend that we’re starting to see become part of the architecture for possible shared storage solutions is NVMe (Non-Volatile Memory Express). NVMe is a new storage protocol designed to provide direct data transfer between central processing units and SSDs using a computer’s PCIe (Peripheral Component Interconnect Express) bus. It offers an alternative to the SATA (Serial Advanced Technology Attachment) and SAS (Serial Attached SCSI) protocols and was designed to address the bottlenecks inherent in these previous technologies, unlocking the full potential of solid-state media. Its benefits include higher input/output operations per second (IOPS), gobsmacking throughput, and greatly reduced latency. The specifications for these drives report them to be roughly 20 times faster than traditional spinning HDDs (Hard Disk Drives).

Naturally, people are talking about NVMe, especially after IBC 2019 where Quantum and Western Digital, to name just two, showed off their solutions. It’s the hot new thing. But do you really need it?

If your media creation environment needs extremely low latency and fast access to large amounts of data, then NVMe should certainly be considered. A couple of examples would be if you’re working in a collaborative environment with multiple streams of uncompressed or lightly compressed 8K or 4K video in use per workstation, or if you are orchestrating an event with a large number of concurrent ingest or playout feeds, such as an eSports competition.
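To put rough numbers on that, here is a back-of-the-envelope sketch of why even a few uncompressed streams per workstation demand serious storage bandwidth. The assumption of 10-bit 4:2:2 sampling (20 bits per pixel) is illustrative, not a claim about any particular facility:

```python
def uncompressed_stream_mb_s(width, height, fps, bits_per_pixel=20):
    """Approximate bandwidth of one uncompressed video stream in MB/s.

    bits_per_pixel=20 assumes 10-bit 4:2:2 chroma subsampling
    (10 bits luma + 2 x 5 bits chroma per pixel) -- an illustrative
    figure, not a statement about any specific codec or camera.
    """
    bytes_per_second = width * height * bits_per_pixel * fps / 8
    return bytes_per_second / 1e6  # decimal megabytes per second

# One DCI 4K stream at 24 fps works out to roughly 531 MB/s
per_stream = uncompressed_stream_mb_s(4096, 2160, 24)
print(f"4K DCI, 24 fps: {per_stream:.0f} MB/s per stream")

# Three streams on one editor's timeline: over 1.5 GB/s to that one seat
print(f"3 streams: {3 * per_stream / 1000:.2f} GB/s")
```

Multiply that per-seat figure by a room full of editors, or by dozens of concurrent ingest and playout feeds, and the case for NVMe's throughput and latency becomes clear.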

Additionally, VFX houses with large numbers of real-time or non-real-time renders might profit from every conceivable advantage to make shots available as fast as possible. Some non-video workflows also need extremely low latency and extremely fast performance. If I were trying to master global finances via high-frequency trading, build the perfect human via genomic research, or profit from understanding the human condition via real-time big data analytics, you bet I’d want to build a fire-breathing NVMe monster.

In our corner of the industry, unsurprisingly, we are seeing organizations dealing with large amounts of data as quickly as possible interested in NVMe, such as big media conglomerates ingesting a lot of high-resolution media and networks acquiring shows at the highest resolution so they can future-proof content. These organizations may still be delivering primarily in HD but they are archiving 4K files for a time in the future when viewers may expect higher resolution as a matter of course.

Quantum, whose F-Series NVMe storage arrays received two industry awards during the IBC 2019 show, may be considered at the forefront of NVMe. Their F-Series storage, designed specifically for high-end video workflows, uses 24 NVMe drives per chassis and provides users with 25 gigabytes per second of aggregate bandwidth to work from simultaneously.

But outside this super high-end usage, most of our customers really don’t need a shared storage NVMe solution yet.

Most production and post environments currently don’t require extremely low latency or high IOPS because video playback is about large streams of sequential data. They’re ingesting and working with video on large, centralized, shared storage volumes that use dozens if not hundreds of hard disk drives, which allows them to retain petabytes of information with great performance. Currently, that’s still the best bang for the buck and the best solution for most use cases, because compared to a petabyte of HDDs, a petabyte of NVMe is dramatically more expensive.
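A quick sketch shows why striped HDD arrays still hold their own for sequential video, even though each NVMe drive is individually far faster. The per-device throughput figures below are ballpark assumptions for illustration, not benchmarks of any particular product:

```python
# Illustrative per-device sequential throughput -- assumed figures,
# not vendor benchmarks.
HDD_SEQ_MB_S = 150    # a typical 7200 RPM enterprise HDD, sequential reads
NVME_SEQ_MB_S = 3000  # a typical PCIe 3.0 x4 NVMe SSD, sequential reads

def array_throughput_gb_s(drive_count, per_drive_mb_s):
    """Raw aggregate sequential throughput of a striped array in GB/s
    (ignores RAID parity, controller, and network overhead)."""
    return drive_count * per_drive_mb_s / 1000

# 100 HDDs striped together still move a lot of sequential video data,
# while also holding petabytes of it.
print(f"100 HDDs: {array_throughput_gb_s(100, HDD_SEQ_MB_S):.1f} GB/s raw")

# A 24-drive NVMe chassis has far more raw headroom per drive...
# but at a far higher cost per terabyte of capacity.
print(f"24 NVMe:  {array_throughput_gb_s(24, NVME_SEQ_MB_S):.1f} GB/s raw")
```

The point is that for large sequential streams, aggregating many inexpensive HDDs buys both capacity and bandwidth; NVMe’s decisive wins are in latency and random IOPS, which most playback-centric workflows don’t yet demand.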

That’s not to say that NVMe doesn’t have its place in your workflow today — it’s just a matter of the scale of adoption. People are moving towards solid-state technology wherever it is affordable to do so. For example, I wouldn’t buy a new workstation or laptop without one. On a smaller scale where NVMe’s reliability and performance truly shine, it can make all the difference in the world to a freelancer or editor working remotely from internal or direct-attached storage. In fact, desktop workstations and laptops equipped with NVMe storage outperform some of the SAN volumes we built for customers five years ago and certainly weigh hundreds of pounds less.

Additionally, a hybrid approach to shared storage — where some storage vendors provide a layer of NVMe cache on top of traditional HDDs — could be commonplace in the near future. This solution could provide the best of both worlds with the speed of NVMe drives and the capacity and price of HDDs, so long as the software controlling the data flows between the drives works transparently and seamlessly.

In conclusion, we believe that customers at the top end – like studios, networks, and VFX houses – will use large NVMe shared storage volumes first. Additionally, the single, independent editor can make NVMe the cornerstone of their business by buying a machine equipped with a 1-4TB NVMe drive or attaching this storage to their machine via Thunderbolt 3. This option is cost-effective and offers 10 times the speed and performance of previous configurations.

Mid-tier media entities – the majority of content producers and asset owners, currently working with compressed 4K or mezzanine HD video – might still find the greater capacity and adequate performance of HDD storage solutions a better fiscal choice than a smaller-capacity NVMe solution with weapons-grade performance. You don’t need a supercar if a nice SUV will get the job done.

NVMe is in their future, too, as the industry inevitably moves to 4K, 8K, HDR, and whatever other astounding and immersive new technologies are waiting in the wings, ready to blow our collective minds. Right now, the decision to explore NVMe really boils down to IOPS and latency.

CHESA is ready to help if you are considering NVMe. As your creative IT resource, we know time is money. Let us help you find the right solution for your business.