
The 357 Model: A Strategic Framework for Technology Management

No technology plan or model is bulletproof (and yes, pun intended), but embracing a 3-5-7 model for technology analysis, expansion, refresh, and retirement helps organizations stay at the cutting edge of innovation while keeping their systems fully supported. This model isn’t a universal fix for every technology lifecycle, but it proves quite effective for hardware, software, and infrastructure when applied to each independently.

Understanding the Technology Flywheel Concept

A technology flywheel is a metaphor for a self-reinforcing cycle that gains momentum and efficiency as it grows—imagine a heavy wheel that becomes easier to spin the faster it goes. In the world of technology and business, it’s akin to a process where advancements in one area lead to increased performance, reduced costs, or enhanced capabilities, thereby unlocking new avenues for further innovation. This creates a virtuous circle, where each success builds upon the last, spiraling up to drive exponential growth and a competitive edge. Having demystified the flywheel concept, let’s connect it to our proposed model for media supply chains and technology lifecycles.

Detailed Breakdown of the 3-5-7 Model:

  • Year 1: Specify, purchase, and deploy
  • Year 2: Finalize implementation, system “burn-in,” and data collection
  • Year 3: Analyze the technology landscape and kick-start the budget for Year 5
  • Year 4: Re-strategize and roadmap
  • Year 5: Execute comprehensive system upgrades, expand products, or refresh systems using the planned budget
  • Year 6: Finalize legacy data migration and second system “burn-in”
  • Year 7: Retire technologies that have been replaced or reached EOSL (End of Service Life)
  • Year 8: Restart the flywheel, returning to the Year 3 analysis of the Year 5 changes
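The repeating cadence above can be sketched in a few lines of code. This is a purely illustrative helper (not part of any product): the phase names are condensed from the list, and the looping rule comes from Year 8 returning to the Year 3 analysis.

```python
# Hypothetical sketch of the 3-5-7 lifecycle cadence described above.
# Phase names are condensed from the list; the function is illustrative only.

PHASES = {
    1: "Specify, purchase, and deploy",
    2: "Finalize implementation, burn-in, and data collection",
    3: "Analyze landscape and kick-start the Year 5 budget",
    4: "Re-strategize and roadmap",
    5: "Execute upgrades, expansions, or refreshes",
    6: "Finalize legacy migration and second burn-in",
    7: "Retire replaced or EOSL technologies",
}

def lifecycle_phase(years_since_purchase: int) -> str:
    """Return the 3-5-7 phase for a given year in the cycle.

    After Year 7 the flywheel loops back to the Year 3 analysis,
    so Year 8 maps to phase 3, Year 9 to phase 4, and so on.
    """
    if years_since_purchase < 1:
        raise ValueError("cycle starts at Year 1")
    if years_since_purchase <= 7:
        return PHASES[years_since_purchase]
    # Years 8+ repeat the five-year analysis-to-retirement loop (3..7).
    return PHASES[3 + (years_since_purchase - 8) % 5]

print(lifecycle_phase(8))  # same phase as Year 3
```

Encoding the cadence this way makes the flywheel explicit: after the first seven-year pass, only the analysis-through-retirement loop repeats.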

Application of the 3-5-7 Model in Video Production Technology

Focusing on video production technology, let’s see how software fits into this 3-5-7 framework. Two years post-purchase (note: not implementation), it’s crucial to concentrate on minor version updates, feature enhancements, industry advancements, and how well the system integrates with existing platforms while assessing its alignment with your organization’s specific needs. This stage is ideal for a detailed cost-benefit analysis to determine the anticipated return on investment, setting the stage for decisions about immediate purchases versus what can wait until Year 5. Whether it’s adopting a new release, updating to a major version, or switching vendors for a better fit, the analysis conducted in Year 3 lays the groundwork. Year 5 restarts the purchasing and commissioning cycle, and Year 7 closes the chapter with a thorough legacy migration and decommissioning.

Hardware’s lifecycle, though distinct from software, also aligns well with the 3-5-7 framework. Inspired by Moore’s Law—which observes that the number of transistors on an integrated circuit roughly doubles every two years, yielding significantly enhanced computing capabilities—this model is particularly apt. For example, the performance evolution of workstations and laptops, closely tied to processor speeds, reflects this trend and impacts their compatibility with operating systems and software. IT departments typically initiate hardware upgrades in the third year and aim to retire them by the fifth year, with a final act of securely erasing or destroying the hardware by the seventh year. Server replacements, though more gradual, follow this rhythm as well, with the third year reserved for planning and the fifth for upgrades, ensuring a robust, supported, and secure technology infrastructure. By the seventh year, clients are usually notified of the product’s end of sale or service, often with a six-month heads-up.

Storage systems, which utilize processors within their controllers, similarly adhere to Moore’s Law. The third year is an opportune time to assess storage performance and utilization, deciding whether additional capacity is needed or if integrating more cost-effective nearline storage for inactive data is advisable. This assessment is vital for budgeting enhancements in the fifth year, with many storage controllers needing upgrades by the seventh year due to EOSL.

Avoiding Pitfalls: The Risk of Bargain Bin Purchases

While cost optimization is generally beneficial, “Bargain Bin” shopping can disrupt the Flywheel’s momentum, as manufacturers often offer significant discounts for technology nearing EOSL. To achieve the best return on investment, value-engineered solutions should leverage the 3-5-7 model. A frequent pitfall for smaller organizations is acquiring technology close to EOSL, forcing them to rely on platforms like eBay for spare parts or face unexpected full product replacements.

Integrating New Technologies: Ensuring Maturity and Compatibility

The allure of “new technology” every three years is strong, but its integration and API maturity must be assessed to avoid costly, continuous upgrades that disrupt the flywheel. The increasing interdependence of different technological systems (e.g., IoT devices, cloud computing, AI-driven analytics) means that changes in one area can necessitate faster adaptations elsewhere, potentially requiring more frequent review intervals.

Challenges and Opportunities with Cloud Technology Under the 3-5-7 Model

The application of the 3-5-7 model to cloud technology mirrors its use in software lifecycle management. Often, cloud solutions project ROI beyond the five-year mark, meaning initial migration costs may not yield immediate returns. By the fifth year, hardware upgrades fall to the cloud provider, usually without disrupting the end-user. This shifts the end-user group’s focus from infrastructure analysis to evaluating how their Cloud provider or MSP addresses their current and future needs.

Cloud storage, while following the 3-5-7 model, presents unique challenges with its ongoing costs. Unlike Linear Tape-Open (LTO) storage, which incurs no additional expenses after archiving, cloud storage continues to rack up charges even for dormant data. This has led many organizations to reevaluate their data retention strategies, aiming to keep less data over time. By evaluating data relevance every three years, organizations can optimize costs more effectively. For instance, general “Dated” b-roll footage might be deleted after five years, reflecting its reduced utility, while only content deemed “Historic” after seven years is reserved for long-term use.
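The LTO-versus-cloud cost dynamic above can be made concrete with a toy model. All prices below are hypothetical placeholders, not vendor quotes: LTO is modeled as a one-time archive cost, cloud as a recurring per-TB monthly fee that keeps accruing for dormant data.

```python
# Illustrative cost comparison for dormant archive data.
# Prices are made-up placeholders for the sketch, not real vendor rates.

def lto_cost(tb: float, one_time_per_tb: float = 10.0) -> float:
    """One-time archive cost; no recurring charges after the data is written."""
    return tb * one_time_per_tb

def cloud_cost(tb: float, months: int, per_tb_month: float = 4.0) -> float:
    """Recurring cost that accrues every month the data sits idle."""
    return tb * months * per_tb_month

tb = 100
for years in (3, 5, 7):  # the 3-5-7 review points
    print(f"Year {years}: LTO ${lto_cost(tb):,.0f} vs cloud ${cloud_cost(tb, years * 12):,.0f}")
```

Even with placeholder numbers, the shape of the curve is the point: cloud spend on dormant data grows linearly with time, which is why the three-year relevance review and age-based deletion tiers matter.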

Conclusion: A Foundation for Future-Proof Technology Investments

While the 3-5-7 model isn’t a magic bullet, it establishes a solid foundation for maintaining a technology flywheel, ensuring investments continue to meet evolving needs while sustaining a competitive edge. The model provides a structured approach to technology lifecycle management. Tweaks and adjustments will occur depending on organizational initiatives, such as sustainability, industry trends and evolutions, or economic and market dynamics. Organizations might increasingly look to customize this model to fit their particular circumstances, ensuring that their technology investments are both strategic and sustainable.


Embracing the Future of Broadcasting: What comes after SDI?


The prominent buzzword at the 2024 NAB Show was Artificial Intelligence (AI). Still, if you look beyond the vast AI offerings, you will notice that the broadcasting industry is witnessing a significant transformation in infrastructure. The industry is moving from traditional infrastructure models to more flexible, IP-based solutions. This results in leaner and easily scalable systems that are ready to bridge the gap between true software-based solutions and newly imagined workflows. The SMPTE ST 2110 family of standards and Network Device Interface (NDI) technology are at the forefront of this revolution. These IP-based transport solutions redefine how content is created and delivered and shape the future of production. These changes involve adopting and merging long-standing IT-based technologies with new media technologies and workflows. For those familiar with the concepts of SMPTE ST 2110 and NDI but new to their practical application, here’s a look at implementing these technologies effectively.

Understanding SMPTE ST 2110 in Practice

The SMPTE ST 2110 family of standards offers a robust IP-based broadcasting framework, separating video (uncompressed or compressed), audio, and metadata into different essence streams. This separation is crucial for enhancing the flexibility and scalability of broadcast operations. It’s important to remember that ST 2110 is a media data-plane transport protocol based on RTP (Real-Time Transport Protocol) for sending media over a network. The network, typically called a media fabric, is the infrastructure, but it’s not uncommon to refer to the combined protocol and media fabric as ST 2110.

Key Considerations for Implementation:

  • Infrastructure Needs: Transitioning to ST 2110 requires a network infrastructure or media fabric capable of handling high bandwidth flows with low latency for high-quality video and audio transmission. Implementing a robust IP network with sufficient switches and routers designed for media-centric transmission is essential. Most media fabric designs will utilize fiber optic cabling due to the higher bandwidth requirements. A fabric can utilize single-mode or multimode, but it’s becoming more mainstream to prioritize single-mode fiber.
  • Timing and Synchronization: Unlike the baseband world, where timing is inherent, IP systems require precise synchronization. Implementing Precision Time Protocol (PTP) as per SMPTE ST 2059 standards ensures that all devices in the network are synchronized, which is critical for maintaining audio and video alignment. Most broadcast and production facilities use a GPS signal from roof-based antennas feeding a reference signal generator. That generator is then connected to the media fabric to allow the distribution of PTP.
  • Multicast Management: A cornerstone of effective SMPTE ST 2110 deployments, enabling broadcasters to utilize network resources efficiently while ensuring the high quality and timely delivery of audio and video streams. Unlike unicast, which requires individual streams for each endpoint, multicast allows multiple endpoints to receive the same stream simultaneously, dramatically reducing the bandwidth requirements for distributing the same content to multiple locations.
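To see why the media fabric needs such high bandwidth, it helps to estimate the raw data rate of a single uncompressed ST 2110-20 flow. The sketch below covers active picture only; RTP/IP packet overhead adds a few percent on top, and the parameters are illustrative examples.

```python
# Back-of-the-envelope bandwidth estimate for an uncompressed ST 2110-20
# video flow (active picture only; packet overhead is not included).

def st2110_20_gbps(width: int, height: int, fps: float,
                   bits_per_pixel: int = 20) -> float:
    """Approximate flow rate in Gbps.

    bits_per_pixel=20 corresponds to 10-bit 4:2:2 sampling
    (common for uncompressed production video).
    """
    return width * height * fps * bits_per_pixel / 1e9

print(round(st2110_20_gbps(1920, 1080, 60), 2))  # ~2.49 Gbps per HD flow
print(round(st2110_20_gbps(3840, 2160, 60), 2))  # ~9.95 Gbps per UHD flow
```

A handful of UHD flows already saturates a 10 GbE port, which is why single-mode fiber, high-capacity switching, and multicast distribution are central to fabric design.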

Integrating Network Device Interface (NDI) into Live Productions

NDI complements IP workflows by providing a versatile and low-latency compressed method for video transmission over IP networks. It is particularly beneficial in live production environments where speed and flexibility are paramount. NDI is software-centric and relies on video compression to move media across existing or lower-bandwidth network fabrics efficiently, compared to ST 2110-20, which requires a dedicated high-bandwidth network for uncompressed video.

Practical Steps for NDI Integration:

  • Network Configuration: Ensure your network can handle NDI’s bandwidth requirements. NDI can run over existing 1 Gigabit networks, but 10 Gigabit infrastructure is recommended for handling multiple high-quality streams without compromise.
  • Software and Hardware Compatibility: Check your existing production software and hardware compatibility with NDI. Many modern manufacturers support NDI natively; however, interface devices like converters and gateways can bridge gaps with non-NDI-compatible hardware.
  • Workflow Optimization: Use NDI’s capabilities to streamline your workflow. For example, with a free software download, NDI tools can monitor and record feeds directly from the network without specialized hardware. NDI’s software-focused approach makes workflow optimization simple and allows for a wide variety of tools from third parties. This setup can significantly reduce the complexity and cost of live productions such as corporate town halls, religious gatherings, and sporting events.
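A rough capacity check makes the 1 GbE versus 10 GbE recommendation above concrete. The ~125 Mb/s figure for a full-bandwidth 1080p60 NDI stream and the 70% utilization headroom are assumptions for illustration only; real rates vary with content and NDI mode.

```python
# Rough capacity-planning sketch for NDI streams on a shared link.
# The 125 Mb/s per-stream figure and 70% headroom are assumed values.

def max_ndi_streams(link_mbps: float, per_stream_mbps: float = 125.0,
                    utilization: float = 0.7) -> int:
    """How many streams fit while leaving headroom for other traffic."""
    return int(link_mbps * utilization // per_stream_mbps)

print(max_ndi_streams(1_000))   # streams on 1 GbE
print(max_ndi_streams(10_000))  # streams on 10 GbE
```

Under these assumptions a 1 GbE link carries only a handful of full-bandwidth streams, while 10 GbE comfortably supports a multi-camera production, which is the practical reason for the 10 Gigabit recommendation.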

Adapting to Industry Changes with Flexible IP Technologies

The shift towards technologies like ST 2110 and NDI is driven by their potential to create more dynamic, scalable, and high-value production environments. As the industry adapts, the flexibility of IP-based solutions becomes increasingly critical.

IP greatly enhances remote production capabilities, allowing broadcast teams to manage and coordinate productions from multiple locations and reducing the need for extensive on-site personnel and equipment. This shift cuts down on logistical costs and enables a more agile response to changing production requirements.

Moreover, integrating ST 2110 or NDI into broadcast infrastructures is also a strategic move towards future-proofing. These technologies are designed to accommodate future video and audio technology advancements, including higher resolutions, emerging media formats, and immutable software infrastructure. By embracing these standards and systems now, organizations are better prepared to adapt to new trends and innovations, ensuring their systems remain relevant and highly functional in the evolving media landscape.

In conclusion, practical integration into existing systems can unlock unprecedented flexibility and efficiency for broadcasting professionals familiar with the theoretical aspects of SMPTE ST 2110 and NDI. By focusing on proper network infrastructure, synchronization, and compatibility, broadcasters can harness the full potential of these IP-based technologies to revolutionize their production workflows, making broadcasts more adaptable and future-ready. As the industry continues to evolve, embracing these changes will be key to staying competitive and meeting the increasingly complex demands of audiences worldwide.


SDI – The Backbone of Broadcast

Welcome to Our “Future of Broadcast Infrastructure Technology” Series

Dive into the heart of innovation with us as we embark on a journey through the evolving world of broadcast infrastructure technology. This series is a window into the dynamic shifts shaping the industry’s future, whether you’re a seasoned professional or a curious enthusiast.

A Journey Through Time: The Evolution of Broadcast Technology

Imagine a world where the magic of broadcasting was a novel marvel — that’s where our story begins. Guglielmo Marconi’s pioneering radio transmission in 1895 set the stage for a revolution in communication. Fast forward from the fuzzy black-and-white imagery to today’s ultra-sharp high-definition videos. The milestones have been nothing short of extraordinary. Remember the times of meticulously cutting analog sync cables? Contrast that with today’s systems, which are nearing a self-timing brilliance. The leap from analog to digital has been a game-changer, enhancing the quality and reach of broadcast content. Now, as we edge closer to IP-based systems and other emerging tech, we’re witnessing the dawn of a new era. But where does this leave the trusty SDI?

Demystifying Serial Digital Interface (SDI)

For years, SDI has been the backbone of broadcast facilities around the globe. But let’s break it down: What is SDI, really? Birthed by the SMPTE 259M standard in 1989, SDI is the reliable workhorse for transmitting pristine digital video via coaxial cable, ensuring signal integrity with low-latency, lossless delivery. Evolving over the decades, SDI now supports 4K workflows, thanks to SMPTE ST 2082, carrying 12 Gbps signals and 2160p resolution at 60 fps. Yet, the real question is whether SDI can keep pace with the industry’s insatiable appetite for growth and innovation.

SDI: The Past, Present, and Future in Broadcasting

SDI’s legacy of reliability and quality is undisputed. Its simplicity has made high-quality broadcasting an achievable standard. However, the relentless march of progress doesn’t play favorites, and SDI has little room to evolve beyond its current capabilities without significant technological breakthroughs. While transitioning to IP-based or cloud-based workflows becomes increasingly common, SDI’s relevance remains strong. But with scalability as its Achilles’ heel, SDI’s future is a hot topic of debate. Considering the economics of cabling, from coaxial to CAT6A to fiber, we’re at a crossroads where cost and technology intersect, guiding us to what’s next.

On the Horizon: What’s Coming Next

This conversation is just the beginning. In the next installments, we’ll delve into the promise of IP-based systems like ST 2110, the transformative role of NDI in live production, and the groundbreaking potential of technologies like 4K/8K, HDR, and cloud workflows.

We’ve only started peeling back the layers of the broadcasting world’s future. Join us as we navigate through the technologies, carving out the path forward, their implications for the industry, and what these changes could mean for you. Look out for our next installment in April and engage with us. Your insights, inquiries, and perspectives are the pulse of this exploration.

Join the Dialogue

Your voice is integral to our series. Share your thoughts, spark a discussion, or simply ask questions. We’re here to delve into the future together. Follow our journey, contribute to the narrative, and let’s decode the complexities of broadcast infrastructure technology as one.


Media Workflow Management in a Remote Editing Era

The digital landscape is continuously evolving. With recent shifts towards remote work, the industry has entered the remote editing era in which short turnaround times and access to a global talent pool are the norm. The traditional studio environment has been reimagined. But this transformation is not without its challenges. Managing media assets and orchestrating efficient workflows is essential, or productions can get bogged down with inefficiencies and reworks. Effective media workflow management is critical in the remote editing era to compete in an industry expecting quick turnaround and high-quality content.

At its core, media workflow management involves overseeing the entire lifecycle of a media asset from ingest to final distribution. Effective media workflow management requires that each step be meticulously mapped. The objective? To streamline processes, ensure consistent quality, and deliver media content efficiently, regardless of where editing team members are located.


The Role of Media Assets in Workflow Management

Any finished video is built from the many assets that go into creating the content. These building blocks include raw footage, audio, in-process editing files, special effects files, graphic and branding elements, and polished videos ready for distribution on a wide range of platforms and formats. These files are precious, yet too often, they are underutilized. Effective asset management ensures that these media files are cataloged, retrievable, and ready for processing.

In a remote editing setting, this becomes even more critical. When creative teams work in an inefficient and fragmented asset management and storage system, efficiency and quality take a hit. Teams need real-time access to assets without the latency or bottlenecks that can hamper creativity.


Dissecting the Media Workflow Process

Your media assets have a project lifecycle that runs from pre-production and production through post-production, transcoding, QC, distribution, and beyond. A comprehensive media workflow process is a roadmap that guides a media asset through its lifecycle. Critical points in the workflow include:


Acquisition and Ingest

Every project begins with acquisition and media ingest, where raw content is imported into the system. This phase requires tools that can handle vast amounts of data swiftly and seamlessly, especially when dealing with high-definition or even 8K content. The best systems will enhance metadata at ingest, adding information about location, format, film dates, and even looking inside for faces, objects, speech-to-text, and other attributes.



Editorial

Once ingested, the editorial phase kicks in. This phase is a dynamic and creative workflow stage from video editing, visual effects, animation, and motion graphics to photography, audio editing, color-grading, and finishing. Different creatives may be working with different apps. They need to be able to collaborate effectively and share files seamlessly. In today’s remote era, cloud-based tools and platforms allow editors to collaborate in real-time, annotate, and share feedback without being in the same physical space. Bottlenecks in this phase result in lost time and expensive reworks and can pull creators out of the flow.

Media management steps into the limelight at this critical content creation stage, ensuring that the processed assets are organized, backed up, and stored with metadata tagging. This optimization is crucial for easy retrieval, version control, and updates. In remote editing, it’s not just about storage but accessibility. Cloud-based asset management solutions allow teams to pull or push content irrespective of their geographical location.


Transcoding and Distribution

Finally, the media distribution phase takes center stage. Once content is polished and ready, it’s dispatched to various platforms – be it streaming services, broadcast channels, or digital platforms. Ensuring content reaches the right platform in the correct format in a fragmented media consumption world is paramount. Broadcast outlets, OTT services, and social media platforms are as numerous as they are diverse. Viewers are accessing content on every conceivable device. Gone are the days when media distribution was linear. Today, it’s multi-directional and multi-platform. As media is edited and refined remotely, it must also be distributed to a global audience. Media workflow management ensures that distribution is timely, format-compliant, and aligned with the target audience’s consumption habits.


Archiving and Repurposing

The value of your assets shouldn’t disappear after distribution. An effective media management system will support extending the life of your media files and allow you to repurpose valuable content.


Integrating Workflow Management in the Remote Era

With teams now dispersed, robust workflow management is the glue that holds the process together. It’s not just about individual tasks but orchestrating them to work harmoniously. Whether it’s ensuring that media assets are easily accessible to editors across the globe or streamlining feedback loops, workflow management tools must be agile, cloud-native, and intuitive.

The remote editing era has redefined the boundaries of media creation and distribution. It’s dismantled geographical barriers but introduced new challenges in collaboration and accessibility. Amidst these shifts, media workflow management stands as the backbone, ensuring that from media ingest to distribution, every step is executed flawlessly.

Organizations can thrive in this new landscape by integrating tools and solutions that cater to media asset management, processing, and distribution. As the adage goes, ‘change is the only constant.’ The key to navigating this change in the media world is a robust, flexible, and efficient media workflow management system.


Contact Us Today

CHESA has a passion for the nuances of media workflow integration. We have strong partnerships with the best-of-breed technology providers in the creative IT industry. We take a holistic approach in recommending solutions that bring real value and benefits to your organization rather than selling technology for technology’s sake. Our team comes to the table with deep knowledge of the tools and vendors and is ready to address the demands and requirements of your environment and advance your business goals. Contact us today to find out more about how automating workflows in the Adobe ecosystem can bring greater efficiency and free up your creatives for their very best work.


Multi-Faceted Media Systems Integration

On the journey from inspiration to a finished video, your creative team will have their hands on quite a bit of technology. There are many specialized, robust software solutions for every step, from production to postproduction to transcoding and distribution. You may have several capture devices and may have unique ingest needs. Everyone on your team works with media files, so a good Media Asset Management (MAM) solution is essential. Team members may be spread all over the globe. Some are on location, others in on-prem studios, and others work from home.

Many small and medium-sized video production teams find that they have loosely connected a hodgepodge of software, hardware, and media storage solutions into a fragile, overly complex system. A system that has evolved this way is often inefficient and easily broken.

The organic and haphazard adoption of tools may have left your team with ineffective, poorly documented workflows. These workflows may have evolved without ever being designed for efficiency, creativity, or high performance. With so many innovations on the market promising to transform your editing process, you may wonder how to get the greatest efficiency and quality. It may be time to take a good look at multifaceted systems integration.

When properly engineered, these disparate solutions can work seamlessly as one. Multifaceted media system integration is the process of combining all these tools into one system. The result is a powerful single-source content supply chain.

When you commit to multifaceted media systems integration, the first step will be to get a picture of the current hardware and software, all the locations where files are needed, what team members require access, what software applications are used in their work, and the related hardware at each step of preproduction, production, postproduction, and file distribution.

A system integrator will partner with you to dig deep into an analysis of the system architecture and assess how the components work together. While many innovations are available, there is often a need to continue preserving and using valuable legacy systems. A customized system integration strategy will allow you to implement new technologies while still benefiting from legacy systems.

Workflow analysis is also essential. Once workflow issues have been identified, the workflow engineer can design fresh solutions that will bring your team the greatest efficiencies and free up time and energy for creative work. Once the needs have been assessed, the next step is architecting and deploying systems that incorporate all essential aspects. The result is a reliable, properly integrated system.

Investing in a media system and single-source content supply chain integration brings operational efficiencies to your team, including automation, streamlined workflows, improved access to assets, powerful search capabilities, and better collaboration and sharing.


Advantages of Single Source Content Supply Chain Integration

A content supply chain is the system to plan, produce, and deliver content. Integration into a single source brings tangible value and benefits to any organization. When your infrastructure aligns with the content your customers want, your team will create high-quality videos efficiently.

  • Single-source content supply chain integration improves efficiency by reducing the time it takes to produce and distribute content.
  • Your creative team will spend less time searching for assets and more time creating content.
  • Having a single source of content makes it easier to manage workflows.
  • Versioning control ensures that everyone is working on the same version of the media files, reducing delays and improving content production speed.
  • Single-source content supply chains can reduce storage needs by eliminating the need for multiple copies of the same content.

Effective multifaceted media systems link the tools so that these many different components function and act as a single coordinated solution. Creative applications can be set up to interact with other software, hardware, network, storage, and media asset management systems to facilitate and streamline workflows.


Contact Us Today

CHESA can evaluate your current setup and ensure the proper infrastructure is in place to meet your needs and deliver your product with quality, speed, and efficiency.

CHESA has a passion for the nuances of media workflow integration. We have strong partnerships with the best-of-breed technology providers in the creative IT industry. We take a holistic approach in recommending solutions that bring real value and benefits to your organization rather than selling technology for technology’s sake. Our team comes to the table with deep knowledge of the tools and vendors and is ready to address the demands and requirements of your environment and advance your business goals. Contact us today to learn more about how a multifaceted media systems integration can enable your creative team to create high-quality videos efficiently.


Understanding the Epic: A Closer Look at Agile Software Development

In the world of agile software development, there’s a term we use a lot – “Epic.” An Epic for agile software development is much like a novel, a substantive body of work, but in our world, it’s made up of smaller, easier-to-digest pieces known as ‘user stories.’ Picture it like chapters in a book, all contributing to the whole story. Some of the key characteristics of epics are:

  • They stretch over numerous iterations and sprints, just like a novel stretches over many chapters.
  • Epics serve as a roadmap, helping to organize and prioritize the product backlog.

The Spotlight on Our Epic: Building an End-to-End Interoperable Master Format (IMF) Workflow

Now, the epic we’re focusing on here is all about building a fully functional platform to oversee an end-to-end Interoperable Master Format (IMF) workflow. To those outside the industry, the IMF is a SMPTE standard (the ST 2067 family) for the mastering and distribution of digital motion pictures and television programs. By bringing an end-to-end IMF workflow to life, we can deliver some incredible benefits:

  • Smoothing out the production processes, much like a well-oiled machine.
  • Boosting efficiency so that everyone can do more with less.
  • Cutting down costs, who doesn’t love that?

Adding the Air Traffic Control (ATC) Layer: Taking Command of Production

We know how important it is to have control over all production-related work. That’s why we suggest including an Air Traffic Control (ATC) layer – think of it as the command center for your production process.

Our epic story centers around an IMF work process, where we’ve identified three main characters, or as we say in the business, ‘user personas.’ With this incredible system in place, our users can:

  • Set off automated events, giving them a complete Interoperable Master Package (IMP) for the next steps of validation and processing.
  • Utilize an alternate workflow where individual deliverables required for a full IMP are treated as ingredients in a “recipe.”
  • Trust in the system to assemble a standard IMP from these ingredients once all are received.
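The "recipe" idea above boils down to a completeness check: the system assembles a standard IMP only once every required ingredient has arrived. Here's a minimal sketch of that gate; the ingredient names are hypothetical examples, not a delivery specification.

```python
# Minimal sketch of the "recipe" workflow above: an IMP is assembled
# only once every required deliverable (ingredient) has been received.
# Ingredient names are hypothetical examples for illustration.

RECIPE = {"video_mxf", "audio_mxf", "cpl_xml", "pkl_xml", "assetmap_xml"}

def ready_to_assemble(received: set[str]) -> bool:
    """True when all recipe ingredients for a standard IMP are present."""
    return RECIPE <= received  # subset test: every ingredient received

received = {"video_mxf", "audio_mxf", "cpl_xml"}
print(ready_to_assemble(received))            # False: still waiting
received |= {"pkl_xml", "assetmap_xml"}
print(ready_to_assemble(received))            # True: assemble the IMP
```

In a real orchestration layer this check would fire on each ingest event, triggering the automated assembly and the downstream validation steps once the set is complete.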

Boosting User Interactivity and Improving Communication with Notification Mechanisms

Now, our epic for agile software development wouldn’t be complete without keeping our users in the loop. That’s why we’ve included notification mechanisms for every event, keeping both users and the system workflow orchestration layer in sync.

Our users will have the power to interact with the ATC layer in a number of ways:

  • They can create, manage, and keep an eye on activities happening in the workflow process.
  • They have the power to inform a go/no-go decision at any stage in the process.
  • They can rely on the automated process to integrate fully qualified IMP-S (supplemental) files into the original IMP.
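As a rough illustration of the go/no-go idea (the stage names and API shape here are assumptions, not the actual ATC interface), a gated workflow might look like:

```python
# Hypothetical sketch of go/no-go gating in an ATC-style orchestration layer.
# Stage names and the API shape are illustrative assumptions.

class Workflow:
    def __init__(self, stages):
        self.stages = list(stages)
        self.index = 0
        self.halted = False

    @property
    def current_stage(self):
        return self.stages[self.index]

    def decide(self, go):
        """Apply a user's go/no-go decision at the current stage."""
        if self.halted:
            raise RuntimeError("Workflow already halted")
        if not go:
            self.halted = True          # a no-go stops the process here
        elif self.index < len(self.stages) - 1:
            self.index += 1             # a go advances to the next stage

wf = Workflow(["ingest", "validate", "package", "deliver"])
wf.decide(True)   # go past ingest
wf.decide(True)   # go past validate
wf.decide(False)  # no-go at package: halt for operator review
```

The point is simply that a human decision can interrupt the pipeline at any stage, which is exactly the control the ATC layer is meant to provide.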

End-Point Deliverables: The Balance of Automation and Manual Requests

Just as every book has an ending, our process too has end-point deliverables. These can be manually requested for predefined, one-off deliveries or produced automatically once certain upstream success factors have been achieved.

We do require strict adherence to a predefined studio delivery package standard, influenced by the likes of Netflix and Amazon, but don’t worry – we’ve made sure it’s easy to follow.

Transparent Troubleshooting: Addressing Failures through ATC

We’ve all experienced hiccups in a process, and our system ensures that if any issues occur, they’re visible through the ATC user interface. Users can easily initiate resubmission or cancellation of a given work process directly from the ATC. Picture it as a “command-Z” option: something went wrong? No problem, let’s take a step back and try again.

Bringing it All Together

In this epic journey, we’ve brought together a wide array of processes, tools, and user interactions. We’ve built a platform that is designed to streamline and simplify the complexities of digital motion picture and television program production.

Just like the chapters of a book, every element in this epic has its unique role, contributing to the grand narrative of increasing efficiency and reducing costs. The ATC layer, the automation, and the user interface all coalesce to deliver a seamless experience, keeping users in control and informed every step of the way.

This story isn’t just about the nuts and bolts of Agile Software Development or about the technicalities of an IMF workflow. At its core, this epic is a story about people – the users who interact with the system, the teams who manage the workflow, and the audience who will ultimately enjoy the results of a smoother, more efficient production process. And that’s the beauty of it: our work may be technical, but it’s all about creating a more engaging, personable, and effective experience for everyone involved.

Digital Media Technology

Improving Creative Workflows through Broadcast Systems Integration

Live shows. On-location news crews covering a big story. Live broadcasts of sporting events with millions of viewers globally. Large creative teams on location and in studios. Video coming in from multiple cameras at every angle. Impactful graphics and special effects. This is the world of broadcast media production, where viewers and advertisers demand the highest quality. Broadcast systems integration is essential in this environment to ensure your production team operates efficiently.

Modern consumer demand stretches broadcast production teams to the limits, and efficiency and seamless collaboration are necessary for producing high-quality video content rapidly and getting it out to viewers. In broadcast media production, agile workflows are not a luxury; they are crucial to getting the job done well.

Streamlined Asset Management for Broadcasting Teams

Streamlined asset management is the key to bringing it all together for broadcast teams. It is not uncommon for live productions to have crews on location while postproduction team members are offsite in the studios and at home, poised to take uploaded content and quickly create videos ready for immediate distribution. While a huge volume of assets is being assembled and ingested, the teams must also get their hands on a rich archive of existing assets. All MAMs will have basic features like metadata management, search and retrieval, and version control.

The broadcast team relies on its media asset management (MAM) system. A MAM ensures that assets are easy to find and retrieve, but it does more than manage content. The right MAM will streamline workflows and allow broadcast teams to break through the challenges of this sector to create impactful content on short timelines. Broadcast teams need an asset management system that is agile and includes workflow orchestration tools. Optimized workflows that provide immediate access to assets enable faster sharing of media files with all creative team members. These efficiencies improve the finished video and expedite related content creation like highlight reels, interviews, and news clips. A MAM that can handle the demands of broadcast media will include:

  • Powerful metadata enrichment features to allow easy search and retrieval automated by AI and machine learning.
  • Robust security to manage user access to the media files and to protect against cyberattacks as well as unauthorized access and use.
  • Seamless broadcast systems integration. Your production team will be most efficient and creative when your MAM integrates smoothly with their video editing software and other tools.
  • Built-in review, approval, and collaboration tools.
  • Hybrid system capabilities. Because broadcast studios often have significant on-premises investments, a hybrid solution that brings cloud-based media asset management while leveraging on-prem assets, equipment, and team members can be an effective solution. This capability gives broadcast production companies the power to harness the power of their on-prem investments while ensuring access for remote creatives, a growing segment of the production industry’s talent pool.
  • Automation, a game-changer that lets creatives do more in less time while ensuring consistency across time-consuming, repetitive processes, from ingest, metadata enrichment, and file validation through to the final distribution of finished video content.
  • Multi-platform distribution support. Because many broadcasters distribute through multiple channels simultaneously, the MAM must support delivery to a variety of devices and platforms, work with the most prominent players in social media like Facebook, Instagram, YouTube, TikTok, and Twitter, and serve nonlinear platforms like video on demand and OTT.
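To illustrate the metadata-driven search that underpins these features, here is a deliberately minimal sketch; a production MAM layers AI enrichment, access control, and versioning on top of this idea:

```python
# Minimal sketch of metadata-driven search in a MAM, assuming a simple
# in-memory catalog. Asset ids and tags are illustrative only.

assets = [
    {"id": "clip-001", "tags": {"basketball", "highlight", "2023"}},
    {"id": "clip-002", "tags": {"interview", "coach", "2023"}},
    {"id": "clip-003", "tags": {"basketball", "interview", "2022"}},
]

def search(catalog, *required_tags):
    """Return ids of assets whose metadata contains every requested tag."""
    wanted = set(required_tags)
    return [a["id"] for a in catalog if wanted <= a["tags"]]

hits = search(assets, "basketball", "interview")
```

The richer the metadata (whether entered by loggers or generated by AI), the more precise this kind of retrieval becomes, which is why metadata enrichment heads the list above.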

With so many MAMs on the market, it’s challenging for companies to match a solution with their unique needs. One struggle established broadcast media companies face when upgrading their MAM is how to bring it all together. When Kroenke Sports and Entertainment needed to modernize their MAM, they required a partner who understood the value of their legacy content and on-prem investments. Kroenke Sports owns several sports franchises and broadcasts collegiate and high school sports in their markets. CHESA worked with Kroenke to implement the IPV Curator MAM system, which ensures consistency in content management and increased post-production efficiency. CHESA workflow engineers worked with the team to design effective proxy-based workflows and integrate them with other software and systems. Their media management system is robust enough to meet their current needs while being designed to adapt to their future ones.

Contact Us Today

CHESA has supported broadcasters in identifying and deploying the solutions they need to produce and distribute high-quality content efficiently. CHESA has a passion for the nuances of media workflow integration. We take a holistic approach in recommending solutions that bring real value and benefits to your organization rather than selling technology for technology’s sake. Our team comes to the table with deep knowledge of the tools and vendors and is ready to address the demands and requirements of your environment and advance your business goals. Contact us today to find out more about how a Media Asset Management Platform can foster collaboration at your organization.

Digital Media Technology

Why Switch to SMPTE ST 2110

Media over IP solutions, such as SMPTE ST 2110 and NDI, offer numerous benefits to media professionals, including increased flexibility, scalability, and cost savings. ST 2110 is a suite of standards that represents a shift towards a more future-proof infrastructure for the production and distribution of media content, while NDI is a software-based protocol that allows video systems to share video, audio, and metadata over IP networks in real time.

SMPTE ST 2110 is the preferred choice for major news organizations and broadcasters because it offers improved quality and reliability, while NDI is better suited to smaller groups that need flexibility, rapid deployability, and cost consciousness. However, both solutions come with challenges. SMPTE ST 2110 depends on manufacturers interpreting the Networked Media Open Specifications (NMOS) consistently, and inconsistent interpretations can cause conflicts in environments with equipment from multiple manufacturers. Additionally, not all manufacturers offer native SMPTE ST 2110 options for their equipment, which can lead to costly integrations. NDI, on the other hand, compresses the video being fed into the environment, making it less attractive to large broadcast groups. It is also less reliable under high network loads and requires at least 1 Gbit networking on a stand-alone network.
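Some back-of-the-envelope arithmetic shows why the compression trade-off matters: uncompressed video at broadcast resolutions simply doesn’t fit on a 1 Gbit link. The figures below are a rough sketch that ignores ST 2110-20 packet overhead and ancillary data:

```python
# Back-of-the-envelope bandwidth for an uncompressed ST 2110-20 video
# essence (active picture only; packet overhead and blanking ignored).

def video_bitrate_gbps(width, height, fps, bits_per_pixel):
    """Approximate uncompressed video bitrate in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

# 10-bit 4:2:2 averages 20 bits per pixel (10 for luma, 10 shared chroma).
hd = video_bitrate_gbps(1920, 1080, 60, 20)   # roughly 2.5 Gbit/s
uhd = video_bitrate_gbps(3840, 2160, 60, 20)  # roughly 10 Gbit/s
```

At roughly 2.5 Gbit/s for HD and 10 Gbit/s for UHD, uncompressed ST 2110 video calls for 10/25 Gbit infrastructure, which is exactly the gap NDI’s compression is designed to close.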

Overall, transitioning to a Media over IP solution can help organizations streamline their workflows, collaborate more effectively, and create high-quality video content at a lower cost. SMPTE ST 2110 and NDI are both great options for achieving this, and it’s important to weigh the benefits and challenges of each solution to determine which is the best fit for your organization’s needs.

However, both SMPTE ST 2110 and NDI have their limitations and challenges. SMPTE ST 2110 can be complex to implement and may require significant investment in infrastructure and equipment. Additionally, there can be compatibility issues between equipment from different manufacturers. On the other hand, NDI may not be suitable for high-demand applications, such as live broadcasts, due to compression and network bandwidth limitations.

Despite these challenges, Media over IP solutions are the way of the future for the media industry. As technology continues to evolve and improve, these solutions will become more accessible, affordable, and reliable. As such, media professionals and organizations should consider the benefits and drawbacks of each solution when planning their transition to a Media over IP infrastructure.

There are several reasons why people should consider making the switch to SMPTE ST 2110 now rather than waiting:

  1. Improved Efficiency: SMPTE ST 2110 provides more efficient data transport and processing. With SMPTE ST 2110, you can separate audio, video, and data into separate streams, allowing for greater flexibility in routing, processing, and managing different streams of content. This can significantly improve overall workflow efficiency and make it easier to manage and manipulate content.
  2. Interoperability: One of the primary benefits of the SMPTE ST 2110 standard is that it allows for greater interoperability between different systems and devices. By using standardized protocols and interfaces, it becomes much easier to integrate different systems and devices, which can help streamline workflows and reduce costs associated with custom integration.
  3. Future-Proofing: By adopting SMPTE ST 2110 now, you can ensure that your systems and workflows are future-proofed for new developments and advances in the industry. With the rapid pace of technological change, it’s important to have systems that can adapt and evolve as new technologies emerge. SMPTE ST 2110 provides a flexible and scalable architecture that can accommodate future advancements in the industry.
  4. Cost Savings: By implementing SMPTE ST 2110, you can potentially save money in the long run by reducing the need for custom integration and simplifying workflow management. Additionally, the increased interoperability and flexibility of SMPTE ST 2110 can help reduce costs associated with equipment and maintenance.
  5. Industry Standard: SMPTE ST 2110 is quickly becoming the industry standard for IP-based video and audio transport. By adopting this standard, you can ensure that your systems are compatible with other systems and devices that are also using the same standard. This can help increase collaboration and facilitate greater interoperability between different organizations and companies.
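The first point, separating essences into independent streams, can be pictured with a small sketch (the routing API below is an illustrative assumption, not a real ST 2110 or NMOS interface):

```python
# Illustrative sketch of the ST 2110 idea of carrying video, audio, and
# ancillary data as independent streams that can be routed separately.
# The stream list and route() helper are assumptions for this example.

essences = [
    {"essence": "video", "standard": "ST 2110-20", "dest": "replay_server"},
    {"essence": "audio", "standard": "ST 2110-30", "dest": "audio_console"},
    {"essence": "anc",   "standard": "ST 2110-40", "dest": "caption_inserter"},
]

def route(streams, essence, new_dest):
    """Re-route one essence without touching the others."""
    for s in streams:
        if s["essence"] == essence:
            s["dest"] = new_dest
    return streams

# Send only the audio to a remote mix position; video and ancillary
# data keep flowing to their original destinations.
route(essences, "audio", "remote_mix")
```

With SDI, all three essences travel embedded in one signal; the ability to re-route each one independently is what makes ST 2110 workflows so flexible.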

SMPTE ST 2110 offers many benefits for all types of content producers from live broadcast media organizations to corporate and government public affairs and media divisions. As the industry continues to move towards using IP networks for the production and distribution of media content, SMPTE ST 2110 will become increasingly important for media professionals looking to stay competitive in the changing landscape of the media industry.

These are convincing reasons to make the switch to SMPTE ST 2110, but the question is when should we do it? Simply stated, the answer is now. There are many compelling reasons why organizations should consider making the switch to SMPTE ST 2110 now rather than waiting. By adopting this standard, you can improve workflow efficiency, reduce costs, future-proof your systems, and increase interoperability with other systems and devices.

In conclusion, Media over IP solutions are rapidly becoming the preferred choice for media professionals. SMPTE ST 2110 and NDI are two popular options that offer unique benefits and challenges. While SMPTE ST 2110 may be better suited for large broadcasters and news organizations, NDI may be a better fit for smaller groups that prioritize flexibility, rapid deployability, and cost-consciousness. Ultimately, the decision between these two solutions will depend on each organization’s specific needs, budget, and goals. If you have questions about either or would like to set up a call to figure out which is best for your organization, you can count on CHESA to help you navigate these often challenging decisions. Give us a call. We are happy to help.

Adobe Blog Digital Asset Management MAM Technology

JAVA EXPLOIT – vulnerability with Log4j

Continue Following this Blog Post for Live Updates!

On Friday, December 10, 2021, CHESA received notice of a vulnerability in Log4j. “Log4j is a Java-based logging audit framework within Apache. Apache Log4j2 2.14.1 and below are susceptible to a remote code execution vulnerability where a remote attacker can leverage this vulnerability to take full control of a vulnerable machine.” CHESA Support is evaluating all environments for any vulnerabilities related to Log4j. We have reached out to our vendors to gather information on whether their software is affected by this vulnerability.
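The quoted advisory reduces to a simple version test. As a hedged sketch (it implements only the “2.14.1 and below” rule quoted above and ignores the later backported fixes such as 2.12.2 for older Java lines), one might triage reported versions like this:

```python
# Rough triage helper: flag Log4j 2.x versions at or below 2.14.1 as
# potentially vulnerable to CVE-2021-44228. This only encodes the simple
# "2.14.1 and below" rule quoted above; it is not an official scanner.

def parse_version(v):
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version):
    """True for Log4j 2.0 through 2.14.1 per the simple rule above."""
    v = parse_version(version)
    return (2, 0) <= v <= (2, 14, 1)

flags = {v: is_vulnerable(v) for v in ["2.14.1", "2.15.0", "1.2.17"]}
```

Note that Log4j 1.x is outside the scope of this CVE, which is why several vendor statements below call out 1.2.x installs as unaffected.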

The following vendors have identified vulnerabilities or provided feedback. If there is a vulnerability in your environment, CHESA support will open a case under your service contract to address it.

Amazon Web Services – AWS blog: Using AWS security services to protect against, detect, and respond to the Log4j vulnerability | Amazon Web Services. December 20, 2021: The blog has been updated to include Amazon Route 53 Resolver DNS Firewall info.

Archiware – P5 and Pure are not affected by the Java Log4j vulnerability. P5 and Pure do not use any Java code, which also rules out use of the Java Log4j library, so they are not affected by the Log4j vulnerability. Both products are based on NaviServer, which is written in the C programming language.

Aspera – Aspera does not use Log4j 2. The Java applications use log4j-over-slf4j, which exposes the same API as Log4j but is a different software component. One part of the Java stack does use Log4j 1: the trapd component, when it interfaces with hdfs:// type storage. Not many customers use HDFS, and since this is Log4j 1 it is also not vulnerable.

Avid – December 20, 2021 Update: Avid is aware of the recently reported Apache Log4j RCE vulnerability.
CVE-2021-44228 – Please review the following document for more information, and follow Avid Best Practices for isolating your Avid systems from the internet.

None of the Docker images that we currently distribute as part of Accurate Player or Accurate.Video include any version of Log4j. Our product, Accurate Player Vidispine Edition (APVE), did have an issue with one of its renditions, but this has been fixed and rolled out.
Cantemo, Vidispine, and any other components are not directly impacted by this vulnerability. In Cantemo, the following components use Java: Elasticsearch (no remote code execution issue); Rules Engine 3, i.e. Tomcat/Activiti (using an older Log4j that is not affected); and Vidispine and its components such as Solr (no remote code execution issue). We will still release upgrades for all Portal versions under maintenance with an upgraded Elasticsearch, and potentially automatic configuration changes to other components. Vidispine’s analysis here

If you want an immediate fix, you can apply configuration changes to Elasticsearch here, and to Vidispine+Solr (see the Vidispine support message above).

Dalet – Flex: Flex itself is not affected; however, two third-party services are. Flex Java services and apps use SLF4J with Logback, not Log4j 2, and are not affected. The third-party services exposed to this vulnerability are Elasticsearch and Logstash. This documentation explains more about the Log4Shell vulnerability in the context of these two services. Entire Security Bulletin and Remediation Instructions here

FileCatalyst – At this time, FileCatalyst products are not impacted by this vulnerability; see FileCatalyst for the latest guidance.

Iconik – We determined that we had internal components running the vulnerable version of Log4j, but with a configuration that most likely made them not vulnerable (a recent enough Java with default settings that prevented it from executing any malicious code). We did, however, proceed to patch the vulnerable software to be doubly sure. We have also investigated our logs and have seen no indication of successful exploits, though we do see active exploitation attempts from various sources.

IPV – Please rest assured that the use of Solr (read more here) in Curator is not exposed publicly on Curator systems. However, we do understand that the vulnerability is concerning, so we’re recommending a patch to further mitigate any risk. For more info
You will need to do the following: Edit the Solr command file found in [Curator Server InstallationPath]\Server\Solr\bin\ by adding the following line: set SOLR_OPTS=%SOLR_OPTS% -Dlog4j2.formatMsgNoLookups=true
Following this, restart Curator Server. To confirm the setting has been changed successfully, check the Solr Admin page on your Curator Server machine (located at: http://localhost:8983/solr/#/ ) to find the following under the JVM Args heading: “-Dlog4j2.formatMsgNoLookups=true”

Levels Beyond – On December 10, 2021, a Log4j security vulnerability known as CVE-2021-44228 was brought to the attention of our TechOps and SecOps engineers. After a full investigation of REACH ENGINE code, packages, systems, and environments, completed shortly after notification, it was determined that none of the Log4j libraries currently leveraged are impacted by the reported vulnerability. We at REACH ENGINE take security very seriously; we continually monitor the health of our code libraries and respond rapidly to any information about risk to our customers or their business. For now, all REACH ENGINE code packages are unaffected, but we will remain vigilant and follow the issue appropriately.

North Shore Automation – NSA Software: NSA does not use Log4j in any of our software. NSA VM deployments: a previous, unaffected version (1.2.x) was installed as part of the base CentOS install on some older NSA VMs; the vulnerability was introduced in v2.x, so these are not impacted. The old version can safely be removed from the VMs without affecting any of the software running on them with the following command: sudo yum remove log4j

Open-E – In order to ensure the highest levels of security for our users, both Open-E JovianDSS and Open-E DSS V7 have been checked for any possible vulnerabilities related to the Log4Shell exploit. Although our products’ core systems don’t contain the affected Log4j Java library, we’ve conducted multiple tests to confirm that the third-party management tools (which run in cases where the related hardware is installed on the server) are not affected.

Prime Stream – PENDING

Quantum and CatDV – Read the bulletin here. Quantum is aware of the recent Common Vulnerabilities and Exposures (CVE) database entry regarding the open-source Apache Log4j utility and is actively monitoring the issue and evaluating its impact on Quantum products.

Scale Logic – PENDING

Signiant – Please note that we have investigated the Apache Log4j security vulnerability (CVE-2021-44228) and confirmed that NONE of the Signiant products are exposed or impacted by this vulnerability.

Studio Network Solutions – At this time we have not discovered any versions of our products that are vulnerable to this exploit. Our Statement

Telestream – Telestream has determined that the following products are not affected: Vantage, ContentAgent, Aurora, Cerify, Vidchecker, CaptionMaker, MacCaption, GLIM, Switch, Wirecast, Wirecast Gear, ScreenFlow, WFM, PRISM, Signal Generators, MPEG Analyzers, DIVAView, MassStore, iVMS, iVMS ASM, InspectorLive, Cricket, Geminus, IQ Media Monitor, Surveyor TS, SurveyorABR Active, PLM, cVOC, cPAR, Sentry, Sentry Verify, Medius, Consul, and our Telestream Cloud Services. For the products DIVACore, DIVAConnect, Kumulate, SurveyorABR Passive, and Inspect 2110, contact  for more information.

If you have any questions, please open a case at or call the support line at 410-705-6286.


Marina Tucker – Director of Support Services and Customer Success


Adobe Blog Digital Asset Management MAM Technology

How and Why CHESA Became an Adobe Video Solution Partner

The primary purpose of a solution architect’s work is to help clients use technology to their advantage. Given the prevalence of Premiere Pro and After Effects in our industry, I was already very familiar with Adobe’s video editing software applications and regularly sought to stay informed regarding changes and advancements in their products. CHESA has been working closely with Adobe for years, and when the opportunity arose to learn more and help CHESA become certified as an Adobe Video Solution Partner (AVSP), I jumped at the chance.

The training Adobe put together to become an AVSP was explicitly designed for systems integrators who regularly help clients smoothly transition their creative content through the many software applications and platforms they use to do good work. A few quick examples include best practices for transitioning sequences between Premiere Pro and Blackmagic Design’s DaVinci Resolve, or transitioning audio tracks between Premiere Pro and Avid’s Pro Tools.

We also explored the best ways to fuse tools like Media Asset Management (MAM) and Digital Asset Management (DAM) systems with Adobe’s software to help companies organize and share their work, always with the goal of keeping their creative teams focused on what they do best. Adobe’s mission in providing this training was to share the best of what they have learned working with their customers. This, in turn, allows Adobe Video Solution Partners to help more end users – creatives, editors, VFX artists, and others – fully leverage their software’s capabilities.

Adobe started us off with baseline training. I went through modules covering a wide range of Adobe’s best practices, including setting up project templates and custom workspaces in Premiere Pro, everyday working practices and common keyboard shortcuts, hardware performance guidelines, balancing sound in projects, and standard delivery methodologies, etc. Each class essentially made sure we understood the basics of the editorial process using Adobe’s software. 

When we progressed to the more complicated modules, which covered advanced topics such as proxy workflows, Adobe Team Projects, and Premiere Pro Productions, that baseline curriculum served as a solid foundation to build upon. Adobe made sure there were no shortcuts to certification, by the way. Tests with proofs were all built in, so Adobe knew “yes, they did the work”. And, because I’m a nerd, I created an Adobe knowledge base for our engineers at CHESA, organizing all of our notes from the certification training. It is now a knowledge repository that will continue to grow, where our engineers can readily find information to support our customers.

As a solutions architect, part of my motivation to dive into the training, and a key part of Adobe’s plan, is to provide customers with more access to expert resources regarding the best ways to use and integrate their tools with other platforms. Now customers can work with certified Adobe Video Solution Partners who can provide a conduit for communication with Adobe’s experts and engineers to solve problems and create even better tools. Certified partners were a missing link between the brilliant teams at Adobe and the incredible creatives in our industry. But, not any longer. Now, teams like CHESA can act as a force multiplier for Adobe and continue to hone our workflow therapy skills. 

I think the industry as a whole is going to benefit markedly from this program as it leads to greater collaboration and innovation. Creatives, media IT, and engineers now have a partner to provide constant feedback directly to Adobe’s teams on what creatives want and need and help refine and fast-track better user experiences.

Adobe’s investment in our industry, via AVSPs like CHESA, shows the level of commitment on their part. It illustrates their awareness of their shortcomings and their desire to share their valuable experience and knowledge to bridge the gaps between them and their customers. They’ve done the work to find systems integrators they can entrust their customers’ workflows to, and have prepared these new partners to dig even deeper into the hard questions that will inevitably help the platforms become better. Adobe knows that sending a client to a consultant or systems integrator without knowing the strength of that integrator’s knowledge of Adobe’s ecosystem is not helpful to the industry or to the success of their platforms. This process has ensured Adobe can be confident that their valued community is in good hands, with partners who can help them get the most out of their software and put unique workflows together to refine and empower their work.

More on the Adobe Video Solution Partner Program:
How can CHESA help me with my Adobe workflow?
The Workflow Show podcast with Adobe regarding the program
CHESA’s Press Release
Adobe’s blog on the Adobe Video Solution Program