Sports Broadcast Learnings from 2020 Inform the Future. And It’s Grand.

Monday Morning Quarterback Sports Video Survey – The Future Looks Bright 

They say hindsight is 2020. As I reflect on the past year and our world of sports, I won’t soon forget the cancellation of games, the job/family/community bubbles we created, and of course how we watched sports. As fans, we generally only think about the play on the fields or courts. We should also remember all of the men and women at the venues who tell the stories, with amazing graphics and videos, that get us excited. What happened to them during the pandemic?  

I had a lot of questions. How did they navigate these new waters we all found ourselves in? And what would a post-pandemic environment need to look like for them to thrive in their jobs? So, I called friends in the team and venue space to discuss what they learned from the challenges of working through the 2020 COVID shutdowns and working in a bubble. 

Sports Teams During COVID-19

Overall, what I learned from these discussions is that most teams were not ready for any kind of work-from-home scenario. A handful of teams had been working with a Media Asset Management (MAM) system, but not necessarily in the cloud, and certainly not as a remote solution. Others had nothing prepared for a work-from-home (WFH) scenario and relied completely on their IT departments. Unfortunately, most of these IT departments were also not equipped to sustain a WFH media production environment. Needless to say, most of us were caught off guard by the rapid changes needed to adapt to our new “normal”. 

So, how did these teams continue to deliver? A lot of creative thinking, intelligent workarounds, and perhaps some unapproved, but socially distanced, hard drive exchanges. Many decided to use TeamViewer or remote desktop software because they were already on those systems. Others went distinctly old school and used the public internet to exchange files through Dropbox, Google Drive, or OneDrive. Not efficient…but cheap. Hey, it was an emergency. 

I spoke to a few teams that were lucky enough to have both centralized storage and a MAM system, which they turned into their own private “cloud” and continued working effectively at home. Through a VPN or RGS login, they had full access to their entire catalog of assets, their NLE, storage, and music. By using a private cloud, they didn’t have to worry about ingress or egress fees from a public cloud provider.  

Fan Engagement More Important Than Ever

Even though sports wasn’t happening, teams still had to keep their social media, marketing, and community engagement going. The fans, man! But with no one in the office, it was much more difficult for those without a MAM system in place to find the assets they needed to give that fan base their fix. It was even worse for those who did not have centralized storage. Their only option was to Slack their co-workers with questions like “who has the blah, blah, blah shot?” I don’t wish that pain on anyone.  

The overwhelming feedback I heard from the teams I spoke with was how essential their IT departments were to getting their work done. Something I’m sure IT has been trying to tell them for years. Whether it was accessing the corporate network remotely, setting up TeamViewer credentials, or simply troubleshooting connectivity issues, the media departments and IT teams found themselves collaborating in new ways throughout 2020.  

When I asked how working from home affected the creative process, the overwhelming response was: “it sucked!” There was no more walking next door to ask someone to take a quick look at a timeline for instant feedback, or bouncing ideas off each other in a weekly creative session. The in-person camaraderie they had with their colleagues had been snatched away, and programs like Teams, Zoom, or Google Hangouts became the norm. More than a few found the process less than productive and missed the interaction of in-person meetings. Incredibly, this didn’t stop these creatives from turning out unbelievable pieces of content that would normally be shown in stadiums or arenas on massive video boards but now had to be focused on their Facebook, Instagram, and Twitter accounts. Big shout out to team storytellers. 

The Future…

So, what does a post-pandemic workflow look like for some of our favorite sports teams moving forward? To be honest, the plan is different for each team, as has always been the case. Some plan to bring everyone back to the team’s facilities to work in person as soon as possible. Others will implement a more flexible environment where WFH still exists, allowing for more work-life balance.   

How were these teams impacted financially by COVID? Budgets are always tough in sports media production, and 2020 brought some new challenges to the table. I asked each team how they handled budgeting during the pandemic. Every single one said that their upper management was amazing and asked what they needed to help get the job done. Keep in mind, no one knew how long this would last, so no team completely overhauled their environment. Many pushed off upgrades or planned projects due to state and local restrictions on allowing anyone into the facilities. Most expect to make do with current systems through the upcoming 2021 seasons, in hopes that 2022 may bring new technology solutions. 

All agreed that not only have they learned a lot personally, but they also learned a lot professionally. 95% of the teams I spoke to are now looking at a longer-term solution for a better remote working environment, either through a managed service via their IT department or an outside vendor who can take on the day-to-day support of such systems, allowing them to focus on being creative and telling us the story. And we do love the story, don’t we?

None of us can wait for our favorite sport to start playing again. The folks who make sure we FANS enjoy our time at their venues cannot wait either. So the next time you go to a game and watch one of those funny videos or amazing player intros, remember the people behind the production. They figured out creative ways for us to stay engaged with our teams and our communities. They helped us forget, just for a few minutes, that we were all stuck at home during the darkest days of 2020, and allowed us to have a bit of good ole sports entertainment.


About the Author:

Doug Price is an award-winning video editor and sales professional. He has spent more than 25 years in the media industry with a focus on sports creative content and media departments. For the past 10 years, Doug has worked directly with sports teams, leagues, venues, and broadcasters across North America to help develop media production efficiency through technology solutions. 


Introducing the Women of CHESA

We are in a new year with new challenges and new aspirations. After closing out 2020, I think we are all in search of a bit of inspiration. I feel incredibly lucky that this inspiration came in the form of two incredible individuals who also happen to be my colleagues.

I’m proud to introduce Marina Blandino and Ashley Williams. Ashley and Marina are the co-founders of Women of CHESA – a community of women empowering women in the media and entertainment (M&E) industry.

Marina has been with CHESA for over 5 years and is our Director of Support Services and Customer Success. She’s also the first woman of color to hold a Director role at CHESA. Ashley has been with us nearly 2 years and is one of our amazing Project Managers.

What’s interesting is they’ve only met in person briefly and for the most part, their jobs don’t intersect often. So, how did this spark of creativity lead us to where we are currently?

The Beginning:

As fellow women at CHESA (Chesapeake Systems), Ashley and Marina were seeking a way to connect with each other, and with other women in the tech space, while in lockdown. That search led them to submit requests to attend a virtual conference for women in technology. CHESA’s CEO, Jason Paquin, encouraged them to invite all female employees to attend, regardless of role (sales, engineering, accounting, etc.). Nearly all said yes.

Ashley and Marina quickly realized they could do more. They realized they wanted to do more. And, their teammates wanted more.

With management’s full support, the Women of CHESA was born.

They started by launching monthly brunches with all their female coworkers. No work talk allowed – this was a time to just connect with each other. We started to share some of our personal likes and dislikes, our success stories and even some of the struggles we encounter. This was a way for all of us to come together and create those crucial supportive relationships that being remote was making more challenging.

CHESA hired two women during COVID. Women of CHESA gave them a way to quickly feel welcome and find allies without ever meeting in person. The group was already bearing fruit.

“This is a journey that we are going on together. And it’s so exciting because we are creating something so positive. A reason why I took my promotion is because I want to make sure I’m on that leadership team making those changes. I have a voice to say we need more women.”

Marina Blandino, Co-Founder, Women of CHESA

Currently, Women of CHESA’s members are all women, but the group counts plenty of male allies.

Big Picture Goals:

    • Target college campuses and help drive awareness of the various careers available in the M&E industry. This will ideally create opportunities to hire from a more diverse talent pool.
    • Partner with other industry organizations that are working towards similar goals. Eventually they want to launch their own mentoring program, but for now plan to support Rise’s North America mentoring program this year.
    • Create a paid internship program to bring more women into our industry and give them the chance to see what a career in M&E could look like.
    • Create a scholarship program to target women before entering university or deciding on a major.

2020 proved that we can be productive while remote. Ideally, we still get face-to-face time, but for the moments in between, Women of CHESA will continue to find ways to connect with each other and maintain that crucial support system.


IT for the Creative Professional – Is NVMe Right for You Right Now?

At CHESA, we like to say that our passion for the bleeding edge of technology helps to keep our clients on the cutting edge. That motto fuels the primary purpose of our work – to help you use technology to your advantage. Sometimes that means you need the latest and greatest products in the marketplace. Just as often, you don’t.

A current trend that we’re starting to see become part of the architecture for possible shared storage solutions is NVMe (Non-Volatile Memory Express). NVMe is a storage protocol designed to provide direct data transfer between central processing units and SSDs over a computer’s PCIe (Peripheral Component Interconnect Express) bus. It offers an alternative to the SATA (Serial Advanced Technology Attachment) and SAS (Serial Attached SCSI) protocols and was designed to address the bottlenecks inherent in those previous technologies, unlocking the full potential of solid-state media. Its benefits include higher input/output operations per second (IOPS), gobsmacking throughput, and greatly reduced latency. The specifications for these drives report them to be roughly 20 times faster than today’s traditional spinning HDDs (Hard Disk Drives).
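To put those spec-sheet numbers in perspective, here’s a quick back-of-the-envelope comparison. The throughput figures are assumed ballpark numbers for each class of drive, not benchmarks of any particular product:

```python
# Back-of-the-envelope read times for a 100 GB camera file.
# Throughput figures are assumed ballpark numbers for each class
# of drive, not benchmarks of any particular product.
HDD_MBPS = 150        # typical 7,200 RPM spinning disk
SATA_SSD_MBPS = 550   # roughly the ceiling of the SATA interface
NVME_MBPS = 3000      # common PCIe 3.0 x4 NVMe drive

def read_time_seconds(file_gb: float, throughput_mbps: float) -> float:
    """Seconds to read `file_gb` gigabytes at `throughput_mbps` megabytes/sec."""
    return (file_gb * 1000) / throughput_mbps

for name, mbps in [("HDD", HDD_MBPS), ("SATA SSD", SATA_SSD_MBPS), ("NVMe", NVME_MBPS)]:
    print(f"{name:>8}: {read_time_seconds(100, mbps) / 60:.1f} minutes")
```

With these assumed numbers, the same file that takes over ten minutes to read from a single spinning disk streams off an NVMe drive in well under a minute.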

Naturally, people are talking about NVMe, especially after IBC 2019 where Quantum and Western Digital, to name just two, showed off their solutions. It’s the hot new thing. But do you really need it?

If your media creation environment needs extremely low latency and fast access to large amounts of data, then NVMe should certainly be considered. A couple of examples would be if you’re working in a collaborative environment with multiple streams of uncompressed or lightly compressed 8K or 4K video in use per workstation, or if you are orchestrating an event with a large number of concurrent ingest or playout feeds, such as an eSports competition.

Additionally, VFX houses with large numbers of real-time or non-real-time renders might profit from every conceivable advantage that makes shots available as fast as possible. Some non-video workflows also need extremely low latency and extremely fast performance. If I were trying to master global finances via high-frequency trading, build the perfect human via genomic research, or profit from understanding the human condition via real-time big data analytics, you bet I’d want to build a fire-breathing NVMe monster.

In our corner of the industry, unsurprisingly, we are seeing organizations dealing with large amounts of data as quickly as possible interested in NVMe, such as big media conglomerates ingesting a lot of high-resolution media and networks acquiring shows at the highest resolution so they can future-proof content. These organizations may still be delivering primarily in HD but they are archiving 4K files for a time in the future when viewers may expect higher resolution as a matter of course.

Quantum, whose F-Series NVMe storage arrays received two industry awards during the IBC 2019 show, may be considered at the forefront of NVMe. Their F-Series storage, designed specifically for high-end video workflows, uses 24 NVMe drives per chassis and provides users with 25 gigabytes per second of aggregate bandwidth to work from simultaneously.

But outside this super high-end usage, most of our customers really don’t need a shared storage NVMe solution yet.

Most production and post environments currently don’t require extremely low latency or high IOPS because video playback is about large streams of sequential data. They’re ingesting and working with video on large, centralized, shared storage volumes built from dozens if not hundreds of hard disk drives, which allows them to retain petabytes of information with great performance. That’s still the best bang for the buck, and the best solution for most use cases right now, because compared to a petabyte of HDDs, a petabyte of NVMe is exponentially more expensive.
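To see why, here’s a rough cost sketch. The per-terabyte prices are illustrative assumptions (circa-2019 street prices), not quotes from any vendor, but the ratio tells the story:

```python
# Rough drive cost for one petabyte of raw capacity.
# Per-terabyte prices are illustrative assumptions (circa-2019
# street prices), not quotes from any vendor.
PRICE_PER_TB_HDD = 25     # USD, high-capacity enterprise HDD
PRICE_PER_TB_NVME = 250   # USD, enterprise NVMe SSD

def petabyte_cost(price_per_tb: float) -> float:
    """Drive cost in USD for 1 PB (1,000 TB) of raw capacity."""
    return price_per_tb * 1000

hdd_cost = petabyte_cost(PRICE_PER_TB_HDD)
nvme_cost = petabyte_cost(PRICE_PER_TB_NVME)
print(f"HDD: ${hdd_cost:,.0f}  NVMe: ${nvme_cost:,.0f}  "
      f"({nvme_cost / hdd_cost:.0f}x the price)")
```

Even at these rough numbers, an order-of-magnitude price gap per petabyte makes HDD-backed shared storage the sensible default for large sequential video workloads.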

That’s not to say that NVMe doesn’t have its place in your workflow today — it’s just a matter of the scale of adoption. People are moving towards solid-state technology wherever it is affordable to do so. For example, I wouldn’t buy a new workstation or laptop without one. On a smaller scale where NVMe’s reliability and performance truly shine, it can make all the difference in the world to a freelancer or editor working remotely from internal or direct-attached storage. In fact, desktop workstations and laptops equipped with NVMe storage outperform some of the SAN volumes we built for customers five years ago and certainly weigh hundreds of pounds less.

Additionally, a hybrid approach to shared storage — where some storage vendors provide a layer of NVMe cache on top of traditional HDDs — could be commonplace in the near future. This solution could provide the best of both worlds with the speed of NVMe drives and the capacity and price of HDDs, so long as the software controlling the data flows between the drives works transparently and seamlessly.

In conclusion, we believe that customers at the top end – like studios, networks, and VFX houses – will use large NVMe shared storage volumes first. Additionally, the single, independent editor can make NVMe the cornerstone of their business by buying a machine equipped with a 1-4TB NVMe drive or attaching this storage to their machine via Thunderbolt 3. This option is cost-effective and offers 10 times the speed and performance of previous configurations.

Mid-tier media entities, who make up the majority of content producers and asset owners and are currently working with compressed 4K or mezzanine HD video, might still find the greater capacity and solid performance of HDD storage solutions a better fiscal choice than a smaller-capacity NVMe storage solution with weapons-grade performance. You don’t need a supercar if a nice SUV will get the job done.

NVMe is in their future, too, as the industry inevitably moves to 4K, 8K, HDR, and whatever other astounding and immersive new technologies are waiting in the wings, ready to blow our collective minds. Right now, the decision to explore NVMe really boils down to IOPS and latency.

CHESA is ready to help if you are considering NVMe. As your creative IT resource, we know time is money. Let us help you find the right solution for your business.


Working From Home – A Cautionary Tale

A little over a year ago, I moved from my long-time home in the DC Metro area to Durham, North Carolina. This move was made possible largely by CHESA’s pre-existing geographically diverse client base, where most day-to-day work is done remotely. Going from working in a ‘half-and-half’ remote/on-site work style to an ‘almost entirely’ remote work style (with occasional on-site travel) has involved numerous adjustments – which I thankfully had the opportunity to make while not under threat from a global pandemic. Additionally, as a person who is now able to live alone, I definitely have an easier time of this than others. That said, I think my experience might still be helpful – even if only as reassurance. Or perhaps a cautionary tale.

The first thing I did was to set aside an area for work; the second bedroom in my apartment was perfect for this. Even if all you have is a “work desk,” I think it’s valuable to set aside an area that denotes “I’m in work mode now.” This left me without a desk for my personal computer setup, and its temporary location is an ergonomic nightmare. So I’m about to mess up this idea by replacing it with a huge L-shaped motorized standing desk. On the one hand, this will encourage me to alternate between sitting and standing throughout the day, which is hugely aspirational and makes me feel very much a part of the zeitgeist. On the other hand, the division between personal and work will now be “which side of the desk I’m facing”. Jury is still out on the efficacy of this plan—I’ll report back once I’ve gathered more data.

It’s also important to take breaks to walk around, eat, and get fresh air. I’m terrible at this, often getting absorbed in projects for hours and only realizing I’m hungry once I have a headache. And with pollen season in full swing, the uniform and undisturbed dusting on my balcony leaves little doubt about how often I step out to breathe in that fresh, North Carolina air. Don’t be like me—set a timer for breaks, an alarm for meals, or perhaps rig up an elaborate Rube Goldberg machine with a pointed stick or cattle prod if that’s what it takes.

A lot of folks have asserted that showering and getting dressed “for work” every morning is a good routine to maintain despite not attending the office. Anyone who claims they actually do this every day is either a liar or an alien wearing a human suit and should not be trusted. Treat yourself to some comfy micro-modal lounge pants, soft cotton T-shirts, and fleece-lined hoodies. In these dark times, we need comfort wherever we can find it. And you’re worth it. (Do try to shower every so often, though, especially if you live with other people—they’re also, presumably, worth it.)

If you have children who are now home 24/7—and especially if you’re trying to juggle them and work on your own—good luck! Do whatever it takes to keep them and yourself alive. Remember, only God can judge you.

In summary, this hungry, isolated, pajama-clad computer gremlin looks forward to answering your support ticket in the near future. Rest assured, I’ll work on it longer than I should, and I’ll be extremely comfortable while doing so—whether I’m standing or sitting. And in all seriousness, best of luck out there, stay safe, and let’s get through this mess together.


Business as Usual: More Important Than Ever, Even If It Takes Extra Planning

Coffee – check! Power cable plugged in – check! Favorite home-office chair – check!

Remote access to media – check???

While COVID-19 is testing the flexibility and preparedness of media teams around the world, enabling and empowering the remote employee has been a core tenet of production and media-IT pros for years. Media & Entertainment (M&E) has proudly stayed ahead of the curve for enabling field production partners, freelancers, and relocated employees to continue meeting critical deadlines. However, even the most tenured media-IT professionals are asking themselves, “Did we architect a system that will meet the demand?”

Whether you’ve been planning for such an eventuality or a surge in employee demand has increased remote-access requests, the end result will look roughly the same. But what does a well-designed media environment that can handle heavy demand look like? Setting aside the nuances of each unique environment, here is our checklist of the foundational elements to support your remote team members:

WAN: an obvious starting point; its importance cannot be overstated.

VPN: enhanced security, remote control, increased performance, reduced cost.

Remote Desktop Software: often goes hand in hand with a VPN and is vital to empowering employees to complete their specific tasks on time.

Media Asset Management (MAM) System: centralized, extensible, browser-based access, and fast. This is where the rubber meets the road for most teams. Users gain immediate access to the data they need in the resolutions they require. Many MAM systems enable workflows that include transcoding and access to new resolution options as well. This is the editor-turns-superhero utility belt (username and password required).

From here, with thoughtful attention to the design of your workflows, the machine should be humming along nicely. It’ll be easy to forget that what you’ve architected will fundamentally change the way your media teams operate moving forward, and that can be a scary proposition to embrace. As a fully remote team shifts from possibility to demand, what are the attendant benefits? There are headline answers: improved productivity, time and expense savings, and now, safety. But what is the benefit for the media production environment?

When consumer demand for content spikes, as it surely will in the coming weeks, or deadlines simply cannot be moved, the team is ready and equipped to deliver despite losing access to their office environment.

Production team leads can easily delegate or reassign projects to team members that can meet the deadline, regardless of location.

File size becomes a non-issue. MAM systems enable access to and manipulation of your media by way of smaller, friendlier proxy files. Simply said: proximity to the SAN or NAS is no longer a requirement.

Speed, efficiency, and adaptability. These are real business benefits; benefits that can be tied directly back to monetization and consumer satisfaction.
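The proxy point above can be made concrete with a quick sketch comparing transfer times for a full-resolution clip versus its proxy over a home connection. The bitrates and link speed are assumed, illustrative figures, not measurements from any real environment:

```python
# Transfer time for a 30-minute clip over a 100 Mbps home downlink:
# full-resolution mezzanine vs. a lightweight proxy. Bitrates and
# link speed are assumed, illustrative figures.
PRORES_HQ_MBPS = 220   # ProRes 422 HQ at 1080p, approx. megabits/sec
PROXY_MBPS = 10        # low-bitrate H.264 proxy, approx. megabits/sec
HOME_LINK_MBPS = 100   # assumed home connection

def download_minutes(clip_minutes: float, video_mbps: float, link_mbps: float) -> float:
    """Minutes to pull a clip of `clip_minutes` duration encoded at `video_mbps`."""
    size_megabits = video_mbps * clip_minutes * 60
    return size_megabits / link_mbps / 60

print(f"Full-res: {download_minutes(30, PRORES_HQ_MBPS, HOME_LINK_MBPS):.0f} min")
print(f"Proxy:    {download_minutes(30, PROXY_MBPS, HOME_LINK_MBPS):.0f} min")
```

With these assumptions, the proxy arrives in minutes while the full-resolution master would tie up the connection for the better part of an hour, which is exactly why proxy-based MAM access makes remote editing practical.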

Putting these pieces in place can take many thoughtful discussions, and should. Working from home is a practice in trust. Trust works best when it’s supported, encouraged, and allowed to operate as designed. Learning from your user base will give you direction for what enables the team to be most productive – and Chesapeake Systems can help you with the rest.


E-book: Practical Security Tips

Don’t become the next security breach headline. “Practical Security Tips” will show you how to stay safe.

In this e-book, Chesapeake Systems’ security expert, Terry Melton, takes you through key steps for keeping your systems, accounts, and digital assets secure.

Some of the most effective protections are the least exotic and most easily implemented options out there. Have you crossed these off your security checklist?

This e-book outlines:
• Basic security – what questions should you be asking?
• Authentication management
• Encryption
• Access and protection policies
• Antivirus and anti-malware protection
• Patching schedules
• Backup fundamentals


Planning Ahead for 2020: Digital Content Challenges for Post Production

Devising digital media workflow solutions with the future in mind is what we do here at Chesapeake Systems – diving deep into the latest products and technologies and thinking about their implications for the road ahead. Now that we are halfway through 2019, business planning for 2020 will soon be in full swing for many in post-production, and that means planning for the continuing increase in digital content.

A recent study by Cisco predicts that an unprecedented 82% of all Internet traffic will be video by 2020. Furthermore, as more streaming services launch – Disney+, Apple TV+, and more – it’s clear that content will continue to be produced at record levels. Even more pressing for networks, news outlets, and political influencers is coverage of the 2020 Presidential race, which promises to be unprecedented in its scope and detail, leveraging video content by any and all means possible.

What does this mean for post-production facilities, departments, or in-house post teams? Running out of space will happen faster. Accurately assessing the bandwidth you will need is vital to planning for the future. It’s important to have a clear understanding of your provider’s bandwidth offerings in both upload and download speeds. The ability to bring media back quickly has become a key factor in the technical post-production equation. Many providers tout high download speeds, but upload speeds must be comparable so that creatives and engineers working behind the scenes can move and deliver assets in a timely manner.

Along with the significant increase in both quantity and resolution of media being created comes the constant challenge of media management. Questions like whether storage should be cloud-based, on-premises, or both; how team members will locate files; and how render locations will be accessed have motivated some of the leading solutions providers in post to integrate media asset management, once not considered a significant part of the post deliverable equation. For example, Adobe has beefed up its platform with the expectation that MAM will now be core to every workflow.

Another trend we are seeing is investment in the editing process. Platforms like Blackmagic’s DaVinci Resolve are adding more tools beyond color grading, shifting things like simple VFX work onto the editor’s list of responsibilities. Tracking all the revisions made by the director, studio, VFX artists, and editors must become intuitive to the MAM process, or version control will not only disrupt the workflow but frustrate the people collaborating on a project, who are typically working from locations around the world. In the political realm, for example, a controlled and streamlined environment is key to enjoying all the benefits of a quick response to news events. Getting clicks wins the news cycle race, and that translates to viewers and dollars. But you can’t be first without understanding the analytics: how are your videos performing, and what impact is your content having on people? Integrating that analytical response into the workflow and MAM, alongside fast upload/download speeds, is essential for translating success into profit margins and, ultimately, having your voice rise above competitors.

Furthermore, once 5G is implemented, editing in the cloud will become even more prevalent, and interest in physical drives will fade except among those worried about security. Collaborators on a project won’t care whether the footage they are working on is cloud-based or not; as long as it can upload and download quickly, creatives will be happy. However, the fact that 5G will be easily accessible to the masses makes it a security concern: if people can access things more quickly, security teams have less time to react. Two minutes is an eternity if there is a breach. Implementing a solution without thorough attention to the proper permissions, copyrights, and licensing to thwart security risks is a recipe for disaster.

We are also experiencing growth in the prevalence and popularity of collaborative workflows and the platforms that cater to them. The evolving capability for creative teams to interact with the many members of the post team in real time is quickening the pace of finishing in the post-production process. Here at Chesapeake, we are currently developing ways to implement an instantaneous, collaborative process into highly flexible and functional workflows, which should be on everyone’s hotlist.

Advancements in Internet technology should also be taken into consideration when planning for 2020. IP-based approaches, as well as rapid enhancements in disk storage technology, have positively affected the performance levels of NAS devices. The speed and agility of NAS setups, once only available with a SAN, mean that media-rich color correction can now be done on a NAS, opening new doors for how post teams can operate. While this path will be a viable new option for some, post and IT teams need to understand the ideal environment for this type of move. Hurdles in security or scalability can be serious complications in making this transition successful.

In summary, there is a multitude of moving parts to consider as you evaluate your media workflow management needs for 2020 and beyond. We are thinking ahead, and the experts here at Chesapeake Systems are poised to make this evolution the most successful one you can imagine.


Cloud Storage for M&E: Don’t Let the Buzz Sting You

Considering the changes to the business of media and entertainment, and to the technologies associated with managing the digital media supply chain, it’s not surprising to see many of our clients wanting to take a fresh look at their video storage workflows based on the value of assets to the organization.

When the 2011 tsunami in Japan put videotape in short supply, companies were forced to put their media in a digital format on high-performance digital storage platforms, which, in turn, led to substantial investments in production storage. With the advent of new technologies and the cloud, there is now serious attention focused on ways to mitigate the need for expensive, unnecessary hardware and find more efficient and less costly ways of managing video.

Clearly, the market is full of storage options that promise to be everything to everyone, but these promises do not take into account the value of content to your company. For example, content that’s in use or that needs to be instantly accessible by your teams can be kept on high-availability production storage, while one-off media that’s ready for long-term archiving can be migrated to less expensive storage platforms.

While you may be aware of the different storage technologies available, and are probably finding business reasons to use them, it can be tough to understand and stay abreast of the value proposition that each technology offers your business. It’s tempting to want a one-size-fits-all solution – after all, that’s what production storage has been for all these years.

However, using a single storage technology to serve multiple workflows is, to put it simply, no longer best practice.

Here is where our conversation starts – by talking about what your company does and how you do it. Do you create and push content on a daily basis? Does content need to be instantly accessible? If so, keeping media on production storage might be the best solution. Or does your content consist mainly of one-offs or programs to which you have limited rights? If that’s the case, you’ll want to move it to a secondary tier of storage – you don’t need to invest in the kind of easy access production storage offers.

Which solutions are right for you? You’ve probably heard about object storage, SATA and SAS drives, Flash and the cloud. It’s easy to gravitate to the latest buzzword or the industry’s “next big thing,” but that may not be what you really need.

Cloud vendors, in particular, have done a very good job of marketing themselves, and the cloud's appeal is understandable. Storage requires significant resources: you need a data center, the power to run and cool it, and IT and broadcast engineers to keep the system up and running. If you work in New York, Los Angeles or London, real estate is at a premium. The idea of having storage off-site, in the cloud, is awfully inviting to corporate officers looking to cut costs.

The cloud is also advertised as costing less than a penny per gigabyte to store media, but if you need to push and pull content from the cloud frequently, it gets very expensive, very quickly. Many suffer sticker shock over the cost of recalling large, high-bit-rate video files from the cloud, and for that reason the cloud is still not truly considered a viable active storage tier.

On the other hand, for situations like disaster recovery, cloud storage is often ideal. When disaster strikes your headquarters, you’ll be grateful for the geographical distance between your primary site and your data center.

Middleware may also be a key part of your solution. Typically part of an overall media asset management solution, middleware helps you utilize different technologies within your ecosystem. You simply set certain policies, and middleware acts like a traffic cop, triggering different storage workflows based on your data policies. For example, if content on production storage hasn't been touched within six months, it's tiered off automatically to nearline disk storage, tape or the cloud. If content needs to be recalled for production, it's pulled back into object storage or a less expensive pool of storage behind the production server.
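As a sketch of that "traffic cop" idea, and not any particular vendor's API, a tiering policy can be as simple as comparing an asset's last-modified date against a threshold (the six-month rule above):

```python
from datetime import datetime, timedelta

def choose_tier(last_modified, now, threshold_days=180):
    """Toy middleware policy: content untouched for roughly six months
    on production storage is tiered off to cheaper nearline storage."""
    if now - last_modified < timedelta(days=threshold_days):
        return "production"
    return "nearline"

# An asset touched last week stays on production storage;
# one untouched for a year is tiered off.
now = datetime(2021, 6, 1)
print(choose_tier(datetime(2021, 5, 25), now))   # production
print(choose_tier(datetime(2020, 6, 1), now))    # nearline
```

Real middleware evaluates policies like this continuously across the whole asset catalog and then triggers the actual file movement between tiers.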

As storage options grow, the question of what's best for your media-centric business or workflow can seem overwhelming. It's Chesapeake's goal to help you discover the technologies that best fit your workflows, and to show you how to utilize them in your daily operations as well as in response to your short- and long-term storage needs. We're here to help you navigate the maze of options on the market and choose the ones that meet your business goals.


An Exploration of Object Storage for Media-Centric Workflows

Object storage offers a new paradigm. As a data storage system, it is something that can be installed on site, but it is also the basis for most of the storage available on the public cloud. However, its use as a valuable technology in M&E — both for active workflow storage and long-term asset preservation — is less understood. This tutorial will explain why it is so useful, how it works, the problems it solves, and how it differs from other approaches.

Like many things in IT, the way data is stored, especially in a shared storage environment, can be thought of as a “solution stack.” At a base level, data is stored on storage devices, such as hard drives and flash drives, as blocks of data. Every individual file gets broken up into several blocks of data with each block being a particular number of bytes. These data blocks are mapped to regions of the data storage device, such as sectors on a hard drive. These mappings are stored in a file system, which is a database of these mappings, along with metadata about the files, such as access rights, creation and modification dates, etc. The file system is layered onto the raw storage device when a drive is formatted.
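You can see a slice of the metadata a file system keeps, on any platform, from a scripting language. A quick Python illustration:

```python
import os
import tempfile
import time

# Create a small file, then inspect the metadata the file system
# records for it alongside the block mappings.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"some media essence bytes")
    path = f.name

info = os.stat(path)
print(info.st_size)               # size in bytes
print(oct(info.st_mode & 0o777))  # access rights (permission bits)
print(time.ctime(info.st_mtime))  # last modification date
os.remove(path)
```

Everything printed here lives in the file system's database, not in the file's data blocks themselves.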

File systems are organized into hierarchies of directories, or folders, and within these directories are files. They are certainly useful for organization, and many users and workgroups come up with elaborate hierarchies with strong naming conventions for directories and files. We have the means of sharing these file systems out to workgroups of users, server systems and associated software platforms, via SANs and file servers.

But there is something rigid about hierarchies. Perhaps there is a better paradigm than the traditional file system.

Blocks, Files, and Objects, Oh My!

Object storage answers this by borrowing the notion of a data object from areas such as object-oriented programming and databases. So, what is a data object? It is the data — any data but probably a file or other captured data stream — referred to by an arbitrary set of attributes, usually expressed as a number of key-value pairs. “File name” would be a key, and the name of the file would be the value for that key. “File creation date” and “file modification date” would be other keys, with their own values.

What object storage gives you that traditional file systems do not is the ability to create your own sets of key-value pairs to associate with the data objects you are storing, integrated more or less any way you please through a software application interface. Think of the key-value metadata pairs you may have come up with for different classes of assets stored in a MAM database. You can come up with whatever you want, and the pairs are not inherently hierarchically arranged. This means you could have a search engine application integrate with the metadata index of an object storage system, running customized queries that bring back a list of every object matching a particular set of criteria.
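To make that concrete, here is a toy sketch (not a real object storage API) of a metadata index of arbitrary key-value pairs, and a query that pulls back every object matching a set of criteria:

```python
# Each stored object carries an arbitrary set of key-value pairs;
# the keys here (codec, campaign) are invented for illustration.
index = [
    {"object_id": "a1", "file_name": "promo.mov",
     "codec": "ProRes", "campaign": "spring"},
    {"object_id": "b2", "file_name": "interview.mxf",
     "codec": "XDCAM", "campaign": "spring"},
    {"object_id": "c3", "file_name": "logo.png",
     "codec": None, "campaign": "evergreen"},
]

def query(index, **criteria):
    """Return the IDs of objects whose metadata matches every criterion."""
    return [o["object_id"] for o in index
            if all(o.get(k) == v for k, v in criteria.items())]

print(query(index, campaign="spring"))                  # ['a1', 'b2']
print(query(index, campaign="spring", codec="ProRes"))  # ['a1']
```

Note that nothing here is hierarchical: there are no directories, just objects and the attributes you chose to hang on them.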

It is an exceptionally flexible way to organize things on an object store, which might mean it is not really structured at all. You may find it more useful to keep an object's searchable metadata in a totally separate place, like your MAM database. What the MAM and the object store both need to track is the file's main object ID, which the object storage system assigns to the files stored on it. This ID is what a MAM or other software application passes to the object store, via a GET API call for example, in order to pull the file back to a file system for processing or use.
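A minimal in-memory sketch of that ID-based contract between a MAM and an object store follows; real systems expose the same put/get shape over HTTP APIs such as S3's, but this toy class just illustrates the interface:

```python
import uuid

class ToyObjectStore:
    """In-memory stand-in for an object store: PUT returns an ID,
    GET takes that ID back. No paths, no directories."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        object_id = str(uuid.uuid4())   # the store assigns the ID
        self._objects[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        return self._objects[object_id]

store = ToyObjectStore()
oid = store.put(b"essence of a media file")
# The MAM records `oid` in its database; later it passes the ID
# back in a GET call to retrieve the file for processing.
print(store.get(oid) == b"essence of a media file")   # True
```

The key design point is that the caller never addresses data by path, only by the opaque ID the store handed back at ingest.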

Workflow Considerations

Many media software applications today cannot modify data in place on an object store via that system's APIs, because they are written to utilize file systems on local storage or shared via SAN and NAS file-sharing technologies. Your NLE cannot really edit off object storage. Your transcoder cannot really decode from and encode to your object store. Not via the native APIs, anyway. However, many object storage systems do offer file-sharing protocol "front ends" in order to support applications or users that need that interface to the data today. These work decently, but tend not to match the performance of traditional shared file systems for media processing workloads.

Where this is starting to change is in the public cloud space. Some providers, like Amazon Web Services, also offer media-specific services such as transcoding. These cloud providers are built around files being stored on their object platforms, like S3, so they have been motivated to adapt and build media tool sets that can work with data on an object store. These capabilities will likely, over time, "trickle down" to on-premises deployment models.

For on-premises object storage deployments, the object storage platform is usually the second tier of a two-tier data storage setup: SAN or NAS for processing, and object for longer-term storage. Even smaller object stores can compete, performance-wise, with very large tape library installations. Tape may still be useful as part of a disaster recovery (DR) strategy, but object storage seems set to supplant it for many so-called "active archive" applications, that is, archives storing data that is actually utilized on a regular basis.

Preservation at Scale

Another strength of many object storage platforms is how well they scale. Many can grow to tens and hundreds of petabytes of capacity per “namespace,” or unified object storage system. Many traditional shared file system technologies fall apart at this scale, but object truly is “cloud-scale.” We generate a lot of data in media these days, and by the looks of things, data rates and storage requirements are only going to keep increasing with 4K, 8K, HDR, volumetric video, 360-degree video, etc.

But what’s especially exciting about object for those of us in media is that it’s not just storage at scale, but storage with exceptional preservation qualities for the underlying data. Data integrity is usually talked about in terms of “durability,” and is referred to as X number of nines — much like data availability which, unlike durability, speaks more to the accessibility of data at a given moment. Durability for object storage is achieved through a few mechanisms and they result in systems that make it very likely you will never experience any data ever going “bad” or being lost due to bit-flipping, drive loss, or other failures, even when storing many petabytes of data.

The first way this is achieved is through erasure coding algorithms. Similar to RAID, they generate extra data based on all file data that lands on the system. Unlike the generation of RAID parity data in RAID 5 and 6, however, erasure coding does not require costly specialized controllers; rather, it uses the CPU of the host server to do the calculations.

Erasure coding can also tolerate more drive loss per disk set than RAID 6, which can lose only two of its constituent drives; when a third fails before a rebuild completes, all data on the RAID set is lost. As such, it is imperative to limit the total number of disks when creating a traditional RAID set, balancing the total storage space desired against the risk of multiple disk failures causing data loss. Erasure coding algorithms are much more flexible: you might assign six out of 18 drives in a set for parity, so one third is unavailable for actual storage, but any six drives per set can be lost without data loss. Other ratios, which affect overall system data durability and storage efficiency, can also be selected.
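The mechanics can be illustrated with the simplest possible erasure code, a single XOR parity shard. (Real object stores use Reed-Solomon-style codes that tolerate multiple simultaneous losses, such as the 12-data/6-parity ratio described above; this sketch only survives one.)

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three data shards plus one XOR parity shard computed across them.
shards = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor_bytes, shards)

# Lose any single data shard: XOR of the survivors plus the parity
# shard rebuilds the missing bytes exactly.
lost = shards[1]
survivors = [shards[0], shards[2], parity]
rebuilt = reduce(xor_bytes, survivors)
print(rebuilt == lost)   # True
```

Reed-Solomon generalizes this idea so that k data shards plus m parity shards survive the loss of any m shards, which is where ratios like 12+6 come from.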

Another mechanism some object storage systems use to achieve durability, even in the case of entire site failures, is the ability to build erasure-coded disk sets across subsystems housed in multiple geographic locations. There are often fairly stringent networking requirements between sites, but it is reassuring to be able to have, for instance, a single object store spread across three locations, erasure coded so that all data survives even if one of the three locations is wiped off the map, all while still achieving the storage efficiency of a single erasure code. If that is a larger geographic footprint than you want, a direct one-to-one replica is often also available between systems in different locations; this really means two separate object stores performing a full replication with one another. There are whispers of two-site erasure coding becoming an option in some future systems, so things may improve down the road from a storage efficiency perspective for two-site setups.

Finally, as far as preservation technologies go, some object stores feature ongoing data integrity checks, via checksum (or hash value comparison) techniques. Hashing algorithms generate a unique value, based on the original set of bits in a file. If the hash is run again at a later time, and the hash value generated is the same as the original, you know that all of the bits in the file are identical to the originals. If the hash value changes, however, you know at least one bit in the file has changed — and that is enough to consider a file ruined in most circumstances.
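The hashing step itself looks like this, sketched with SHA-256 (object stores vary in which algorithm they use):

```python
import hashlib

original = b"a run of bits from some media file"
fingerprint = hashlib.sha256(original).hexdigest()  # stored alongside the object

# A later scrub re-hashes and compares. An identical digest means
# every bit matches the original.
print(hashlib.sha256(original).hexdigest() == fingerprint)    # True

# Flip a single bit and the digest no longer matches: corruption detected.
corrupted = bytes([original[0] ^ 0x01]) + original[1:]
print(hashlib.sha256(corrupted).hexdigest() == fingerprint)   # False
```

A single flipped bit, out of millions, is enough to change the digest completely, which is what makes this a reliable corruption detector.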

Thankfully, because redundant data is stored using the previously described erasure coding techniques, object stores that offer this kind of feature can figure out which versions of a file's bits are uncorrupted and repair the corrupted bits from those copies. This operation can be set up to run periodically over the entire data set, with checksums performed every six or twelve months, for example. Considered in tandem with overall bit-flipping probabilities, this leads to a very stable system. Some data tape systems do offer similar checksum features, but they are much slower to complete due to the inherent latencies and bandwidth limitations of the data tape format.

So Really, in a Nutshell

Object storage is metadata-friendly, and thus extremely flexible when it comes to organizing and discovering data. It is easy to integrate with applications like MAMs and media supply chain platforms. It offers quick access to data and can scale to extremely large capacities while protecting against data loss from mechanical or subsystem failure, data corruption, or even site loss. It is not wedded to hard drive technology: you can, if you want, build object stores out of flash drives (we do not advise this, but it is possible). You can own it and host it yourself, lease it via the cloud and a broadband connection, or in some cases create hybrid systems of these approaches. And it is not subject to the risks of RAID when deployed at scale.

I think a common model for many media-centric companies will be a single-site object store relied on for daily use, with a copy of all data also placed in a very low-cost public cloud storage tier as a disaster recovery backup. Because day-to-day work uses the copy on your own system, egress and recovery fees for the public cloud tier stay minimal, except in an actual disaster scenario.

There is finally something new under the sun in data storage that offers real value. Object storage is available for everyone, and we look forward to seeing how we can use it to build smarter, more efficient systems for our clients over the coming years.


The Cloud: Inherently, Unpredictably Transformative

At Chesapeake Systems, we’ve been moving “the edges” of our clients’ infrastructure to the cloud for some time now. Some use cloud storage to collect assets from the field, which are then fed into the central management platform (often still found on-premises). Others use the cloud to host a review and approve web application, which might tie into a post-production workflow. The cloud is obviously used for delivery to both partners and the public at large, and all we have to do is look to YouTube to see how much of a shakeup to traditional M&E that has caused.

This makes the cloud transformative. It is more than "someone else's server," even though on one level it is that. But I believe that technologies as fundamental as the cloud are often inherently, unpredictably transformative: it is difficult to imagine the kinds of shakeups they foretell. This notion is exemplified by the famously controversial invasion of West Coast cities (and beyond, including Baltimore) by electric scooters.

For those catching up, Uber-like services have been renting electric scooters for short, "last-mile" trips in major American cities. In classic "disrupt-at-all-costs" style, companies like Bird Rides and Lime dropped their rentable scooters off in metro-area test markets by the hundreds (and maybe even thousands), without any permitting whatsoever. Santa Monica, California, embraced them full on. These scooters are EVERYWHERE! Ditched on sidewalks. Ridden down streets and bike lanes. I can only compare it to such "future shock" moments as watching people play Pokémon GO in NYC the week it was first released. Essentially, one day there really wasn't anyone tooling around on electric scooters, other than maybe as an occasional novelty, and then BAM! Scooters, scooters everywhere, scooting to and fro.

What was the confluence of factors that aligned the stars for rentable electric scooters to accelerate from minor influence to “THEY’RE EVERYWHERE!” practically overnight?

It's simple: the scooters are easy to use. The consumer downloads the service's app, which shows the location of all rentable scooters nearby and their battery charge levels. Thanks to built-in GPS and cellular networking, scanning the scooter's unique QR code with the iOS or Android app "unlocks" the scooter, with payment kicking in via credit card, Apple Pay, etc. The scooter is not literally locked to anything, but it will not function until you pay for it with the app, which is connected, of course, to your identity (you even have to scan your driver's license when setting up an account). These services are affordable. And when you're done, you end your ride in the app, which locks the scooter, and you put it... wherever. The scooters are gathered up every night, charged, and redistributed around the city by non-employee contractors, akin to how Uber or Lyft contracts automobile drivers.

With lithium-ion battery technology reaching certain performance and pricing levels, GPS and cellular data tech expanding, high smartphone ownership (over 75% in the U.S.), easy mobile payment processing, and QR code technology, scooters went from zero to near ubiquity overnight.

But not without the cloud. What does the cloud have to do with electric scooters? The cloud brings to a startup operation the ability to weave the aforementioned technologies and services into a cohesive system without having to make a major technology infrastructure investment. And for a widely-distributed system, it makes the most sense to put the IT backbone in the cloud. It’s scalable and can easily talk to a fleet of thousands and thousands of mobile devices that happen to also be modes of transportation.

I would submit that without the cloud there would be less of a – or even nonexistent – rentable electric scooter craze. It’s a major supporting piece of the puzzle.

Similarly, that is what the cloud is doing to the media technology space.

Now we can begin to plan how to put more of the "guts" of a client's infrastructure in the cloud; soon enough, on-premises systems will exist only where it makes sense to house them, at the edges of a deployment. Maybe your MAM's database, viewing proxies, and application stack will be next to go up to the cloud. Maybe the cloud will house your disaster-recovery data set.

It’s even fairly easy to imagine more of high-performance post-production taking place without any significant on-premises infrastructure beyond a workstation. Or will that, too, become a virtualized cloud system? We can already do this, in fact, in a way that works for some workflows.

What’s further out? Here’s just one scenario:

In five years, significantly more of the software application and platform stack that we all rely on today will be "containerized," and thus ripe for cloud and hybrid-cloud deployments in a much more sophisticated way than is currently done (in our M&E world, at least; other industries already do this). Software containers tend to use a technology called Docker. You can think of a Docker container almost like a VM, except that it does not bundle a full operating system; it holds just one piece of the overall software stack (a "microservice") and any software dependencies that piece has.

Management platforms, such as the popular Kubernetes (from Google), allow one to manage the containers that make up a software platform, even auto-scaling these microservices as needed on a microservice-by-microservice basis. Say the transcoder element of your solution needs to scale up to meet a short-term spike in demand? Kubernetes can spin up more container instances of the transcoder microservice your solution relies on. The same goes for a database that needs to scale, or workflow processing nodes, and so on.
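The core of Kubernetes' Horizontal Pod Autoscaler, for instance, reduces to one proportional formula, sketched here in Python (the real controller layers tolerances, cooldown windows, and configured min/max replica bounds on top of this):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """HPA-style scaling decision:
    desired = ceil(currentReplicas * currentMetric / targetMetric)."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# 3 transcoder pods averaging 90% CPU against a 60% target: scale to 5.
print(desired_replicas(3, 90, 60))   # 5
# Demand drops to 20% average CPU: scale back down to 1.
print(desired_replicas(3, 20, 60))   # 1
```

The same proportional logic applies whether the metric is CPU, queue depth, or a custom measure like pending transcode jobs.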

All of this is basically a complicated way of engineering an infrastructure that on a fine-grained basis can automatically spin up (and likewise, spin down) just the necessary portions of a unified system, incurring additional costs just at the moments those services are needed. This is, as we know, essentially the opposite of a major on-premises capital buildout project as we currently envision it.

What I described above, by itself, is going to be extremely disruptive to our industry. That’s not to say it’s a bad thing, but it will significantly impact which vendors we work with, which ones are with us a decade from now, how we fund projects, what type of projects can even be “built,” who’s building them, etc.

The notion of a “pop-up broadcaster” with significantly greater capabilities than today’s OTT-only players becomes possible. Want to see if a major broadcasting operation can be sustainable? Rent some studio space and production gear, and essentially the rest of the operation can be leased, short term, and scaled in any direction you’d like.

Many, many organizations do the above every single day. Facebook does this, as do Google/YouTube, Amazon, etc., just to handle traffic loads on their websites. In fact, you don't build a mass-scale contemporary website without the approaches described above.

What will be more interesting than the above, or pretty much anything else we can think of today? What will be more challenging? It'll be our "scooter" moments. It'll be the confluence of cloud technologies and many others that leads innovators to ideas and permutations we can't yet anticipate. One day we'll be doing things in a way we couldn't even begin to predict. One day, seemingly out of nowhere … the "scooters" will be everywhere.

To learn more about what AI can do for you, contact Chesapeake at